Scale Computing – HC3 Product Launch

Scale Computing… the company with a name that never really made sense… until today. You see, Scale Computing began in 2007 as a storage company. Their most recent product lines, the M Series and S Series, utilize GPFS (licensed from IBM) to provide a clustered filesystem. Need more space… just add a new node and let the cluster work its magic. Combine the simplicity of the clustered filesystem with a highly usable web-based management utility, and you get a simple and powerful storage system.

But… isn’t that storage? Not really compute? Perhaps the “Computing” was foreshadowing of the most recent Scale Computing product release: the HC3 hyperconvergence system.

HC3 represents a significant change to the direction and focus of the company. Scale is utilizing IP and existing products in the storage realm and enhancing the storage experience with server virtualization functionality.

Let’s dig into some of the nitty gritty technical details we all want to see:

Technical low-down

The virtualization functionality is provided via the highly powerful and versatile KVM hypervisor. The use of KVM in a system like this always raises an eyebrow or two. More often than not, KVM is relegated to the “Linux/UNIX geek” realm. Yet the hypervisor is highly functional, feature rich, and has a very solid following in the Open Source community, and new functionality, enhancements, and maintenance are constantly ongoing. Plus, with KVM in the OpenStack development line, the hypervisor is just the beginning of where the Scale Computing layer could go in the future. Additionally, as a consumer of Open Source software in its solution, Scale contributes back to the community: Scale has slightly modified KVM to allow for better caching and has released that code back to the community for consumption elsewhere.

As mentioned above, Scale is building HC3 on their existing GPFS-based filesystem. KVM is a natural hypervisor selection because it operates at the same level in the OS as the filesystem… just another service installed in the OS.
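
Scale’s own management tooling is not documented here, but the “just another service” point is easy to demonstrate with the standard libvirt Python bindings. This is my own generic illustration rather than anything Scale ships: talking to the local KVM instance is no different from querying any other daemon on the box.

```python
# Minimal sketch (generic KVM/libvirt, not Scale's HC3 tooling): the KVM
# hypervisor is reachable through libvirtd, a service running alongside the
# filesystem and everything else in the OS.
import libvirt

# Read-only connection to the local KVM/QEMU instance.
conn = libvirt.openReadOnly('qemu:///system')

# List every VM the local node knows about, running or not.
for dom in conn.listAllDomains():
    state = 'running' if dom.isActive() else 'stopped'
    print(f"{dom.name()}: {state}")

conn.close()
```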

Much of the expected virtual machine management functionality is present in the HC3 product (a quick KVM-level sketch of one of these primitives, live migration, follows the list):

  • Live Migration
  • Resource Balancing
  • Thin Provisioning
  • VM Failover/Restart On Node Failure
  • Storage Replication
  • etc…
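
HC3 drives all of this through its own web interface, so the snippet below is not how an HC3 administrator would actually work; it is only a reminder that the underlying KVM primitive, in this case a live migration via the libvirt Python bindings, is a single call. The node and VM names are hypothetical.

```python
# Illustrative only: a KVM live migration through the libvirt Python bindings.
# HC3 wraps this in its own management layer; the hostnames and VM name here
# are made up for the example.
import libvirt

src = libvirt.open('qemu:///system')                        # node the VM runs on
dst = libvirt.open('qemu+ssh://node2.example.com/system')   # destination node

dom = src.lookupByName('app-server-01')

# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied across.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

dst.close()
src.close()
```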

As far as hardware is concerned, the HC3 product is built on the same hardware as the M Series storage line. You can see more details here: http://scalecomputing.com/files/documentation/series_datasheet_1.pdf. Heck, existing Scale Computing customers can install a firmware upgrade on the M Series hardware to get HC3 functionality… gratis too.

Management of the environment, like the storage, is handled in a clustered fashion. Connecting to any of the node management IP addresses gives you control of the entire cluster, so there is no need for a centralized controller for management services. The management connection is handled by any HTML5-compliant browser. I guess this means an iPhone browser could be used (although I question the usefulness of such a small form factor… but it should work nonetheless).
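
Scale has not published API details in this post, so treat the following as a purely hypothetical sketch of the idea rather than a documented interface: because every node can answer for the whole cluster, a script (or a browser) can simply point at any management IP. The endpoint path and response fields below are invented for illustration.

```python
# Purely hypothetical sketch: every HC3 node serves the full cluster view, so a
# monitoring script could target any management IP. The endpoint path and JSON
# fields are invented for illustration and are NOT a documented HC3 API.
import requests

NODE_IPS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # any node's management IP will do

for ip in NODE_IPS:
    try:
        resp = requests.get(f"https://{ip}/api/cluster/status",  # hypothetical endpoint
                            verify=False, timeout=5)
        resp.raise_for_status()
        print(f"Cluster status via {ip}: {resp.json()}")
        break  # one node answered on behalf of the entire cluster
    except requests.RequestException:
        continue  # that node is unreachable; try the next management IP
```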

Once logged into the management interface, you can easily manage a number of components: Virtualization, Storage, General, etc… If you have used the interface before, the only major difference between the pre-HC3 and post-HC3 interfaces is the addition of the Virtualization option. The interface is deliberately simple, with very few options to wade through, and navigation is logical and easy to use.

Target Market

Scale Computing has made a very conscious decision to focus on the SMB/SME markets. These markets tend to employ a small number of IT personnel with limited specialist knowledge and high expectations placed upon them. For the business, a product that performs its role, is easy to use, and provides high levels of availability is extremely desirable.

Scale has identified the market and designed HC3 to reflect what their customers want:

  • Easy administration
  • Few configuration options
  • Expandability
  • Small form factor (1U servers)
  • Support for common operating systems
  • Resiliency
  • Performance

A solution like HC3 fits really well for the SMB/SME markets. If the business does not have a need for a highly customizable environment, HC3 may be exactly what they are looking for. Leveraging HC3 may result in IT administrators having extra time on their hands to work on other projects.

What makes Scale Computing HC3 different?

One of the most significant differentiators for the HC3 product is its starting point. Beginning with a solid storage foundation and then layering on a virtualization plan is a different path to a converged infrastructure product, but it makes sense for THIS company. Scale has developed a solid storage product and made a hypervisor choice that complements existing efforts while providing the functionality customers are looking for.

The ability to scale compute and storage resources is becoming an expectation and the norm in virtual environments. HC3 allows for two major ways to scale:

  1. Addition of a new HC3 node – This adds additional footprint for executing virtual machine workloads… plus additional storage.
  2. Addition of a new M Series node – This adds additional storage without the compute functionality.

Scaling by adding an M Series node, while possible, just does not make sense to me at this time. Adding compute resources (and storage along with them) via a new HC3 node holds so much more potential benefit to a consumer that I find it hard to believe the storage-only route would see much use. But, for what it is worth, it is an option.

Simple is better. For the target market, removal of complexity results in a predictable and, hopefully, low-touch compute environment. There is less of a need to have deep knowledge just to keep the compute environment functional. Scale has made a number of configuration decisions behind the scenes to reduce the load on customers.

What is missing?

With all the hotness that HC3 provides, a number of notable features are missing from this release:

  • Solid reporting – Aside from some “sparkline”-esque performance graphs (on a per-VM basis), the ability to look back at historical statistics for any number of reasons (e.g., troubleshooting performance issues) just is not there. For the target market, this may be an acceptable risk. I do not necessarily agree, though.
  • Per-VM snapshotting – At this time, snapshot functionality is achieved by snapshotting the entire filesystem, not individual VMs.
  • Application-consistent snapshots – The snapshots of the volumes are crash consistent: in the event a VM is restored from a snapshot, it is in a state that mimics a sudden loss of power… the server has crashed. So, reliance on OS and application recovery is necessary, and it is probably a good idea to have backups. Pausing the VM prior to taking a snapshot, if possible in your environment, would help with stability (a rough sketch of that idea follows this list)… but that is a stretch.
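
Since HC3 does not expose per-VM snapshot controls in this release, the sketch below is only a generic KVM-level illustration of the pause-first idea, using the libvirt Python bindings; the VM name is made up and the snapshot step is a placeholder for whatever mechanism your environment provides.

```python
# Generic "pause before snapshot" illustration at the KVM/libvirt level.
# This is not an HC3 API: HC3 snapshots the filesystem as a whole, and the
# VM name below is hypothetical.
import libvirt

def take_storage_snapshot():
    """Placeholder for whatever snapshot mechanism your environment provides
    (in HC3's case, the filesystem-level snapshot)."""
    pass

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('db-server-01')

dom.suspend()                 # pause the guest so in-flight writes settle
try:
    take_storage_snapshot()   # take the snapshot while the guest is quiesced
finally:
    dom.resume()              # always resume, even if the snapshot step fails

conn.close()
```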

Virtual Bill’s Take

I absolutely love the focus on the SMB/SME markets. They need some TLC, for sure. By focusing on creating a utility virtualization device, the IT resources in the business can focus on moving the business forward rather than messing with complicated details. Howard Marks made a comment during a briefing from Scale: “Do you care if your car has rack-and-pinion steering? I just want to get in and drive.” This solution addresses the need to get in and drive. I can see it being very appealing to quite a number of companies out there.

Scale Computing is an up-and-comer in the converged infrastructure game going on now. Converged infrastructure is making a play in the corporate IT ecosystem that challenges traditional thinking… and that is always a good thing. Selection of KVM as the hypervisor is logical, but it is going to be a hurdle to overcome. KVM just does not have the household recognition of the other hypervisor vendors, so getting support from many directions is going to be challenging. But, after speaking with Scale Computing, I believe they are up for the challenge.

If they play their cards right, HC3 could help usher in a flood of new customers to Scale and a change in how SMB/SMEs operate server environments.

End User Computing Paradigm Change…

My friends… we are standing at the beginning of a new user computing paradigm. For the majority of us (aka – technical people interacting with users in a Corporate environment), we have seen, time and time again, shifts in how users interact with their data.

Originally, people used punch cards to interface with their computers. The concept of a personal computer was outside the realm of thought for users. How is it possible when a computer takes up an entire room? Right?! That was followed by a terminal sitting on someone’s desk; the user was remotely controlling the computer… centralized computing. Next up was the PC on the desk. Computers were powerful enough that they could perform work on the data locally and distribute the load across the myriad nodes in the network. Up until about a year ago, that was the preferred model.

However, that has changed and we are staring the new paradigm in the face. This new paradigm is being ushered in by the increasing dominance of virtualization technologies (VMware being the largest player). Specifically, we are looking at centralized computing with access from anywhere.

Datacenter computing performance has shot through the roof. Due to the increased cost of equipment, larger amounts of data, and expected performance SLAs, the Corporate environment is moving toward keeping data centralized in high-speed data center environments.

But the question becomes how to present the data to ensure performance, reliability, and security. Keeping the desktop PC becomes questionable. In order to meet the SLAs, high-cost WAN connections, WAN compression/optimizers, local caches, etc… become necessary. Keeping the massive amounts of data generated on a daily basis on local LANs becomes more and more difficult, and data backup, DR, etc… become issues. Additionally, local PCs are very rarely backed up, yet much of the user’s work is saved locally (ex: My Documents or the Desktop).

Virtualization has been a major shift in how resources are utilized, and it has manifested as abstraction. Computing resources have been abstracted such that they are not tied to a specific machine. Windows, Linux, Solaris, FreeBSD, etc… are no longer tied to a specific hardware platform; the operating systems can “float” from one environment to another… and multiple operating systems can run on a single hardware platform at any given time (see Virtualization 101).

The same abstraction concept has now been applied at the application level. Applications are no longer tied to a specific operating system; they can be moved from one OS to another freely. Plus, any dependencies are included with the applications, and application conflicts are seriously reduced.

This abstraction allows Corporate IT departments to become very creative with how user environments are provisioned. By hosting the desktops in the data center, there is no longer a tie to what is on the user’s physical desk. By abstracting the applications, there is no need to have a specific workstation image for a user group. A generic workstation can be deployed in the data center that can run the applications the USER needs to run… regardless of who that user is.

The technology exists to change end user computing. Once the IT departments of the world have identified the path they would like to travel down, the largest speed bump to contend with is user acceptance of the environment. For inexplicable reasons, users are attached to the workstation on their desk; something about its presence there is comforting. “Abstracting” is the key to the end user computing paradigm change. However, abstracting the end user away from physical hardware can be difficult. Instead, focus on that attachment to the workstation and provide concrete proof of why moving to the abstracted (aka – “better”) environment will benefit them.