ContainerX Continues to Evolve at Lightning Speed

Virtually Speaking

ContainerX exploded onto the container management scene late last year with the beta release of its highly ambitious multi-tenant management platform. At the time, the company told this column that it wanted its product to become the vSphere of containers.

So was this all hubris? What has ContainerX been up to since then?

It turns out that the company is very much alive and well, and its product is evolving at lightning speed.

It’s been through no fewer than six beta releases over the last half year, and on June 16th the ContainerX management platform came out of beta and into General Availability.

What’s more, the company announced a free version of the platform for up to 20 logical cores that you can download and put through its paces.

That’s in addition to a Gold version — aimed at SMEs — which starts at $25,000 per year, and a pricier Platinum one for larger enterprises and service providers available from $70,000 per year. (In case you’re wondering, the main difference between Gold and Platinum is that the Platinum version includes multi-tenancy chargebacks and a license that allows the platform to be used across customer sites.)

A quick refresher on ContainerX’s platform: It supports the Docker container format and uses Docker Swarm for clustering.

Add in Docker’s libnetwork networking code and ContainerX’s own proprietary storage plugin, and you have the basis for a container management system.

Initially there were plans to support CoreOS’s rkt container runtime as well as Docker’s, but this looks increasingly unlikely to come to fruition. “Only one company asked for rkt support, so we have no plans to support it at the moment,” says Kiran Kamity, the company’s CEO.

But the smart part of ContainerX’s platform comes from what the company calls Elastic Clusters and Container Pools.

These allow a flexible pool of compute resources to be carved into separate container pools (think multi-tenancy), each given resource limits expressed as a proportion of the compute pool’s total CPU and memory.

These container pools can also be given differing priorities, so that high-priority pools can claim extra resources to deal with demand spikes at the expense of lower-priority ones.
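ContainerX’s scheduler is proprietary and its internals aren’t public, but the proportional-share-plus-priority model described above can be sketched conceptually. The pool names, share values, and priority weights below are illustrative assumptions, not anything from the product:

```python
# Conceptual sketch of proportional container-pool allocation with
# priorities. Each pool declares a share (its guaranteed proportion of
# the compute pool) and a priority weight; leftover headroom is handed
# out in proportion to priority, so higher-priority pools absorb
# demand spikes first.

def allocate(total_cpus, pools):
    """Split a compute pool's CPUs across container pools."""
    # Baseline: each pool gets its declared proportion of the total.
    alloc = {name: total_cpus * p["share"] for name, p in pools.items()}
    # Headroom left after the guaranteed shares are handed out.
    headroom = total_cpus - sum(alloc.values())
    # Distribute headroom weighted by priority.
    total_weight = sum(p["priority"] for p in pools.values())
    for name, p in pools.items():
        alloc[name] += headroom * p["priority"] / total_weight
    return alloc

# Hypothetical tenants: "prod" is guaranteed half the cluster and has
# three times the spike priority of "test".
pools = {
    "prod": {"share": 0.5, "priority": 3},
    "test": {"share": 0.3, "priority": 1},
}
print(allocate(20, pools))  # prod ends up with 13 CPUs, test with 7
```

The point of the model is that guarantees and burst behavior are decoupled: a pool’s share caps what it is promised, while its priority decides how much of any slack it wins during contention.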

While all of this is still broadly the same as it was at the start of the year, three months ago the company added integration with VMware’s vSphere server virtualization platform, according to Kamity. “That means you can go to the ContainerX control panel and create a VMware compute cluster,” he says.

ContainerX’s software can crawl through vCenter and let you pick where to create the VMs and what templates to use to create them, he explains. Integration with OpenStack and Hyper-V is on the roadmap.

So is ContainerX for you? Kamity says the first container management systems were roll-your-own affairs cooked up by end-user companies themselves using the likes of Mesos, Docker and Kubernetes. The next wave was built by giants like AWS and Microsoft Azure for their own use.

And then newer container platforms like CoreOS Tectonic and Rancher were built for Linux with single-tenant customers in mind.

What makes ContainerX different, Kamity says, is that it’s been designed for large enterprises that need multi-tenancy and chargebacks, and who may well be Microsoft and VMware shops.

“They don’t want to deal with container management themselves — they want to buy it from a vendor,” he says.

That means typical customers will be large companies with Windows workloads to containerize, service providers who require multi-tenancy, and more generally companies looking for a turnkey container management solution, he believes.

2016 is shaping up to be the year that many innovative new container management platforms go GA — ContainerX included. But it probably won’t be until 2017 that we find out which ones sink and which ones swim.

And for 2018? That’s likely when we’ll see which ones get gobbled up by the bigger fish in the ocean.


[Source: Serverwatch]