A recent tweet from a customer sparked an interesting discussion earlier this week.
@chuckhollis What decision making factors do you think are most important for a client deciding between mid-tier and enterprise storage?
I struggled to provide a meaningful answer in 140 characters or less, and probably didn't do a good job.
In EMC's case, the reference is to VMAX (usually described as enterprise storage) and VNX (usually described as mid-tier). Please keep in mind that the category descriptors are imprecise and often vigorously debated and spun by vendors -- so that's not my goal here.
So let me try again ... this time, with a bit more room to stretch out.
Why This Can Be Hard
Imagine you're sort of new to all of this inner-circle secret-society storage stuff, and you've got to make a logical decision between two storage arrays, both from EMC: the VMAX and the VNX.
Both are built on Intel technology. Both have powerful functionality, like FAST. Both replicate, have high availability, support lots of drives, many ports, etc. To many, they look more alike -- how do you choose?
You might be tempted to say -- well, all things being equal -- simply choose the less expensive of the two. And I know that -- right or wrong -- that gets done each and every day.
But you have to ask yourself -- why would EMC go to all the trouble to develop, sell and support two entirely different block storage arrays unless there were some substantial differences between the two?
So, let's look at a handful of meaningful differences that might help people decide.
Obvious Technical Differences
The VNX uses a traditional dual-controller design with modest amounts of storage cache. As with all dual-controller designs, the second controller is in there largely for redundancy purposes -- it keeps storage accessible in the event of a controller failure, although with roughly half the aggregate controller performance. Perfectly acceptable for some people, less-than-desirable for others.
By comparison, the VMAX uses multiple controllers (actually, storage directors and engines) which act as a single uniform scale-out complex, usually with much more nonvolatile storage cache. One benefit of this setup is that a controller failure dings application performance by a comparatively modest amount.
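The arithmetic behind that difference is simple enough to sketch. Here's a toy model (my own illustrative numbers and assumptions, not EMC specs) of how much aggregate performance survives a single controller failure, assuming load spreads evenly across the remaining controllers:

```python
# Toy model: fraction of aggregate controller performance that remains
# after one controller fails. Illustrative assumptions, not EMC specs.

def performance_after_failure(total_controllers: int) -> float:
    """Remaining fraction of aggregate performance with one controller
    down, assuming load spreads evenly across the survivors."""
    return (total_controllers - 1) / total_controllers

# Dual-controller design (VNX-style): lose one of two controllers.
dual = performance_after_failure(2)       # half the performance remains

# Scale-out design (VMAX-style) with, say, 8 directors: lose one of eight.
scale_out = performance_after_failure(8)  # a comparatively modest ding

print(f"dual-controller: {dual:.0%} of performance remains")
print(f"8-way scale-out: {scale_out:.1%} of performance remains")
```

The "8 directors" figure is just an assumption for the example -- the point is that the more controllers sharing the load, the smaller the hit from losing any one of them.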
The large, nonvolatile storage caches on the VMAX used to be a big differentiator (great for soaking up transient workload bursts), but now that we have FAST Cache on VNX, that particular advantage is a bit less pronounced.
And, of course, there are connectivity differences: the VNX has integrated NAS, while the VMAX requires a separate gateway. The VMAX supports mainframes, iSeries and a few other less-common host types; the VNX doesn't. And, of course, some of the software functionality is very different: TimeFinder, SRDF, etc.
But, just to get to the essence of the discussion, let's assume that their feature sets were roughly equivalent. And, just for grins, let's assume that pricing was pretty much the same.
How would you choose?
A Thought Experiment
Imagine a set of eight demanding storage workloads you need to support. Just for argument's sake, imagine option "A" would be to purchase four VNX arrays to support the requirement (two workloads per array), and option "B" would be to purchase a single VMAX (all eight workloads on the same array). And let's assume the pricing is identical, just to keep it simple.
What would be the key differences?
The VMAX approach would create a single pool of resources: capacity, compute, cache, ports, replication, etc. If one of the workloads got bursty, it could grab more resources, for example. Capacity would be pooled as well. Every workload would have the benefit of improved performance headroom, and less degradation in the event of a controller failure. There'd be a single point of control and management.
Compare that with the VNX approach: four individual pools of resources. Capacity, compute, cache and ports wouldn't be shared as a single resource pool. Not to mention four times as many individual points of management and control. One might argue that having everything isolated from each other might be desirable to some.
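One quick way to see the pooling difference is to model a burst. With four isolated arrays, a bursting workload can only draw on its own array's spare capacity; with one shared pool, it can draw on everyone's idle headroom. A minimal sketch, with entirely made-up performance units:

```python
# Toy model of burst headroom: eight workloads, each steadily consuming
# 10 units of performance. All numbers are illustrative assumptions.

steady_demand = 10            # per-workload steady-state demand
array_capacity = 30           # per-array capacity (2 workloads per array)
pooled_capacity = 4 * 30      # single shared pool, same total capacity

# Option A: four isolated arrays, two workloads each.
# A bursting workload sees only its own array's spare capacity.
isolated_headroom = array_capacity - 2 * steady_demand

# Option B: one shared pool holding all eight workloads.
# A bursting workload sees the whole pool's spare capacity.
pooled_headroom = pooled_capacity - 8 * steady_demand

print(f"isolated headroom: {isolated_headroom} units")  # 10 units
print(f"pooled headroom:   {pooled_headroom} units")    # 40 units
```

Same total hardware in both options -- the difference is purely whether the unused capacity is fragmented across four boxes or aggregated where any one workload can reach it.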
But let's move on a bit ...
Another Thought Experiment
Now, imagine two IT organizations supporting those imaginary eight workloads.
The first IT group is organized to deliver "storage as a service". They've got a storage service catalog and a streamlined workflow. Their processes are relatively mature with regards to things like change control, doing non-disruptive modifications, and the like. Metrics abound everywhere -- IT is run largely like a business.
The second IT group is in a different space. All IT is largely project-focused. Storage resources are grouped around the applications and servers they support. Most IT funding is owned by disparate business functions who don't see much logic to sharing amongst themselves. And any "shared services" model is largely around a pool of people who do things, vs. processes that deliver a service catalog.
Which IT organization would be more comfortable with a VMAX?
Which one would be more comfortable with multiple VNX arrays?
I'm not being critical or judgmental here -- just acknowledging that IT organizations come in different sizes and flavors.
The Bottom Line
When asked to describe the differences between two things, we -- as technologists -- tend to immediately go to the technical differences and ignore all else. Surprising, I know, but it happens.
What's more interesting -- and ultimately useful -- is to describe the use cases for each. Not only the application characteristics, but how the IT shop is organized to get work done.
Because, ultimately, I think that determines the "fit" of any particular technology much more than the technology itself.