OK, today's a busy day in storage land -- EMC has some pretty big news.
But -- somewhere in all the speeds and feeds, architectural discussions and weighing in -- we in the storage world have an entirely new model to consider around storage virtualization.
Rather than take the extreme view that "this new model rocks, and all others are no longer worthy", I would say that the storage virtualization model embedded in the V-Max architecture is considerably different from those that have come before, and -- on that basis alone -- it's worthy of discussion.
Storage Virtualization Basics
If I could pick one storage technology that's been tortured to death over the last decade or so, it'd be storage virtualization.
Before we dive into different architectural approaches, it'd be useful to have a quick chat about the different use cases people envision for this technology.
One key theme is "pooling" -- make all my disparate storage look like a single, giant pool of capacity to improve utilization -- provision from anywhere, to anywhere.
Another key theme is "migration" -- give me a tool that helps me move from old to new, or between service levels, or across arrays -- and do so with a minimum of effort and disruption.
Another key theme is "management" -- give me an abstraction that makes my disparate storage arrays easier to understand, configure, provision and manage.
Another key theme is "functionality" -- put needed functionality at the virtualization layer (e.g. replication), and I won't have to put it in all these different storage devices.
And, finally, there's a strong theme of "asset re-use" -- take this stranded, aged stuff that's on my floor (thanks to byzantine financial asset treatment), and let me use it for at least part of my needs.
Is it any wonder that storage virtualization -- to this very day -- is so poorly understood across the industry? People bring so many different priorities to the table when discussing the topic.
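To make the first two themes concrete, here's a minimal sketch of what a virtualization layer does for "pooling" and "migration". All the names here (`BackendArray`, `VirtualPool`, and so on) are hypothetical illustrations, not any vendor's actual API -- the point is just that hosts see one aggregate pool and a stable LUN identity, while the layer decides (and can change) physical placement underneath.

```python
class BackendArray:
    """One physical array with a fixed amount of free capacity (in GB)."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb
        self.luns = {}  # lun_id -> size_gb

class VirtualPool:
    """Pools several arrays behind one namespace: provision anywhere, migrate underneath."""
    def __init__(self, arrays):
        self.arrays = {a.name: a for a in arrays}

    def total_free_gb(self):
        # "Pooling": hosts see one aggregate number, not per-array silos.
        return sum(a.free_gb for a in self.arrays.values())

    def provision(self, lun_id, size_gb):
        # Place the new LUN on whichever array has the most free space.
        target = max(self.arrays.values(), key=lambda a: a.free_gb)
        if target.free_gb < size_gb:
            raise RuntimeError("pool exhausted")
        target.free_gb -= size_gb
        target.luns[lun_id] = size_gb
        return target.name

    def migrate(self, lun_id, dest_name):
        # "Migration": move a LUN between arrays; the host-visible lun_id never changes.
        src = next(a for a in self.arrays.values() if lun_id in a.luns)
        dest = self.arrays[dest_name]
        size = src.luns[lun_id]
        if dest.free_gb < size:
            raise RuntimeError("destination full")
        del src.luns[lun_id]
        src.free_gb += size
        dest.free_gb -= size
        dest.luns[lun_id] = size
```

So a host provisions against the pool without caring which array answers, and an admin can later move that LUN to a different array without the host noticing. Where that layer of indirection lives -- server, fabric, or array -- is exactly the architectural question below.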
And You Have Choices
As I mentioned in my previous post, take any arbitrary piece of storage functionality, and you can put it in one of three places: at the server, in the fabric, or in the array. This is true for replication, encryption, backup -- and storage virtualization.
Most of the storage virtualization discussion so far has been about different approaches to putting it in the storage network. For example, file virtualization can be done very elegantly in the network (think EMC Rainfinity), but that's not the only way it could be done.
And today, we've got three different approaches to putting storage virtualization in the network: (1) use a server appliance (e.g. IBM SVC), (2) use an array controller (e.g. HDS USP), or (3) use an intelligent switch (e.g. EMC Invista).
Rather than argue the pros and cons of each here, what I'd like to point out is that now -- for the first time -- we can consider an entirely new model where storage virtualization is an inherent property of the array architecture, and not just an alternative way to use the box.
Storage Virtualization and V-Max
If you understand the architecture of V-Max, you'll notice some interesting things when it comes to thinking about storage virtualization.
Pooling -- a V-Max architecture is a single, giant pool of intelligently tiered storage. It all looks like a single array if you choose.
Migration -- within a V-Max complex, moving storage between physical arrays is either fully automated or pretty easy to do -- the logical model is simply "it's a single array". Now, this doesn't address how you get stuff *into* a V-Max, but -- once it's there -- migrations are covered at least as well as -- if not better than -- alternative virtualization approaches.
Management -- it manages like a single giant array, period. All the "ease of stuff" EMC has been working on for the past few years applies here as well. Plus, anything you're using with a Symmetrix today to manage storage just works. Hard to beat that.
Functionality -- one of the criticisms of legacy storage virtualization approaches is that you couldn't easily use any of the underlying array functionality -- you were forced to accept what the virtualization device offered as your baseline. With this model, not only do you have access to all of Symmetrix functionality (arguably some of the best in the industry), but there's no "functionality integration" issue.
Asset Re-use -- no, you can't plug old legacy storage into the back of the V-Max. But it's fair to point out that V-Max is designed to run with a mix of older and newer storage building blocks as part of the same storage controller complex.
So, one could argue persuasively that -- yes -- V-Max creates a new paradigm for thinking about storage virtualization. Yes, there's a new set of pros and cons to consider (and I am quite sure they will be hotly debated), but there's no arguing that there's a new model in town that's very much unlike the legacy ones.
But there are some problems avoided as well -- and we should talk about those.
Typical Problems Avoided With This New Approach
One serious problem in these multi-vendor lashups is the whole ball of wax around error propagation, error management, customer support and, ultimately, customer satisfaction.
At a tactical level, the problem can be as simple as this: an underlying older array that's being virtualized starts throwing errors, and the virtualization device doesn't see them. One thing leads to another, and before long you've got some cranky users.
Going up the stack, the customer support provisions from many of these legacy virtualization vendors can be a bit -- thin. Certainly not the end-to-end support that many enterprises desire. Some shops knew what they were getting into when they went down this path; others got a nasty surprise.
Now, compare that with the V-Max approach. Error detection and management is Symmetrix-class -- as is customer support.
Another serious problem avoided is scaling limitations -- both capacity and performance.
Put a decent-sized large enterprise scenario in front of any of the legacy virtualization vendors -- factoring in performance, availability, replication, DR, etc. -- and you'll end up with some pretty interesting configurations. We're talking lots and lots of storage virtualization plumbing.
Not an issue with the V-Max architecture, once you fully understand it.
And -- finally -- let's not forget that all this storage virtualization plumbing is not free. There's hardware, software, design, implementation and support -- all above and beyond whatever storage capacity you're envisioning.
Well, with V-Max, all of that just -- well -- goes away.
It's Going To Take Some Time
It's going to take a while to understand just how different V-Max is.
Heck, most people still don't understand what Atmos does yet :-)
That being said, I think many aspects of the storage architecture discussion will be changing as appreciation grows for what we've got here.
And -- as part of that -- I think it's inevitable that the storage virtualization discussion will evolve as well as a result of what V-Max now brings to the table.
Let the debate begin!