It was great to read Wikibon's first take on a new market segment -- "server SANs" -- written by my colleague Stu Miniman (@stu).
Of all the various industry analyses available, the Wikibon content regularly does it for me, at least when it comes to storage. Not to mention the attractiveness of their open, collaborative model. While I don't always agree with everything they publish, they consistently play a valuable role in our community.
While this particular article makes a great start, I found myself thinking "but what about ... and what about ... and ...".
Time for a blog post ...
Different Paths To Server SAN
There are multiple paths that get us to Wikibon's notion of a "server SAN". One path is the simple desire for a distinct alternative to the other familiar models: external storage, simple DAS, hyperscale, cloud, etc. To be clear, though, server SAN is positioned as an alternative to external storage arrays; it isn't a good fit for cloud, hyperscale and the like.
So, according to Wikibon, most enterprise storage users now have an alternative that's becoming worth evaluating: newer server SAN solutions, or more familiar external storage arrays. Pretty straightforward.
But there's another path that also gets you to server SANs, and that's the whole software-defined discussion. Whether you start the conversation at SDDC (software-defined data center) or perhaps SDS (software-defined storage), there's a chain of thought that gets you to mostly the same place, albeit with a very different set of criteria.
And I believe that these distinct and disparate motivations will further segment this space, beyond the initial market map that's presented here.
Let's Talk About Control Planes ...
For most smaller and/or tactical storage deployments, there's not much interest in how storage is managed and orchestrated. The requirement is usually straightforward: give me a simple storage interface that allows me to provision, manage, report, troubleshoot, etc. In these more modest environments, you aren't interacting with storage very often, and when you do, it tends to be simple, event-driven tasks.
Simply re-creating what we've come to expect with external SAN/NAS should be sufficient.
But the perspective changes dramatically as we start considering larger, more complex and more demanding environments.
Not only do you have more storage, you have more stakeholders who interact with storage at some level: service delivery managers, server admins, database admins, app owners and developers, network people, business continuity people, security folk, finance, and more.
And, oh yes, the storage team.
All of a sudden, control planes become a very important topic, especially in the context of IT-as-a-service and cloud operational models. Indeed, the larger and more sophisticated the environment, the more important management becomes -- in all its wonderful flavors.
Physical storage arrays have done a good job -- up to now -- of providing capable control planes, and of working towards integration with different control points. But there's no getting around the fact that it's an after-the-fact integration. Today's external storage simply wasn't conceived as an integral component alongside compute and networking.
Could server SANs do better? Perhaps, depending.
If server SAN ends up being nothing more than a recreation of the familiar external storage model -- except now using internal server DAS -- we'll have mostly the same situation we have today, at least when it comes to management and the control plane.
We're just putting familiar stuff in a new container.
But if we can converge server SAN software functionality with compute and network, we'll have the basis for an entirely new and more powerful model.
My rather forward-looking assertion is simple: to the degree that infrastructure software functionality can be converged, the stronger foundation we'll have for building the required integrated control planes. And that convergence will ideally occur in the hypervisor -- something I've written about recently.
Yes, I'm making an argument for VMware's VSAN.
And Data Services ...
Today, the majority of familiar data services (replication, tiering, encryption, dedupe, snaps, caching, compliance, metadata, etc.) are embedded as part of the external storage array. You buy an array; here is what it does. Indeed, that's one of the big differentiators between ostensibly similar-looking storage products once you get beyond mundane speeds and feeds.
One path for server SANs is to simply recreate this embedded data services model, but just do it in server software instead of external hardware. And, once again, for more modest environments, that might be the desirable approach -- everything packaged together.
But if we turn our focus to larger, more complex and more demanding environments, there's a strong motivation to provide software-based data services independently of the storage target. For example, snaps and replication that work the same regardless of your backend. Or a standardized encryption approach. Or a standard caching mechanism.
Customers will inevitably want to mix-and-match their approaches to data services -- there's clearly room in the market for both. But as Wikibon's view of server SAN evolves, there ought to be a clear segmentation between products that provide embedded data services, and those that provide data services independently of the back-end data store.
And, once again, the hypervisor emerges as a logical place to provide those data services, as it sits neatly between application and infrastructure.
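To make the distinction concrete, here's a minimal sketch of what "data services independent of the storage target" could look like in software. Everything here is hypothetical and illustrative -- these class and method names are not any vendor's actual API -- but it shows the key idea: snapshot and replication logic written once, working identically over any interchangeable backend.

```python
# Hypothetical sketch: data services implemented once in a software layer,
# applied uniformly over interchangeable backends. All names are illustrative.
from abc import ABC, abstractmethod


class Backend(ABC):
    """Any data store the service layer can read/write volumes on."""

    @abstractmethod
    def read(self, volume: str) -> bytes: ...

    @abstractmethod
    def write(self, volume: str, data: bytes) -> None: ...


class LocalDiskBackend(Backend):
    """Stand-in for server DAS; could just as well be an external array."""

    def __init__(self):
        self.volumes = {}

    def read(self, volume):
        return self.volumes.get(volume, b"")

    def write(self, volume, data):
        self.volumes[volume] = data


class DataServices:
    """Backend-independent services: the same snapshot/replicate behavior
    regardless of which Backend sits underneath."""

    def __init__(self, backend: Backend):
        self.backend = backend
        self.snapshots = {}

    def snapshot(self, volume: str, name: str) -> None:
        self.snapshots[(volume, name)] = self.backend.read(volume)

    def restore(self, volume: str, name: str) -> None:
        self.backend.write(volume, self.snapshots[(volume, name)])

    def replicate(self, volume: str, target: Backend) -> None:
        target.write(volume, self.backend.read(volume))


primary = LocalDiskBackend()
svc = DataServices(primary)
primary.write("vol1", b"v1 data")
svc.snapshot("vol1", "before-upgrade")
primary.write("vol1", b"corrupted")
svc.restore("vol1", "before-upgrade")
print(primary.read("vol1"))  # b'v1 data'
```

Swap in a different `Backend` and the snapshot and replication semantics don't change -- which is exactly the segmentation argument: embedded data services tie behavior to one array, while an independent layer makes it uniform.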
Interacting With Compute And Network
Even if we narrow our focus to storage in isolation, it interacts substantially with compute and network. Storage nodes consume CPU and memory, which dramatically impacts performance; storage nodes also need to communicate over a network, again affecting performance and availability. Storage functionality consumes from the same resource pool as applications and other IT functions.
Having to hard-partition, separately configure and separately manage the resources associated with storage infrastructure certainly takes away from the idealized picture of a liquid pool of dynamic infrastructure resources that intelligently interact. I think this aspect of convergence will further segment Wikibon's proposed server SAN category over time.
But that's just a narrow storage-specific view.
For example, when I want to provision infrastructure for a new application, I want to provision all the resources in one go, applying policies that reflect business priorities. I certainly want the software to take care of all the messy interactions and dependencies across the different domains. And when there's a problem, I want the software-based infrastructure to inform me, and tell me what to do about it -- if it can't solve it by itself.
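As a rough illustration of what "provision all the resources in one go, applying policies" could mean in practice, here's a hedged sketch. Nothing here is a real product API -- the `Policy` fields and the `provision` function are invented for illustration -- but it shows one call deriving compute, network and storage allocations consistently from a single business-priority policy, rather than the operator configuring each domain by hand.

```python
# Illustrative sketch (not a real API): provisioning compute, network and
# storage together from a single policy, so the software handles the
# cross-domain wiring instead of the operator.
from dataclasses import dataclass


@dataclass
class Policy:
    tier: str        # business priority, e.g. "gold" or "bronze"
    replicas: int    # storage copies to keep
    encrypted: bool  # encrypt data at rest


def provision(app: str, policy: Policy) -> dict:
    """One call allocates all three resource domains consistently."""
    vcpus = 8 if policy.tier == "gold" else 2
    return {
        "compute": {"app": app, "vcpus": vcpus},
        "network": {"app": app, "qos": policy.tier},
        "storage": {"app": app, "replicas": policy.replicas,
                    "encrypted": policy.encrypted},
    }


stack = provision("billing", Policy(tier="gold", replicas=3, encrypted=True))
print(stack["storage"])  # {'app': 'billing', 'replicas': 3, 'encrypted': True}
```

The point isn't the toy logic; it's that the policy, not the operator, drives every domain -- which is only possible when storage, compute and network are converged under one control plane.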
And that's the real power of software-based infrastructure convergence.
Stepping Back A Bit
I've been criticized before as perhaps living a bit too much in the future. That's all nice, and it makes sense, but what about today?
I'd probably boil it down like this: a new category of storage is emerging, and it uses servers and software to accomplish what could once only be done using an external array.
Some will look at it as a tactical replacement for what they're doing today: faster, better, cheaper, etc. All good.
But others will look beyond the isolated storage discussion, and see the breathtaking potential for software-based convergence with other disciplines.
Thanks to Stu and the Wikibon team for a nice piece of work :)