All those zettabytes of ones and zeros need to live somewhere. If they are to be of any value, they must be stored, protected and managed. The more information we produce, consume — and depend on — the more storage matters.
At the same time, it appears that software is eating our world: extending the power of human intellect in ways that continually surprise us — now often powered by the avalanche of information we are creating about ourselves and the world around us.
In particular, software is transforming how we think about data centers: the technologies and operating principles that enable us to produce, consume and act on information quickly and efficiently.
Software is inevitably changing core data center technologies — compute, network and storage — both individually and how they work together.
I believe this is what makes software-defined storage an interesting and relevant question for IT architects: how can we use software to become far better at storing, protecting and managing information?
These people are thinking about the future, and what it might bring.
The Expanding Digital Universe
It’s not hard to come to the conclusion that — yes — we’re all generating and consuming more information. But what may not be as obvious is where it’s all coming from — and how we will be expected to harness it.
The recent EMC/IDC Digital Universe report offers an excellent overview, giving good insights into the shape of the information world to come: dramatically more information from sensors and mobile devices, the growing need to extract analytical insights in near-real-time, as well as the obligation to secure and protect it from new threats.
Business leaders recognize the shift, and are starting to think differently.
An organization’s information base is now a critical asset on the balance sheet; the basis for new services, improved efficiency as well as competitive differentiation. In the information age, data matters.
The vast majority of this enterprise information wealth will inevitably be the responsibility of enterprise IT groups. While you can certainly outsource capacity, you can’t outsource accountability.
Challenges In The Here And Now
But we don’t have to look to an emerging world to make the case for relevance; there are serious concerns in today’s pragmatic IT world. An awful lot is spent on storing, protecting and managing information, which drives a continual search for new and innovative ways to get more value from every IT dollar.
I pay $4 for a cup of coffee at Starbucks; the cost of the raw coffee beans is but pennies.
Indeed, to get an accurate lifecycle view of storage costs, we need to think less as technologists and more as manufacturers and retailers: considering labor, overhead, efficiency, distribution, consumption — and rapidly responding to changing demands.
The ultimate value of software-defined storage is greatly magnified when taking this broader view. And as information volumes continue to double and double and double again, more IT organizations will shift their view from the cost of the ingredients to the value of the service delivered.
“Software-Defined” — In One Word
What makes something software defined?
Definitions vary, but I have mine: composability — the ability to dynamically compose a service, on demand, and under programmatic control. If it helps, think of an automated factory line that builds each item to spec: quickly, accurately and efficiently.
Why is composability so important? Three reasons: efficiency, effectiveness and responsiveness.
Without composability, the catalog of services is largely static, and must be forecast and provisioned far in advance. I call this “have a hunch, provision a bunch”. Since coming up short is not a good thing, over-provisioning becomes the norm. And that’s not efficient.
I wear a size 10.5 shoe. If the store only has size 10 or 11, I have some hard choices to make.
Composability ensures that the service delivered is effective for the requirement at hand.
Without composability, a service will likely be static in a world of application requirements that are inherently dynamic: more/less performance, more/less protection, more/less capacity — services need to be responsive to the needs at hand.
We’ve already seen how composability can be effective in the world of server virtualization — or software-defined compute if you prefer. Here is what I need to compose for this new application: virtual CPUs, memory, operating system images, application code, high availability, etc. As I’m doing this, I don’t have to think much about what flavor of hardware I’m using, or even its current configuration.
The same inevitable trend has now begun in the network world via software-defined networking. Here is what I need to compose for this new network: topology, protection, services, bandwidth, latency, resiliency, security, etc.
As I’m doing this, I don’t have to think much about what vendor’s hardware is underneath it all.
Once again: far more efficient, effective and responsive.
We want the same from storage.
We want to be able to think of storage services as composable. Here is an application requirement. Here is what I would like to compose in terms of capacity, performance, protection, location, security, etc. I can do so efficiently, effectively — and respond quickly to any changes in circumstance.
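To make that a bit more concrete, here’s a minimal sketch of what stating such a request programmatically might look like. Everything in it is hypothetical: the StorageRequest fields and the compose_volume call are illustrations of the idea, not any particular vendor’s API.

```python
# Hypothetical illustration only: none of these names refer to a real
# product or API.

from dataclasses import dataclass


@dataclass
class StorageRequest:
    """An application's desired storage service, stated as policy."""
    capacity_gb: int      # how much space the application needs
    min_iops: int         # performance floor
    protection: str       # e.g. "replicated-3x" or "erasure-coded"
    location: str         # placement constraint, e.g. "rack-aware"
    encrypted: bool       # security requirement


def compose_volume(request: StorageRequest) -> str:
    """Stand-in for a control plane that would translate the stated policy
    into an actual volume, built from whatever resources are available."""
    print(f"Composing volume to spec: {request}")
    return "vol-0001"  # placeholder identifier


# The application owner (or an orchestrator) states what is needed,
# not how or where it should be built.
vol_id = compose_volume(StorageRequest(
    capacity_gb=500,
    min_iops=5000,
    protection="replicated-3x",
    location="rack-aware",
    encrypted=True,
))
```

The point of the sketch is the shape of the interaction: the consumer describes the service, and the platform figures out how to assemble it.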
Can we do this with today’s storage technologies? Perhaps. But there’s a clear opportunity to do things much, much better.
Other, Less Satisfying Definitions Of SDS
Just for completeness, I thought I’d share my thinking as to why I've discarded the many alternate definitions floating around for software-defined storage.
One popular definition for SDS is anything with an open API: REST or similar. I see that more as a necessary condition vs. a defining attribute. If I’m going to be dynamically composing storage services, of course I want to do that via an API.
However, the reverse isn’t true: having an API doesn’t mean I can dynamically compose storage services with it.
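As a rough illustration of the distinction (these function signatures are made up, not any real product’s API), contrast an API that only manipulates pre-built storage objects with one that composes a service from stated requirements:

```python
# Made-up function signatures, purely to illustrate the distinction;
# neither group reflects any real product's API.

# An API alone: it exposes and manipulates storage objects that already
# exist. Useful, but the catalog of services is still fixed in advance.
def list_luns(array_id: str) -> list[str]:
    ...

def map_lun_to_host(lun_id: str, host_id: str) -> None:
    ...

# A composable API: the caller states the characteristics of the service,
# and the platform assembles it on demand.
def compose_storage_service(capacity_gb: int, min_iops: int,
                            protection: str, encrypted: bool) -> str:
    ...
```

Both are APIs; only the second gives you composability.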
Another definition you’ll hear is that software-defined storage has to be entirely software-based, running on standard server hardware. While that might be a desirable attribute for some reasons, it doesn’t speak to what that storage actually does, nor does it speak to how it’s controlled.
As an example, I could have a very dumb and uncommunicative storage stack running on bog-standard hardware; it would satisfy this definition, yet it wouldn’t be software-defined in any meaningful sense. Indeed, you’ll find that many of the current crop of “software-only” storage stacks lack this key ability to programmatically compose dynamic storage services.
Just because you’re software doesn’t mean you’re software-defined!
An interesting side-effect is that this “composability via API” definition can include traditional external storage arrays, provided they can compose a variety of storage services programmatically. One might argue that a software-only approach can do this better than a hardware-based one, but that’s entirely subjective, and not a defining attribute.
Software-Defined Storage As Part Of The Whole: SDDC
We must not lose sight that each software-defined technology is part of a greater whole. The prize at hand is the software-defined data center: an architectural pattern where all services are dynamically composable, both individually and collectively.
If we don’t keep this larger view firmly in mind, we can end up simply re-engineering existing technology silos. Put differently, any consideration of SDS implies new, converged operational models that span traditional boundaries.
It’s Not Going To Be Quick, And It’s Not Going To Be Easy
It took many years for server virtualization to be accepted as the norm in modern data centers. Part of this was technological maturity; a far greater challenge was the inherently slow pace of technology adoption and resulting organizational changes.
The way things were done had to be fundamentally re-engineered in order to fully exploit the benefits of server virtualization. And if you’ve been around ten years or more, you clearly remember how we used to do things.
I don’t spend much time in the networking world, but, from all appearances, that journey has clearly started. You can identify progressive organizations that recognize the benefits of software-defined networking and are motivated to move quickly. You can also see another tranche of slightly more conservative organizations that are studying and evaluating the technology, and are likely to move in that direction before long.
And, of course, some folks who aren’t interested in the least :)
I would expect to see the same pattern when it comes to software-defined storage.
My goal here is to be helpful to those progressive architects who “get” that storage is a fundamentally important technology in their worlds, and who appreciate that the underlying architectural models have begun to change.
These people are motivated to capitalize on what’s now becoming possible — and do so as quickly as possible.
Next, we’ll start our tour through a software-defined storage model.