Storage -- as a technology and a discipline -- is ripe for a complete re-thinking in 2010. Many of the necessary ingredients are already in the market, more are coming.
For those of us who've been way too close to this storage stuff for way too long, it's nothing less than a complete re-learning of everything we've ever known.
As we start 2010, I'm doing my best to lay out what I believe will be the "big ideas" in storage going forward. Since I work for a vendor, you know the rules -- I can't pre-announce new products or technologies.
But what I *can* do is lay out the thinking behind the roadmap -- and that's what I'm doing here.
New Ideas Require New Terms
Yes, I know I'll be accused of buzzword marketing, but there's a serious side to all of this: if we're working in the realm of relatively new ideas, we'll need relatively new terms to describe these concepts.
In that spirit, the "capstone" term I'm using to explain these changes is virtual storage: the complete abstraction of logical from physical. Just as we've seen virtual servers change our thinking around computing, the belief is that virtual storage will change the way we think about spinning disks.
If you haven't read this post, now might be a good time.
Underneath that broad heading, I want to break the discussion into supporting concepts and technologies that make virtual storage a tangible and implementable strategy.
The first supporting concept you may already know about -- it's FAST, or Fully Automated Storage Tiering. Much like a VMware cluster can dynamically react to changing workloads and optimize resources, FAST does the same thing with different types of storage media and data services: enterprise flash, FC, SATA, spin-down, compression, dedupe, etc.
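To make the idea concrete, here's a minimal sketch of workload-driven tiering. This is purely illustrative -- it is not how FAST itself is implemented -- and the extent names, thresholds, and three-tier layout are assumptions for the example: hot extents get promoted to flash, cold ones demoted to SATA, and everything else lands on FC.

```python
# Illustrative tiering-policy sketch only -- NOT the actual FAST
# implementation. Assumes we track per-extent I/O activity (IOPS)
# and place each extent on one of three hypothetical tiers.

def retier(extents, hot_threshold=1000, cold_threshold=10):
    """Given {extent_id: observed_iops}, return {extent_id: tier}.

    Hot extents go to flash, cold extents to SATA, and the middle
    ground to FC -- a toy stand-in for the continuous, automated
    movement a real tiering engine performs.
    """
    placement = {}
    for extent, iops in extents.items():
        if iops >= hot_threshold:
            placement[extent] = "flash"
        elif iops <= cold_threshold:
            placement[extent] = "sata"
        else:
            placement[extent] = "fc"
    return placement

if __name__ == "__main__":
    # Hypothetical activity sample for three extents.
    activity = {"lun0:e1": 5000, "lun0:e2": 250, "lun1:e1": 3}
    print(retier(activity))
```

The point of the sketch is the shape of the decision, not the numbers: the policy reacts to observed workload rather than a one-time, manual placement.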
The second supporting concept I wanted to introduce was distributed storage federation. More than simple stretch clustering, the idea is to create an enabling abstraction that allows us to access and use information regardless of where the target and source might be geographically.
Put differently, we like the idea of moving virtual servers around here and there -- but we'll need to be able to do the same thing with information.
For an initial discussion of distributed storage federation concepts, please see here.
And, with that foundation, I'd like to take the next step, and introduce the "information utility" concept -- basically, how we apply private cloud concepts to the information domain.
From Private Clouds To Information Utility
About a year ago, I started writing about private clouds. People were extremely skeptical at the time; many still are. But you'd be surprised how many people in IT use the phrase "private cloud" to describe what they'd like to build going forward.

Of course, this was all helped along by the great technology from VMware, the VCE Coalition, Vblock, Acadia and all that came with it. My view? A year after we started, all the pieces are coming together nicely, and many people are moving in this direction.
Most of the "cloud" discussion -- up to now -- has been about compute, and the applications that use compute.
I shorthand the cloud discussion into three attributes:
-- built differently than traditional IT (dynamic pools of resources, flexibly consumed)
-- operated differently than traditional IT (low-touch and zero-touch operational models)
-- consumed differently than traditional IT (convenient consumption, pay for what you use)
If it doesn't pass these three tests, I have a hard time calling anything a "cloud". And, thankfully, most people are coming around to the conclusion that it's entirely possible (and may be attractive) to build a cloud that's owned and operated by an IT group.
Private clouds go further for the enterprise IT crowd -- they are completely under the control of the IT group, they provide an attractive migration path (through virtualization), and they provide choice between using internal resources or any number of compatible service providers.
All well and good. If you're still skeptical, fine, but you'd be surprised how many times the phrase "private cloud" comes up unprompted in IT planning sessions.
But what about information?
Good question ...
Information Is Different
Business people -- and IT people -- feel much more strongly about information than they do compute, applications or connectivity.
Information is really what all this IT stuff is all about, if you think about it for a moment. Everything else is a vessel for information: storage, compute, application, network, device, etc. Sometimes, it's useful to think of information like money -- it has value, it's bad news when it falls into the wrong hands, etc.
Getting back to clouds, you'll notice that many of the popular use cases for cloud models involve information that isn't especially sensitive or critical.
Or, as I look at it, cloud-washing will only get you so far when talking about information.
Targeting The Information Utility
First, as I look across the IT spectrum, I want to target the "everything else" kinds of information, and specifically exclude the red-hot information that needs lots of IOPS and/or bandwidth: those mission-critical, transaction-oriented databases, for example.
If 3% of the information in any organization is the red-hot stuff, I want to think about this information utility in terms of the 97% that is everything else. And, BTW, based on the numbers I've seen, that 3% is really more like 1% or less ...
That's where the big inefficiencies are. That's where the operational model isn't adding value. This is the land where good enough is good enough. And this is the part of the storage landscape that I believe is ripe for transformation in 2010.
Why "Information Utility"?
I like this term to describe the idealized state for a few practical reasons.
First, people trust that most utilities work quite reliably: phones, power, heating, transportation, etc. Clouds don't quite have that reputation yet.
Second, users of storage tend to think in terms of "their information" rather than the storage-related technologies that make it all possible. And I believe selling big ideas always involves selling them to the business, and not just to IT.
Third, cloud skepticism has set in for certain audiences (that's to be expected), and we don't have to wait many years to build a practical "information utility" inside of IT -- the pieces are in place, and I'm arguing it will be a very attractive proposition during 2010 -- especially for the storage teams.
Fourth, whereas "cloud" makes a great term for dynamic use of resources (compute, memory, network), let's face it -- most storage use cases aren't temporary! People consume storage, and rarely want to give it back. Ever. Unlike other resources, it's persistent and stateful.
But, to achieve this goal, we're going to borrow heavily from cloud concepts.
We're going to want to build our information utility differently. We're going to want a dynamic pool of storage-related resources that are very efficient, yet automatically adapt to changing requirements.
We're going to want to run our information utility differently, and engineer "touch" out of the administrative processes. The ultimate goal of any cloud (or any utility) is zero-touch: things just run, and all you have to do is keep an eye on green/yellow/red indicators.
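That "keep an eye on green/yellow/red" idea can be sketched in a few lines. This is a hypothetical illustration, not any vendor's monitoring product; the pool names and thresholds are assumptions made up for the example -- the point is collapsing raw utility metrics into the traffic-light indicators a zero-touch operational model leaves behind.

```python
# Hypothetical zero-touch health rollup: map raw utilization figures
# (0.0 - 1.0) onto the green/yellow/red indicators an operator watches.
# Thresholds are illustrative assumptions, not product defaults.

def indicator(utilization, warn=0.70, critical=0.90):
    """Collapse a 0..1 utilization figure into a traffic-light status."""
    if utilization >= critical:
        return "red"
    if utilization >= warn:
        return "yellow"
    return "green"

def dashboard(pools):
    """pools: {pool_name: utilization}; returns per-pool status."""
    return {name: indicator(u) for name, u in pools.items()}

if __name__ == "__main__":
    # Made-up pool names and utilization samples.
    print(dashboard({"tier1-pool": 0.95, "bulk-pool": 0.45}))
```

The design choice worth noting: the operator never touches the pools directly -- the automation does the work, and the human's job shrinks to watching the rollup.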
And we're going to want to think in terms of consuming our information utility differently: more consumption models, more convenient consumption for users, showing them what they're consuming (and maybe charging for it), and -- underneath the covers -- far more choice around technology, including whether it comes from an external service provider.
I want to use the next few posts to explain how the supporting technologies for an "information utility" are coming together very quickly, and how this model may be far more achievable in the short term than people might realize.
I hope you find this an interesting journey ...