One excellent post spoke of managing storage performance. Another spoke of large warehousing workloads meandering through the fabric, and the havoc they caused. Good reading.
It'd be easy enough to say "yes, darn it, we need better tools!" And plenty of storage admins would agree with me wholeheartedly.
My argument, however, is that tools in isolation can only get you so far. At some point, the model needs to change. And that's a more difficult proposition.
Managing Storage In Isolation Is Becoming Unproductive
In my simple way of thinking, managing IT infrastructure is less about managing the thing itself, and more about managing the outcome.
And to effectively manage the outcome, you need context. You can't solve performance problems without context. You can't solve availability problems without context. And you can't solve capacity problems without context.
But where should that context be established?
Round One -- Grow The Storage Management Model To Establish Context
This was the thinking behind EMC's ControlCenter during the last decade. Put agents on everything in your environment. Use those agents to help create context that answers the key questions for storage administrators.
Add that agent-heavy mission to the legacy mission of "granular element management" for multiple flavors of storage array, access models, storage fabrics, etc. -- and you can see the potential for ugly mission creep.
And, in the middle of the last decade, that's exactly what happened. The agents got heavier and heavier, and there were more of them, resulting in a strong preference for minimizing or eliminating storage-specific agents in host and application environments.
As storage devices got more complex and capable (and relevant standards such as SMI-S fell farther and farther behind), the mission of being a granular storage element manager became more difficult as well.
That being said, EMC's ControlCenter is still perhaps the most widely used storage management framework in the industry today. Despite its historical challenges, it still does the job for thousands of enterprises around the globe. It is far from perfect, though.
Going forward, the goal is to make the ControlCenter architecture less dependent on agents and far more pluggable, to have it work closely with underlying storage element managers, and to have it receive useful context from other enterprise management environments, such as EMC's Ionix.
We believe that many customers will still need a powerful storage-centric view of the world (especially in larger enterprises), but we need to start thinking of it more as a layer, and less as a specific product.
Round Two -- Focus On The Virtual Machine To Establish Context
The widespread popularity of VMware has created a new focal point for establishing context with regards to storage management: the virtual machine itself. Witness the headlong rush by EMC and other vendors to create all manner of powerful and elegant plug-ins for vCenter.
And, to a certain extent, this can be a better -- albeit not perfect -- model.
In this model, everything centers around the virtual machine and what it needs to get the job done. Storage performance, availability, capacity, etc. -- all done from a VM-centric point of view.
In these models, the storage admin is more of a generalist -- and less of a specialist. The storage admin is responsible for providing aggregate capacity, performance and availability to the VMware farm, but usually doesn't get bogged down in the details.
Unless there's a problem, and then the storage admin is suddenly a specialist :-)
Make no mistake, this VM-centric view will be very popular in small to mid-sized environments. But that still leaves us with larger environments -- not to mention non-VMware environments.
Round Three -- Focus On The Application Service To Establish Context
In enterprise IT, it's all about service delivery. Infrastructure exists to deliver services to users and applications, period. And if you're looking for storage context -- performance, availability, capacity -- that's the most useful (and most difficult) model to establish.
Start with the delivered service. Map back to supporting application, middleware and database components.
Correlate the supporting infrastructure, virtualized or otherwise, including storage. Now, go ahead and ask your storage-specific questions -- and you'll get useful answers that are hard to match other ways.
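To make the "map back from the service" idea concrete, here's a minimal sketch in Python. It is purely illustrative -- all component names (services, VMs, datastores, LUNs) are invented, and a real discovery tool like Ionix would build this dependency map automatically rather than by hand. The point is simply that once the map exists, storage-specific questions can be answered in service context.

```python
# Illustrative sketch only: a toy dependency map from a delivered service
# down to its supporting storage. All names here are hypothetical.
from collections import deque

# Each entry maps a component to the components it depends on.
DEPENDS_ON = {
    "order-entry-service": ["order-app", "order-db"],
    "order-app":           ["vm-app-01"],
    "order-db":            ["vm-db-01"],
    "vm-app-01":           ["datastore-a"],
    "vm-db-01":            ["datastore-b"],
    "datastore-a":         ["lun-100"],
    "datastore-b":         ["lun-200", "lun-201"],
}

def storage_behind(service):
    """Walk the dependency map and return the leaf (storage) components."""
    seen, leaves = set(), []
    queue = deque([service])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        children = DEPENDS_ON.get(node, [])
        if not children:          # no further dependencies: a storage leaf
            leaves.append(node)
        queue.extend(children)
    return sorted(leaves)

print(storage_behind("order-entry-service"))  # → ['lun-100', 'lun-200', 'lun-201']
```

Now a question like "which LUNs could affect the order-entry service?" has a direct answer -- something that's hard to get from a storage-only or VM-only view.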
This line of thinking is what lies behind EMC's Ionix suite: service delivery management through correlated application and infrastructure discovery.
But the balkanized state of affairs in most enterprise IT shops makes achieving this more of a political and organizational challenge than a technological one.
Unless you're a service provider, that is :-)
Storage Models Can Help As Well
Up to now, I've been sharing how the management model itself can potentially help alleviate some of the storage management challenges.
But the storage model itself is showing strong promise in being able to minimize this challenge.
At a granular technology level, there is compelling evidence that object-based storage models (think Centera and Atmos in the EMC portfolio) are about as close to "zero touch" in terms of management as is humanly possible. Of course, not all forms of information can be conveniently stored as objects today, but it's hard to ignore the evidence here.
For more traditional forms of storage, one powerful model that I've been exploring here (and that EMC is working on) is "virtual storage" -- a complete separation of logical from physical, in much the same way that virtual servers separate logical from physical.
Much in the way that a modern vSphere cluster can pool resources and automatically adjust to changing needs -- even over distances -- virtual storage should do the same. Here's a pile of resources. Here are the outcomes I want. Go do -- and do so with an unusual degree of efficiency, automation and resiliency.
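The "here's a pool, here are my outcomes" idea can be sketched in a few lines of Python. This is a toy, not a real provisioning engine: the tier names, capacities and IOPS figures are invented, and a real virtual storage layer would rebalance continuously rather than make a one-time placement. It only illustrates the shift from "pick a LUN" to "state an outcome."

```python
# Illustrative sketch only: outcome-based placement against a resource pool.
# All tier names and numbers are hypothetical.

POOL = [
    {"name": "tier-flash", "free_gb": 500,  "iops": 50000},
    {"name": "tier-fc",    "free_gb": 2000, "iops": 15000},
    {"name": "tier-sata",  "free_gb": 8000, "iops": 3000},
]

def place(capacity_gb, min_iops):
    """Return the first pool member that can meet the requested outcome."""
    for tier in POOL:
        if tier["free_gb"] >= capacity_gb and tier["iops"] >= min_iops:
            tier["free_gb"] -= capacity_gb   # reserve the capacity
            return tier["name"]
    return None  # the requested outcome can't be met from the current pool

# The caller states outcomes (1 TB, 10K IOPS), not devices or RAID groups.
print(place(1000, 10000))  # → 'tier-fc'
```

The design point is that the requester never names a device; the layer underneath is free to satisfy (or decline) the outcome however it sees fit.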
But we believe that there's more that can be done in terms of redefining the boundaries between storage and the people who use it.
Recently, I sketched out another direction EMC was working towards -- the "information utility" -- which takes the idea even further by (a) transparently incorporating storage-related services that many treat as standalone functions today -- backup, replication, archiving, compliance and other forms of information governance -- and (b) putting more responsibility for managing storage into the hands of end users.
The first concept isn't that controversial. The second one will be undoubtedly controversial, much in the same way that self-service computing is controversial for many enterprise IT shops today.
Putting It All Together
Coming full circle, there's no disagreement about the growing challenges associated with managing ever-increasing amounts of storage -- and the information it contains.
While it's easy to say that tools need to be better (and they do), maybe we need to consider the management model as well.
And, trust me, the people at EMC spend a *lot* of time thinking about this topic.