IT infrastructure professionals, please take note: VMware has big ideas on how the storage ecosystem should evolve.
Like all good stories, it's going to take a while before all the characters and plot lines are fully developed, but it's a promising start. Better yet, there's a first round of products behind the strategy, with much more to come.
Every economic era has been powered by infrastructure.
In the United States, the Hoover Dam created a booming economy where there was none. The US Interstate project enabled commerce and tourism. And the Transcontinental Railroad connected two coasts into a single country.
In the modern era, IT is the infrastructure at hand: from the internet, to data centers, to mobile devices -- it's the infrastructure that powers our modern economy. We in the IT infrastructure business should feel some kinship with those who came before us.
Thanks to virtualization, how we think about IT infrastructure is changing: from hardware-defined to software-defined.
The change has now begun in both the networking and storage disciplines. Once it's fully underway, there will be no going back there either.
While nothing is ever guaranteed, VMware is in a privileged position to lead the transformation of IT infrastructure: thanks to superior technology, wide customer adoption and a thriving partner ecosystem. Storage -- for a variety of structural reasons -- will be perhaps the most difficult discipline to re-envision as being entirely software-defined.
But the long journey ahead is starting to become clearer, and the pieces are starting to fall into place.
And with that preamble …
The VMware Perspective On Software-Defined Storage
How we think about compute has been completely transformed by virtualization. The core idea behind the software-defined data center is simple: what if we could do the same for networking and storage?
So our first approximation of a VMware view on software-defined storage is simple: do for storage what has already been done for compute: the same operational model, the same application-centric model, and so on.
But there's also a strong notion of operational convergence: storage is no longer a silo; it's an integral part of the infrastructure experience: provisioning, monitoring, reporting, etc.
At an application level, we've got the well-understood needs of traditional enterprise applications: performance, availability, security, efficiency, etc.
But at the same time, there's a growing class of newer applications, built on newer frameworks and wanting a different model entirely for storage.
These newer applications scale out, not up. They expect to have large memory pools, whether DRAM or flash. And they're perfectly comfortable providing availability services at the application level rather than depending on the infrastructure.
On the storage side, the various approaches are rapidly proliferating. While there are plenty of traditional SAN and NAS arrays, there's a new crop of all-flash arrays to consider. Server-side flash is becoming a major force, as the delivered performance is quite addictive. Object and BLOB stores are finding their way into the enterprise, whether they be HDFS stores or object interfaces into things like S3.
And there's the slow emergence of virtual storage arrays: storage array software stacks that are designed to run on commodity hardware, presumably virtualized. In this world, no simple model will suffice.
It's not uncommon to hear from IT professionals that storage is still problematic. While there's been a lot of work done to expose storage array capabilities to the VM admin, it's still not an ideal world -- while the worlds are converging, in many shops there are still two distinct domains with two distinct workflows.
Today, matching the right storage with the right application isn't the easiest thing to do. Over-provisioning performance, protection and capacity is the norm: just to be safe.
And there's plenty of room for increased efficiency on the operational side: making storage an integral part of the virtual administrator's experience.
The hypervisor is closest to application requirements, and can potentially use that knowledge to help create better approaches.
All infrastructure resources -- compute, memory, network and storage -- are accessed through the hypervisor. As such, it can create a more complete picture of supporting infrastructure than any one infrastructure component can possibly achieve.
And, finally, the hypervisor is inherently hardware agnostic -- it works with almost everything, and isn't tightly bound to one storage implementation or another.
The VMware Approach To Software-Defined Storage
Pull all of these pieces together, and the picture becomes clearer.
In the VMware model, the hypervisor uses its knowledge of application requirements to push policy requirements down on all infrastructure, including storage: capacity, performance, protection and other attributes.
At provisioning time, storage workloads are placed based on exposed capabilities that can satisfy the requirements. As application requirements and infrastructure capabilities change over time, storage workloads are dynamically relocated to preserve the correct balance between requirement and resource.
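To make the placement step concrete, here's a minimal sketch of policy-driven placement: application requirements are expressed as a policy, and each storage pool advertises its capabilities; the control plane picks a pool whose capabilities satisfy the policy. All of the names here (`StoragePolicy`, `Datastore`, `place`) are hypothetical illustrations of the idea, not VMware APIs.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StoragePolicy:
    # Per-application requirements, pushed down by the control plane
    min_iops: int
    min_capacity_gb: int
    replication: bool

@dataclass
class Datastore:
    # Capabilities a storage pool exposes upward
    name: str
    iops: int
    free_gb: int
    supports_replication: bool

def place(policy: StoragePolicy, pools: List[Datastore]) -> Optional[Datastore]:
    """Return the first pool whose advertised capabilities satisfy the policy."""
    for ds in pools:
        if (ds.iops >= policy.min_iops
                and ds.free_gb >= policy.min_capacity_gb
                and (ds.supports_replication or not policy.replication)):
            return ds
    return None  # no compliant placement; surface this to the administrator

pools = [
    Datastore("sata-pool", iops=5_000, free_gb=2_000, supports_replication=False),
    Datastore("flash-pool", iops=80_000, free_gb=500, supports_replication=True),
]
target = place(StoragePolicy(min_iops=20_000, min_capacity_gb=100, replication=True), pools)
```

The same matching logic can run again whenever requirements or capabilities change, which is what makes dynamic relocation possible.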
Data services are then invoked (replication, encryption, etc.) as needed, implemented either in the storage array, or potentially as a layered service. In this model, these data services are a property of the application, and not necessarily the storage.
The loop is closed when delivered service levels are monitored against targets. Gaps can drive action: the addition of more resources or services, relocation of a storage workload, an alert to the administrator, and so on.
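The closed loop above can be sketched as a simple compliance check: compare the delivered service level against the policy target, and map the size of the gap to an action. The function name, thresholds, and action strings are illustrative assumptions, not part of any VMware product.

```python
def remediate(delivered_iops: int, target_iops: int, tolerance: float = 0.9) -> str:
    """Compare delivered service level against the policy target and pick an action."""
    if delivered_iops >= target_iops:
        return "compliant"
    if delivered_iops >= target_iops * tolerance:
        return "alert-admin"        # small gap: notify the administrator, keep watching
    return "relocate-workload"      # large gap: move the workload to a more capable pool
```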
Finally, VMware believes that converged infrastructure pools -- basically commodity servers whose functions are defined by software -- are becoming fully capable of supporting certain storage workloads, and will complement the traditional storage arrays that we all use today.
To better describe the model, VMware is using specific terms as convenient shorthands:
- a policy-driven control plane that understands application requirements, pushes them down on the infrastructure, and monitors the results
- a set of application-centric data services that respect application and VM boundaries, and
- a virtual data plane that abstracts a variety of storage targets: from familiar arrays to newer software-based virtual storage arrays.
What's Being Announced At VMworld
All of this makes sense from a vision perspective, but -- as part of VMworld -- we also have some technology examples to look at and evaluate.
First, there's VSAN -- more formally "Virtual SAN" -- and that's worth an entire post by itself.
Second, virtual volumes (VVOLs) are slowly becoming more real -- at VMworld, there will be a tech preview as well as partner demos. More on VVOLs and why they're important in another post.
Third, there's a new flash read cache capability that will be part of vSphere 5.5 -- and it's a particularly useful capability, as it's neatly integrated into the hypervisor, vMotion, etc.
Fourth, VMware is formally announcing Virsto, a neat volume manager-type product for external storage that I've discussed before here.
Much, much more work is required to make this vision a reality -- but it's certainly a start. And there's plenty of cool stuff behind this initial vision that I'm itching to share, but that's going to have to wait :)
If you've ever heard Paul Maritz speak, you know he shares these incredibly prescient sound bites that become real before long. Case in point: many years ago, he described virtualization as one of those technologies -- like the microprocessor -- that was "infinitely extensible".
In some regards, VMware's vision for the software-defined data center -- and software-defined storage -- is a logical extension of existing virtualization concepts to new domains, integrating them in ways not previously possible.
VMware has already shown this for compute resources: thanks to virtualization, they can be abstracted, pooled and automated. In some sense, storage is potentially similar: using virtualization principles, it can be abstracted, pooled and automated.
Virtualization changed how we think about compute -- the world will never be the same as a result.
And I'm betting the same thing has begun in the storage world.