If we’re going to dig into software-defined storage, we’re going to need a conceptual model — just so we can keep the discussion organized.
The particular model I’ll be using for this discussion is the one VMware currently uses.
Vendor bias aside, I've personally found it the most useful model out there: it not only explains software-defined storage, it also exposes important differences compared to the way things are done today.
The model itself is not bound to any specific technology — but you will find many aspects already implemented in VMware’s current product set.
And an open invitation: if someone has a better model, please share it!
I think of a conceptual model as a precursor to an architectural model. The conceptual model details the functions and how they’d ideally interact; the architectural model instantiates them into a specific set of technologies and use cases.
As with any model, you’ll certainly find familiar functions and concepts — but here they are grouped and abstracted in different ways than you might expect.
If you're new to this series (and are willing to do some prep) you might want to go back and read the earlier posts first.
Models Do Matter
Enterprise architects know that how you group and abstract functionality is critically important. It's not just drawing pretty pictures on a whiteboard; it's serious stuff, and not to be decided lightly.
Behind every IT architectural model, there's always an organizational one implied. Change the architecture, and you'll end up changing how the organization uses it.
A good architecture often fails in the hands of a poor organizational model; great organizational models can be greatly hampered by poor architecture.
I think this is an extremely relevant point, as so many models I see are explicit accommodations to the in-place way of doing things. That’s dangerous territory in my book: best to envision a better model, and then be clear about the organizational implications. Work back from that point as needed; don’t start with a long list of compromises.
So let's begin.
Application Centricity — And Policy
Application centricity is not just a buzzword; it's largely why IT exists in the first place: to deliver the applications that people want to use. For a discussion of software-defined anything, it's the logical starting point.
In any SDDC (or SDS) discussion, we start with the needs of applications (or logically related groups of applications, if you prefer) and work downwards. At a high level, I think of an application as composed of logic, data, required resources, and policy.
It is helpful to distinguish more precisely between application logic and the "container" of information, resources and services it uses. While there are cases where the application logic itself might take direct control of the resources it needs, I would argue this approach is frequently undesirable: brittle, inflexible, inefficient and very challenging to code.
One of the key ideas in SDDC is that policies are instead bound to application containers (e.g. virtual machines). Those policies are then used to drive the behavior of everything else: resources needed, services required, security — you name it.
Think of a bar code affixed to a package.
The bar code describes its contents, and how it must be handled. And you don’t need to open up the package every time a decision needs to be made.
It’s a simple and elegant construct.
Change a policy, and you change a behavior of an application’s container. Establish compliant policies, and verifying compliance becomes that much easier. Application developers are not disenfranchised — they can specify external requirements (e.g. redundancy, performance, etc.) without needing to define the implementation of those policies.
Going back to previous notions of composability, it should be clear that a policy drives the composition of supporting services. Think of a policy statement as a build manifest, or a blueprint: exactly what's needed for a specific application, at a specific point in time.
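To make the "bar code" idea concrete, here's a minimal sketch in Python. Everything in it is hypothetical (the StoragePolicy and AppContainer names, the attribute values); it's meant to show the shape of a policy bound to a container, not any vendor's actual API.

```python
from dataclasses import dataclass, field

# A hypothetical policy "bar code" attached to an application container.
# None of these names are a real VMware API; they only show the shape.
@dataclass
class StoragePolicy:
    redundancy: str = "raid-1"          # externally stated requirement
    performance_tier: str = "standard"
    encryption: bool = False
    retention_days: int = 30

@dataclass
class AppContainer:
    name: str
    policy: StoragePolicy = field(default_factory=StoragePolicy)

# The developer states requirements; the platform owns the implementation.
payroll = AppContainer(
    name="payroll-db",
    policy=StoragePolicy(redundancy="raid-6", performance_tier="high",
                         encryption=True, retention_days=2555),
)

# Change the policy, and you change the container's behavior, without
# ever "opening the package" (touching application logic).
payroll.policy.performance_tier = "highest-possible"
```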
Storage and Policy
If we think about storage, there’s a very long list of attributes that could potentially be part of an application container’s policy: capacity, performance, availability, protection, encryption, retention, compliance, cost optimization, geographical location — and that’s only a partial list!
In our software-defined storage world, we'd ideally be able to dynamically compose a set of storage services needed at a particular point in time, without regard for what specific hardware is being used, how it's currently configured, etc.
The notions of policies and composability are very extensible.
New policies, reflecting new requirements and new compositions, can be slightly modified versions of existing ones. Going further, policies can be conditional, as in: if heavy-demand and end-of-month, then allocate-more-performance. Or: if requested-service-not-available, try next-best-approach.
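Here's one way such a conditional policy might read if expressed in code. This is a tiny Python sketch with hypothetical predicates and actions, not a real policy engine; in practice the inputs would come from monitoring and the outputs from a service catalog.

```python
# Hypothetical conditional-policy evaluation; inputs and action names
# are illustrative only.
def evaluate(demand: str, day_of_month: int, service_available: bool) -> str:
    if demand == "heavy" and day_of_month >= 28:
        return "allocate-more-performance"
    if not service_available:
        return "try-next-best-approach"
    return "keep-current-composition"

print(evaluate(demand="heavy", day_of_month=30, service_available=True))
# allocate-more-performance
```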
I also believe this approach to be reasonably future-proof.
New technologies, whatever they may be, are either new services that can be composed via policy, or better implementations of existing ones.
Yesterday, my highest-performance-possible policy was implemented via stripe-on-15K-disks. Today, it might be cache-using-flash, and tomorrow it might be use-all-flash.
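That separation of intent from implementation is easy to sketch: the policy name stays stable while the mapping underneath changes. Again, purely illustrative Python; the table entries are the examples from above.

```python
# The intent ("highest-performance-possible") stays stable while its
# implementation is swapped underneath. Entries are illustrative.
POLICY_IMPLEMENTATIONS = {
    "highest-performance-possible": [
        "stripe-on-15K-disks",   # yesterday
        "cache-using-flash",     # today
        "use-all-flash",         # tomorrow, perhaps
    ],
}

def current_implementation(intent: str) -> str:
    # Newest registered implementation wins; the policy itself never changes.
    return POLICY_IMPLEMENTATIONS[intent][-1]

print(current_implementation("highest-performance-possible"))  # use-all-flash
```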
Granularity Matters, Perspective Matters
I believe the ability to dynamically apply specific policies to specific application boundaries is an essential defining characteristic of software-defined storage. Individual application requirements change frequently, and simply provisioning vast buckets of pre-defined capabilities certainly isn’t efficient, effective or responsive.
But that raises the question — where in the stack can one obtain the required clear view of application boundaries — as well as the resources and services they touch?
VMware believes the hypervisor (vSphere in this case) is in an architecturally privileged position to interpret and enforce per-application policies.
It sees the application itself (actually, the application container), and it can dynamically arbitrate all the resources and services an application may be using.
I can't disagree with this perspective; trying to do it "somewhere else" in the stack makes your head hurt. As a specific example: in the storage array world, it is always devilishly tough to discover application boundaries and interpret policies. "Something else" would have to do it on behalf of the storage array: an administrator, management software, etc.
I can only assume the network world struggles with the same challenge.
Control Planes And Policies
The next stop on our software-defined storage tour is the layer responsible for interpreting policies, and then translating them into the required composed resources and services.
You might call this layer "management" or something similarly nondescript; many of us now call it the control plane, or, more accurately, control planes, as there are always multiple points of control and monitoring in any large environment.
Here is where the balancing act occurs between supply and demand. Here is where issues and problems are resolved, ideally in a manner transparent to both application and user. Here is where we continually automate — and re-automate — to the greatest degree possible.
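One way to picture that balancing act is a reconcile loop: observe the current state, compare it against what policy demands, and recompose as needed. The sketch below is a toy, with observe() and provision() as hypothetical stand-ins for whatever a real control plane exposes.

```python
# A toy reconcile loop: compare the state a policy demands against the
# state actually observed, and recompose services to close the gap.
# observe() and provision() are hypothetical stand-ins, not real APIs.
def observe(container: str) -> dict:
    return {"performance_tier": "standard"}         # stub: current state

def provision(container: str, desired: dict) -> None:
    print(f"recomposing {container} -> {desired}")  # stub: take action

def reconcile(container: str, desired: dict) -> None:
    actual = observe(container)
    if actual != desired:
        # resolve the gap, ideally transparently to application and user
        provision(container, desired)

# One pass; a real control plane would run this continually, per container.
reconcile("payroll-db", {"performance_tier": "high"})
```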
There is nothing wrong with the notion of a "single pane of glass"; it's just that everyone wants their own! Regardless of whether we're considering a traditional IT organizational model or a newer IT-as-a-service model, you can clearly delineate multiple points where storage has to appear in context with other relevant items.
One Thing — Many Perspectives
Let’s consider storage as an example of this.
If your operational model defines a dedicated storage administrator, they’re certainly going to want a storage-centric view of their world: what’s on the floor, who’s using it, how is it configured, how is it performing, etc.
If your model defines an infrastructure-as-a-service delivery manager, that person is going to want to see storage in the context of compute and network — all delivered as a consumable infrastructure service.
If you have application administrators and database administrators, the story will be familiar — they too will want to see storage in the context of their application and their database.
If you have dedicated availability administrators (data protection, business continuity, etc.) they will want to see storage in the context of applications and the services that protect them.
If you provide chargeback or showback, the portal will inevitably include storage services provisioned and consumed. Finance will want to understand the costs of storage services delivered. Compliance will want to understand how compliance policies are being enforced. Capacity planning will want forecasting models and constraints.
And that’s just a partial list.
Here is my point: it is unrealistic to think in terms of a single point of control when it comes to software-defined storage, or any dynamic, composable service for that matter. There will be many points of control, bounded by operational constraints: some passive observers, others able to change policy and thus change the composition of services and resources consumed.
The model for who-can-do-what must not be static either; it should quickly evolve and be as dynamic as applications themselves. The natural tendency will be to empower more organizational functions as needs evolve and maturity increases. When it comes to division of responsibilities, there are no right answers, only temporary solutions.
The implication? No more storage kingdoms.
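For illustration only: a who-can-do-what model like the one described above might start as nothing more than a capability table, with some roles as passive observers and others empowered to change policy. The roles and actions here are hypothetical, and would be expected to change as the organization matures.

```python
# Hypothetical who-can-do-what table: some roles observe, others can
# change policy (and thus recompose services and resources consumed).
CAPABILITIES = {
    "storage-admin": {"view-storage", "change-policy"},
    "iaas-manager":  {"view-storage", "view-compute", "view-network"},
    "dba":           {"view-storage"},
    "finance":       {"view-chargeback"},
}

def can(role: str, action: str) -> bool:
    return action in CAPABILITIES.get(role, set())

assert can("storage-admin", "change-policy")
assert not can("dba", "change-policy")   # an observer, at least for now
```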
Next up: data services and the data plane.