I think we're getting ready to take the next step in the discussion around private clouds -- from an initial set of generic concepts, to a more precise articulation of an underlying architecture.
At a surface level, private clouds are basically fully virtualized environments that allow IT organizations to use a dynamic combination of internal and external resources.
From a concept barely discussed just a few months ago, it seems to have risen as a dominant theme in our currently cloud-crazy industry discussion. I've lost count of the vendors and analysts trying either to co-opt the term or distance themselves from it!
The high-level definition above is enough to get the conversation moving in the right direction, but there's a far deeper level of architectural abstraction that eventually needs to be discussed.
And, despite my limited abilities, I'm going to bravely venture out into this next phase of the industry discussion.
Let me know how I do?
Of Clouds And Architectures
Not surprisingly, every IT vendor is wrapping the word "cloud" into a description of what they're doing today.
I usually try to dramatically oversimplify the discussion:
- Clouds are built differently than traditional IT -- think scalable pools of resources, abstracted services, etc. I want to dive deeply into this first topic with this post.
- Clouds are operated differently than traditional IT -- think low-touch or zero-touch operational models.
- Clouds are consumed differently than traditional IT -- think about pricing that's convenient for the consumer, rather than the provider.
Yes, it's inevitable that IT vendors follow the buzzword trends, but if we look beyond the surface, there's an underlying architectural structure around the first point that needs to be discussed in some detail before we can move collectively forward, especially with regards to private clouds.
By "architecture", I"m referring to fairly well delineated functional abstraction and service layers, generally built one on top of another. To be clear, I'm not suggesting a precise set of technologies or interfaces, more of a conceptual model.
Yes, there are various standards bodies and consortiums discussing different aspects of what a generic cloud architecture might look like, but -- frankly -- I find most of them somewhat unsatisfying in one regard or another. I'm taking it from the other direction: what do enterprises need to move forward?
So, I invite you to fasten your seatbelts, return your tray tables to their upright position, and join me on the first round of envisioning an end-to-end private cloud architecture.
Let's Get Started, Shall We?
Before we start describing horizontal architectural layers, I need to point out that it's very useful to think in terms of a chronological evolution of various abstractions and services.
We need to always keep in mind the legacy abstractions and services people largely use today.
Additionally, we need to think in terms of transitional abstractions and services that can take portions of the legacy, and move them forward to a cloud-like architecture.
And, finally, we need to think in terms of legacy-free abstractions and services that presume the existence of a cloud.
That notion of three generic classes of abstraction and services (legacy, transitional and legacy-free) helps us think more clearly about where we're coming from, and -- more importantly -- where we're going.
For every layer in the model, assume that any abstraction and/or service could be provided either by an IT organization or by an outside service provider. Where the abstraction or service actually comes from should be completely irrelevant in our architecture. Put differently, IT organizations should be free to compose entire end-to-end IT stacks using a dynamic combination of internally and externally provided services.
This sort of framing also neatly contains all the different "*aaS" discussions -- IaaS, PaaS, SaaS et al. -- by simply stating "external service option" at a given level of the architecture.
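To make that composability a bit more concrete, here's a minimal sketch -- in Python, with layer and provider names invented purely for illustration -- of an end-to-end stack where each layer can be sourced internally or externally:

```python
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    INTERNAL = "internal"   # provided by the IT organization
    EXTERNAL = "external"   # provided by an outside service provider

@dataclass
class LayerService:
    layer: str      # e.g. "interconnection", "information", "compute"
    provider: str   # who actually delivers it
    source: Source  # ideally irrelevant to whoever consumes the layer

# One possible end-to-end stack, mixing internal and external sourcing
stack = [
    LayerService("interconnection", "corporate network",     Source.INTERNAL),
    LayerService("information",     "cloud object store",    Source.EXTERNAL),
    LayerService("compute",         "internal vSphere pool", Source.INTERNAL),
    LayerService("application",     "SaaS CRM",              Source.EXTERNAL),
]

for svc in stack:
    print(f"{svc.layer:15} -> {svc.provider} ({svc.source.value})")
```

The point of the sketch is simply that nothing consuming a given layer needs to know which Source was chosen.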
Another thought before we get started.
I've found it useful to discuss these concepts in terms of an "enterprise boundary", specifically the combination of internal and external resources that the enterprise cares about.
And, correspondingly, within service providers, there's a "multi-tenancy boundary" that describes the pool of shared resources provided to clients.
These boundaries turn out to be useful in describing who controls what -- a key issue when contemplating any sort of cloud.
The Foundation -- The Physical Stuff
At its most fundamental layer, computing requires CPU, memory and persistent storage. It was true in 1950, and it's still true today. More importantly, these are physical entities -- there's no getting around this!
Connecting these components is a network, fabric or other sort of interconnect. Whether we're contemplating the small scale (e.g. a single server) or the macro scale (e.g. a global compute cloud), the network/fabric/interconnect is the glue that makes computing at scale possible. As we'll see, it also emerges as the ideal control point for any enterprise IT architecture.
The First Layer -- Interconnection
At least to my way of thinking, the first logical boundary for cloud abstractions and services is the network itself. Servers have to find and communicate with other servers, or storage, or end user devices.
Rather than jump in deep at this point and discuss legacy, transitional and legacy-free abstractions and services at this layer, I'll leave that for another time.
For the moment, let's just assume that various entities can discover each other, and communicate in a reliable, secure and efficient fashion. Indeed, these interconnect abstractions and services are perhaps the most well understood of all.
The Second Layer -- Information
What flows over any network? Information.
So, maybe it's my inherent vendor bias, but it seems that the next logical layer would be a rich set of information abstractions and services.
As far as abstractions go, it's pretty clear where our legacy is coming from: blocks and files. That's the world we live in today.
As we contemplate private cloud architectures, perhaps the most obvious transitional abstractions involve making our legacy models location-independent -- at least, as much as possible given the speed of light. As we transition to private clouds, we need to make LUNs and file systems seamlessly appear where they need to be, using caching and smart algorithms to overcome distance.
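To illustrate the idea (and only the idea -- this isn't any particular product's approach), here's a toy read-through cache in Python: the LUN appears local, and the distance penalty is paid only on a cache miss:

```python
# Toy read-through cache: blocks appear local; distance is paid once per miss.
class CachedLUN:
    def __init__(self, remote_read):
        self.remote_read = remote_read  # callable that fetches a block remotely
        self.cache = {}                 # block number -> block data

    def read(self, block_no):
        if block_no not in self.cache:
            # cache miss: fetch across the (possibly long) wire, then keep it
            self.cache[block_no] = self.remote_read(block_no)
        return self.cache[block_no]     # subsequent reads are local

# Usage (fetch_from_remote_array is a hypothetical remote accessor):
# lun = CachedLUN(remote_read=fetch_from_remote_array)
```

Real implementations obviously worry about writes, coherency and eviction as well -- but the basic "appear where needed" trick is just this.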
And, finally, when we contemplate legacy-free information abstractions, the answer keeps coming up the same: objects wrapped in rich metadata.
The fascinating discussion to me is the rich set of services that can be provided to various information abstractions. The richer the metadata, the richer the set of services that can be provided.
At one end of the spectrum, throw up a pile of LUNs, and we can see the usual information protection services: backup, remote replication, CDP, etc. Even without metadata, there's the ability to dynamically change service levels within a LUN to provide both performance and cost advantages -- as but one example.
Or, consider file systems. More metadata, more services. We can now consider archiving of specific files, or securing them more granularly, or providing additional compliance measures.
Allow us to peek inside the files to gather even more metadata, and we can provide even richer and more finely-grained services.
Within the construct of "legacy, transitional, legacy-free", I would suggest that files make for an interesting transitional model -- almost an object, but not quite.
Which leaves us with the legacy-free information abstraction -- objects wrapped in rich metadata. Note: the visuals offered up by Dave Graham on this topic are not to be missed!
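Here's a minimal sketch of that "richer metadata, richer services" progression -- the metadata field names and the mapping to services are invented for illustration, not drawn from any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class InformationObject:
    content: bytes
    metadata: dict = field(default_factory=dict)  # richer metadata -> richer services

def applicable_services(obj):
    """Derive a service policy from whatever metadata is present."""
    services = ["backup", "replication"]  # baseline: available even with no metadata
    if obj.metadata.get("retention_years"):
        services.append("archiving")
    if obj.metadata.get("classification") == "regulated":
        services += ["fine-grained security", "compliance reporting"]
    return services

doc = InformationObject(b"...", {"classification": "regulated", "retention_years": 7})
print(applicable_services(doc))
# ['backup', 'replication', 'archiving', 'fine-grained security', 'compliance reporting']
```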
Now, if we look backwards at the underlying abstractions -- CPU, memory, storage, network -- there's no real reason why an information abstraction (e.g. storage) can't be arbitrarily imposed on the underlying interconnect and hardware abstraction.
Or, put differently, there's no real reason why a storage block array, file server, or object store couldn't be packaged up in a virtual container, and invoked independently of the underlying abstraction.
The Third Layer -- Compute
One could argue whether the compute layer or the information layer comes first, but it's rather academic. Unless you can get to your information, there's only so much you can do with compute -- hence my incentive to position compute "above" the information layer.
A traditional operating system is a legacy compute abstraction. You don't access the underlying hardware directly. A commodity hypervisor is a more advanced compute abstraction.
vSphere, with its cooperating hypervisors, is an even more advanced compute abstraction. And one could go even further, and imagine giant geographically-dispersed pools of compute abstractions, maybe Java virtual machines.
As far as compute services go, we've already seen a rich set evolve with virtualization: load balancing, fault tolerance, and more.
And, when we get to legacy-free compute services, we'll probably see things like dynamic invocation of compute services based on the situation at hand -- for example, invoking small Java virtual machines on handheld devices, communicating with network-based compute images invoked in such a way to both maximize the user experience, as well as optimize the use of power, bandwidth, etc.
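As a toy illustration of that kind of situational invocation (the device categories and thresholds here are entirely made up), the placement decision might look something like this:

```python
# Hypothetical placement logic: decide where a compute service should run
# based on the situation at hand -- device, battery, bandwidth.
def place_compute(device_class, battery_pct, bandwidth_mbps):
    if device_class == "handheld":
        if battery_pct < 20 and bandwidth_mbps >= 1.0:
            return "network-image"  # offload to a network-based compute image
        return "local-jvm"          # run a small JVM on the device itself
    return "datacenter"

print(place_compute("handheld", battery_pct=15, bandwidth_mbps=5.0))  # network-image
```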
The Fourth Layer -- Application
There will be those who think I'm throwing way too much into this layer, and argue that I should have decomposed it into data sources, middleware, etc.
Fine, feel free to do so. But, given the end-to-end nature of this discussion, please allow me a few gross oversimplifications.
It's pretty clear where our legacy is coming from: either monolithic applications, or slightly more advanced two-tier and three-tier architectures.
The transitional model seems pretty clear to me as well: exposing service interfaces between these elements, using RESTful protocols if possible.
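As one generic illustration -- using nothing but the Python standard library, with a made-up customer-lookup function standing in for the legacy logic -- here's what putting a RESTful face on a legacy element might look like:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def legacy_lookup(customer_id):
    # stand-in for a call into a monolithic legacy application
    return {"id": customer_id, "status": "active"}

class CustomerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /customers/42 returns a JSON representation of the resource
        customer_id = self.path.rstrip("/").split("/")[-1]
        body = json.dumps(legacy_lookup(customer_id)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CustomerHandler).serve_forever()
```

Once every element speaks this way, the pieces behind the interface can be rearranged -- or re-sourced -- without the consumers noticing.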
However, many of us believe there's an intriguing legacy-free model: dynamically composing external services -- wherever they may be sourced from -- and packaging them up with customized logic into what VMware describes as a "vApp": a composite application object, composed of fully virtualized entities, that delivers an end-to-end business service.
Since the abstraction level is high enough, it's easy to imagine the virtual machines relocating where they need to be to provide the service, and with no presumption that the virtual service providers live within the enterprise.
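To be clear, the actual vApp format is VMware's to define; but as a purely hypothetical sketch, a composite application descriptor of this sort captures the idea:

```python
# Hypothetical composite-application descriptor -- illustrating the composition
# idea only; this is NOT VMware's vApp format.
composite_app = {
    "name": "order-management",
    "components": [
        {"role": "web-tier",  "kind": "virtual-machine",  "source": "internal"},
        {"role": "app-logic", "kind": "virtual-machine",  "source": "internal"},
        {"role": "payments",  "kind": "external-service", "source": "external"},
    ],
    # policy travels with the composite, wherever its pieces end up running
    "policy": {"placement": "nearest-to-users", "availability": "99.9%"},
}
```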
The Fifth Layer -- User Experience
If we harken back to the earlier discussion of "enterprise boundaries", the user experience is best thought of with one foot inside this boundary, and one foot outside.
Indeed, how many enterprise user experiences are already provided on a device or environment not entirely controlled by IT?
Our legacy here is clear -- it's the classic "enterprise desktop", usually running Windows, that we all live with every day.
One interesting transitional model is the virtual desktop -- enterprise user experiences that follow us around wherever we go, independent of the device.
But it's pretty clear that, before long, we'll see "enterprise user portals" -- user experiences, controlled by IT, that comprise a shifting mix of legacy desktop applications and newer SaaS-style applications, made available on both large screens and smaller ones.
Thinking About Control Planes
Enterprise IT implies control -- of resources, of service delivery, of security and compliance. That's one of the key aspects that separates a private cloud model from more generic public cloud models.
Perhaps the best way to think of these functions is as "control planes" that run all through the architecture -- from user experience all the way down to underlying hardware -- and give enterprise IT the end-to-end visibility and control they need -- regardless of whether the resource or service is provided internally or externally.
Our legacy model is to establish control in specific physical domains -- hardware, application or security perimeters.
Our transitional model appears to be trying to do the same sort of thing, but with virtual entities.
The legacy-free model clearly appears to be heading towards a network-centric model that expresses everything as a network service, regardless of whether it's sourced inside or outside the firewall.
I'm hoping our thinking will morph from "management" to "orchestration". If the majority of what IT is doing is composing and expressing IT services for users, and relying more on outside service providers over time, we'll care more about the service being delivered, rather than the specific device or component that's doing the delivery.
It's also pretty clear that the traditional disciplines of resource management and security/compliance are converging -- both need a common, real-time view of the enterprise (whether the components are inside the firewall or not), both need a common view on policy and compliance, and so on.
If you think about it, even large components of these "control" disciplines can be consumed as external services if desired.
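A minimal sketch of what that convergence implies -- one policy evaluation applied uniformly, with the internal/external distinction carried along as just another attribute (all the names here are illustrative):

```python
# One policy check, applied the same way to internal and external services.
def compliant(service, policy):
    if policy["require_encryption"] and not service["encrypted"]:
        return False
    return service["region"] in policy["allowed_regions"]

policy = {"require_encryption": True, "allowed_regions": ["us", "eu"]}
services = [
    {"name": "internal-array", "encrypted": True,  "region": "us", "source": "internal"},
    {"name": "external-crm",   "encrypted": False, "region": "eu", "source": "external"},
]
for svc in services:
    print(svc["name"], "->", "in policy" if compliant(svc, policy) else "out of policy")
```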
Where Does That Leave Us?
Hopefully, with a relatively simple framework for having an end-to-end next-gen enterprise IT architecture discussion without wandering all over the place:
- Physical stuff at the foundation: CPUs, memory, disks, network ports, etc.
- Five conceptual layers of abstraction and services: interconnection, information, compute, applications and user experience
- Two converging conceptual control planes: service delivery and security/compliance/governance.
- Two consumption models at any layer: internal or external
- Three general buckets of abstractions and services: legacy, transitional and legacy-free.
So, Why Am I Taking All The Time To Lay This Out?
Glad you asked. Many of us are getting into these end-to-end discussions with forward-leaning enterprises and service providers. They're hungry for a "big picture" that ties multiple themes together. Maybe this one isn't perfect, but it's a step in the right direction.
I'm also being called upon to rationalize EMC's multiple R&D and M&A themes -- past, present and future. People are asking for the "big picture". Well, here's a decent version of it for you to go contemplate.
More and more of the discussion needs to revolve around next-generation operational and consumption models. That's hard to do in any detail unless you can paint a decent backdrop of what a next-gen enterprise IT architecture might look like.
But, more importantly, it helps us think about enterprise IT as it always should have been thought of -- less about specific technologies and products, and more around useful abstractions and services.
Regardless of where they come from.