The next mountain to climb is clearly in sight: it's the software-defined data center, or SDDC.
Many might justifiably see SDDC as a linear extension of familiar server virtualization concepts, now being applied to things like storage and networks.
While that's quite true, there's a much smaller group that sees something far more impactful -- a complete rethinking of cloud-scale IT infrastructure: both software and hardware.
I have recently converted to this second, smaller group -- sorry to say. As a result, I have become much more extreme than usual about the potential of what's in play here -- nothing less than yet another redrawing of the industry: the vendors who supply, and the cloud-scale customers who consume.
From a vendor perspective, it really doesn't matter where you play in the stack -- there's nowhere to hide. SDDC concepts are going to change your world.
If you didn't like the previous disruptive waves, you're really going to hate this next one.
Yes, much of what we think of as "hardware" today becomes dynamic, virtual services under software control. But, in this new world, we start thinking about hardware differently as well -- especially at scale.
A Starting Point
How to think about SDDC, especially at scale?
One approach would be to make the case for central importance of the new control point: the network fabric. As scale increases, so does the number of entities that need to communicate effectively, as well as the need to orchestrate their relationships.
At some point, the individual entities themselves (servers, storage, apps, ports, etc.) start to take a back seat to the communication and control planes that orchestrate them all.
Interconnecting 500 entities that provide IT services is moderately interesting. Interconnecting 5,000 or 50,000 gets more interesting. Scale to 500,000 or perhaps 5 million moving pieces in a multiple data center architecture -- and now you're paying very close attention to the network fabric: costs, scalability, agility, extensibility, manageability, etc.
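To make those combinatorics concrete, here's a back-of-envelope sketch (my illustration, not a formula from any vendor) of how the number of potential pairwise communication paths grows with the number of entities -- quadratically, which is exactly why the fabric starts dominating the conversation:

```python
# Back-of-envelope: potential pairwise paths among n entities is n*(n-1)/2.
# The point: entities grow linearly, but the interconnect problem does not.
def pairwise_paths(n):
    return n * (n - 1) // 2

for n in (500, 5_000, 50_000, 500_000):
    print(f"{n:>7} entities -> {pairwise_paths(n):>15,} potential paths")
```

Going from 500 to 500,000 entities multiplies the entity count by a thousand -- but multiplies the potential paths by roughly a million.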
Scott McNealy was presciently correct -- in this world, the network *is* the computer. Hence all of the interest in SDN (software-defined networking) and things like OpenFlow.
In this emergent world, familiar entities like storage services, compute services, application services, security services, etc. -- while interesting in their own right -- become simple "users" of the software-defined network, or UNIs (user network interfaces) in the parlance. Like any network service, these "users" are dynamically invoked, dynamically scaled up and down, and interconnected dynamically.
The architectural focus quickly shifts to the network controller as the prime orchestrator of IT service delivery. New classes of management applications create powerful feedback loops of pattern detection, decisions and re-orchestration of the resource pools and how they're configured.
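That feedback loop -- observe patterns, decide, re-orchestrate -- can be sketched in a few lines. This is a toy illustration of the control-loop idea, not any real SDN controller's API; all of the names (`Service`, `Controller`, the load thresholds) are hypothetical:

```python
# Toy sketch of an orchestration feedback loop. All names and thresholds
# are invented for illustration -- not a real controller API.
from dataclasses import dataclass

@dataclass
class Service:
    """A 'user' of the software-defined network (a UNI, in the parlance)."""
    name: str
    instances: int = 1

class Controller:
    """Prime orchestrator: detect patterns, decide, re-orchestrate."""
    def __init__(self, scale_up_at=0.8, scale_down_at=0.2):
        self.scale_up_at = scale_up_at
        self.scale_down_at = scale_down_at
        self.services = {}

    def register(self, service):
        self.services[service.name] = service

    def reconcile(self, observed_load):
        """One pass of the loop: per-service load in [0, 1] drives
        dynamic scale-up or scale-down of the resource pool."""
        for name, load in observed_load.items():
            svc = self.services[name]
            if load > self.scale_up_at:
                svc.instances += 1           # scale up dynamically
            elif load < self.scale_down_at and svc.instances > 1:
                svc.instances -= 1           # scale down dynamically

ctrl = Controller()
ctrl.register(Service("storage"))
ctrl.register(Service("compute", instances=3))
ctrl.reconcile({"storage": 0.95, "compute": 0.05})
print(ctrl.services["storage"].instances)  # scaled up to 2
print(ctrl.services["compute"].instances)  # scaled down to 2
```

A real controller would of course watch far richer telemetry than a single load number -- but the shape of the loop is the same.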
But what about hardware in this new world? Doesn't it all become just a commodity?
Well, yes and no ...
Hardware In An Era Of SDDC
Table stakes for hardware in any cloud model is the uniform use of merchant silicon: compute, network, flash storage, etc. Any appetite for bespoke silicon like FPGAs and ASICs is seriously diminished -- a trend which has been well in play for quite a while.
But all of those nice merchant chips need to be packaged in such a way that they're easy to consume, performant, efficient to operate and maintain, can live in a data center environmental envelope, and so on.
We can't lose sight that the scale has changed: instead of primarily thinking at the server/node level, we need to think clearly at the rack level, the rack cluster level, and so on.
Take our familiar friend converged infrastructure, using the popular VCE Vblocks as an example.
Look inside one, and you'll mostly find merchant silicon throughout. But a node designated as a "storage" node can't be used for anything else. Nor can a compute node, nor a network node. Using the best-available current technology, we're still missing an entire degree of resource pooling and dynamism as a result.
The success of converged and pre-integrated approaches has not gone unnoticed by the vendor community.
One group of vendors is basically attempting to apply the VCE blueprint, only using their own in-house technology: HP, IBM, Dell, Hitachi, etc. A group of smaller startups is taking the concept downward in scale: combined server/storage/network "bricks" that start small and scale a bit. A potentially interesting play for smaller shops, to be sure.
But what happens when we start thinking "convergence" at cloud scale?
Hardware Thinking At Cloud Scale
At cloud scale, seemingly small optimizations at the node/rack level can have a big impact.
For example, this node here is a "hot" one (lots of compute and memory, on-board flash, lots of bandwidth), that one is a "cold" one (not much compute, memory, or network, but a lot of low-cost spinning storage), and another is perhaps a "traffic" node or a "controller" node -- with many variations in between.
Sure, all these nodes can be built from largely the same parts bin; they're just packaged differently for the use case(s) at hand. Of course, each node can support software functionality in addition to what it was originally envisioned for, as all functionality is essentially packaged in virtual machines that can potentially run anywhere.
Bits of storage code running on compute servers, compute code running in network nodes, and so on.
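The "same parts bin, different packaging" idea can be made concrete with a tiny sketch. The component names and counts below are entirely made up for illustration -- the point is simply that every node personality draws from one shared set of merchant parts:

```python
# Hypothetical illustration: node "personalities" assembled from a single
# shared parts bin of merchant silicon. All names and counts are invented.
PARTS_BIN = {"cpu", "dram", "flash", "hdd", "nic"}

NODE_PROFILES = {
    # "hot": lots of compute, memory, on-board flash, and bandwidth
    "hot":     {"cpu": 4, "dram": 16, "flash": 8, "hdd": 0,  "nic": 4},
    # "cold": little compute/memory, lots of low-cost spinning storage
    "cold":    {"cpu": 1, "dram": 2,  "flash": 0, "hdd": 24, "nic": 1},
    # "traffic": mostly network ports
    "traffic": {"cpu": 2, "dram": 4,  "flash": 0, "hdd": 0,  "nic": 8},
}

def uses_only_merchant_parts(profile):
    """Every node type draws exclusively from the common parts bin."""
    return set(profile) <= PARTS_BIN

print(all(uses_only_merchant_parts(p) for p in NODE_PROFILES.values()))
```

Since the software functionality rides in virtual machines, any of these nodes can, in principle, host any workload -- the packaging just biases what each does best.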
I should also point out that there appears to be the beginning of a change in thinking among the cloud-scale operators I've met -- they appear to be getting tired of being in the hardware systems integration business.
This crowd originally broke with mainstream IT products years ago for all the right reasons: they couldn't get what they needed, so they started doing integration for themselves at a component and subsystem level. Good for them.
While capex is certainly reduced with their model, there's an inevitable opex tax to be paid with this approach. You're now in the business of sourcing components, qualifying them, integrating them, supporting them, managing inventories, continually adapting your software stack, etc. Human beings are invariably required, and that's generally anathema to the cloud model.
A pre-integrated approach to cloud-scale infrastructure just might be appealing to this growing crowd -- as long as it was created with their unique needs in mind.
Does This Stuff Really Interest You?
If you're a CTO or data center architect type -- and you have a deep, passionate interest in these sorts of infrastructure topics -- we'd really appreciate a short chat with you. We've been talking to folks, and we're looking for more input right now. Heck, we might even offer you a job :)
If this sounds like you, please drop me a note at chuck dot hollis at emc dot com ...