Almost four years ago, I wrote a post that speculated on the emergence of a very particular type of cloud -- a private cloud. Fast forward, and you'll find private clouds everywhere -- it's certainly not a new or unfamiliar concept any more.
With this post, I'd like to attempt to introduce a new cloud archetype into the discussion -- the dispersed cloud.
While the concept might seem somewhat strange and unfamiliar, I believe it won't be too long before we simply accept dispersed clouds as another integral part of the broader computing landscape.
Like the private cloud before it, the emergence of dispersed clouds shouldn't be a debatable matter -- the required forces are already visibly at work, and I believe the eventual outcome is simply a matter of time.
The "Big Lump" Cloud Model
The vast majority of clouds today are implemented as what I'd describe as "big lumps": massive, centralized data centers with hardened infrastructure -- the bigger, the better -- at least from an efficiency perspective.
In one sense, they're not all that new -- mostly linear extensions of the more traditional mainframe data centers that came before.
But the world they serve is changing -- and fast. And, once you think about it, the limitations of this centralized approach are easy to spot.
The Rise Of The Machines
The first wave of the internet was mostly about human beings interacting with each other and with applications.
We send mail, we run our favorite apps, we upload a bazillion pictures on Instagram, we chatter about meaningless things on Twitter, and so on.
If you think about each of us as human nodes in a network, there's some sort of natural boundary on how much information we can generate and consume as individuals.
Over time, we'll certainly demand richer forms of information (3D video, advanced analytics, etc.) but the limiting factor remains our own capacity to generate and consume it.
Now, replace those human nodes with a far larger number of automated ones: surveillance cameras, building sensor networks, traffic flow monitors, smart cars, or something similar.
First, you can potentially have many, many more smart nodes in the world than human beings -- for example, how many different computing devices do you use each day? Second, each automated node is capable of generating far more raw information than we as humans can create -- no matter how much HD video you shoot -- and do so 24 hours a day, 7 days a week.
And, at scale, we're talking not simply billions of nodes, but perhaps hundreds of billions or even trillions -- each generating and processing enormous amounts of data.
The Network Becomes The Limitation
At some point, it's no longer reasonable to assume that all that data gets shipped to some centralized location, processed, and acted upon -- with the resulting decisions forwarded back to the node to carry out.
Even if network costs were near-zero (which they're most certainly not), you still have bandwidth, latency and availability issues to consider.
Put differently, there are going to be hard and fast limitations on how much data you can shovel from the edge to the core -- and back again -- quickly enough to make a useful decision.
In this emerging world of billions of smart nodes and sensors, the "big lump" cloud model appears to break, and break badly.
Ideally, you'd move more processing -- and the capability for autonomous action -- to the edge, whether done by individual nodes, or cohorts of local nodes working in concert together. Work could still be done even if the centralized services weren't available for some reason.
Consider The Following
Let's imagine a smart traffic grid around a congested urban corridor. You've got maybe thousands of smart sensors, each spewing real-time images or passive vehicle information. All of that rich data has to be captured, and acted upon in near real-time: traffic signals reprogrammed, congestion rerouted, optimization policies re-evaluated, etc.
In a perfect world, you'd be using a "cohort" model: individual nodes would share what they're seeing, take note of what other nodes are seeing, and make largely autonomous decisions on how to optimize traffic flow locally but with global policy considerations.
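To make the cohort idea concrete, here's a minimal Python sketch of that model. All of the names (`TrafficNode`, `share_with`, `decide_green_seconds`) and the signal-timing formula are hypothetical illustrations, not any real traffic-control API: each node gossips its local reading to adjacent nodes, then makes an autonomous decision that's bounded -- but not dictated -- by a global policy.

```python
from dataclasses import dataclass, field

@dataclass
class TrafficNode:
    """One smart sensor/signal controller at an intersection (hypothetical)."""
    node_id: str
    local_congestion: float = 0.0            # 0.0 (clear) .. 1.0 (gridlock)
    neighbor_reports: dict = field(default_factory=dict)

    def share_with(self, neighbor: "TrafficNode") -> None:
        # Gossip: tell an adjacent cohort member what we're seeing right now.
        neighbor.neighbor_reports[self.node_id] = self.local_congestion

    def decide_green_seconds(self, policy_max: int = 90) -> int:
        # Autonomous local decision, weighted by what the cohort reports.
        readings = [self.local_congestion, *self.neighbor_reports.values()]
        pressure = sum(readings) / len(readings)
        # Global policy only caps the range; the decision itself is local.
        return min(policy_max, int(30 + 60 * pressure))

a = TrafficNode("corner-1", local_congestion=0.9)
b = TrafficNode("corner-2", local_congestion=0.2)
a.share_with(b)
b.share_with(a)
print(a.decide_green_seconds())  # local decision, informed by the cohort
```

Note that no central service appears anywhere in the decision path -- the cohort keeps optimizing traffic flow even if the core is unreachable.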
Replace those smart nodes with human traffic control officers communicating via radio, and you've got a decent analogy.
Extending the model further, let's say you were on the lookout for specific vehicles. You could push a template profile down to the individual nodes, and they could do the scanning for you without having to resort to a centralized approach. Back to our human police officer model, that's the familiar APB -- All Points Bulletin.
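The APB pattern can be sketched the same way. In this hypothetical example, a lightweight central service pushes a small match profile down to every node; each node scans its own observations locally and only the hits ever travel back to the core (the profile fields and data shapes here are made up for illustration):

```python
import re

# Hypothetical "APB" profile pushed from a lightweight central service
# down to every edge node.
apb_profile = {"plate_pattern": r"^7ABC\d{3}$", "color": "blue"}

def scan_locally(observations, profile):
    """Run the pushed profile against this node's own observations."""
    pattern = re.compile(profile["plate_pattern"])
    return [
        obs for obs in observations
        if pattern.match(obs["plate"]) and obs["color"] == profile["color"]
    ]

# One node's local camera readings (illustrative data).
local_observations = [
    {"plate": "7ABC123", "color": "blue"},
    {"plate": "4XYZ987", "color": "red"},
    {"plate": "7ABC999", "color": "blue"},
]

hits = scan_locally(local_observations, apb_profile)
print(len(hits))  # only the matches travel back to the core
```

The bandwidth math is the point: the node ships a handful of matches upstream instead of a continuous stream of everything its camera sees.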
So now let's impose the Big Lump Cloud model on our human traffic officers.
First, they now have to report back *everything* they see to central command, whether it's interesting or not. They can't act autonomously unless they receive direct orders from the core. They can't communicate effectively and cooperatively with adjacent police officers to smooth traffic flow, or track specific vehicles.
And, of course, if central command goes off-line for any reason, traffic grinds to a complete snarl.
Ideally, you'd empower and enable your traffic officers in the field to work autonomously and cooperatively around common goals, with a lightweight coordination model.
That's the ethos of a dispersed cloud: substantial processing and a high degree of autonomy at the edge, cooperation between available cohorts, and lightweight centralized services that may or may not be available.
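That "may or may not be available" property has a simple shape in code. A minimal sketch, assuming a hypothetical `central_policy()` service that can time out: the node prefers centrally published policy when it can get it, but never blocks on the core, falling back to a conservative local default instead.

```python
import random

def central_policy():
    """Hypothetical central coordination service; may be unreachable
    (failure simulated here with a coin flip)."""
    if random.random() < 0.5:
        raise TimeoutError("central command unreachable")
    return {"max_green_seconds": 90}

DEFAULT_POLICY = {"max_green_seconds": 60}  # conservative local fallback

def current_policy():
    # Prefer the centrally published policy, but never block on it:
    # if the core is down, keep working with a local default.
    try:
        return central_policy()
    except TimeoutError:
        return DEFAULT_POLICY

policy = current_policy()
print(policy["max_green_seconds"])  # 90 when connected, 60 when cut off
```

Either way, the node keeps making decisions -- the centralized service improves the outcome when present, but its absence never stops the work.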
Once you go looking for dispersed cloud applications, you tend to find them in all sorts of interesting places, usually involving some aspect of geography (naturally!).
Anything to do with cars and road transport.
Aircraft and air traffic control.
Surveillance, physical security and defense applications.
Smart houses and buildings.
Advanced supply chains.
Next-gen location-aware mobile apps.
And we've only just begun.
It's the internet of things; the "machine internet" -- untold numbers of smart nodes working intelligently and cooperatively -- just like people do.
A New Computing Model Will Be Needed
We, as IT professionals, tend to view today's highly centralized clouds as potentially better ways of doing what we've always done with IT -- running applications on behalf of users -- just more efficiently, more elastically, more dynamically. And, to be sure, there's plenty of goodness there.
But when we start to think of the clouds we'll need to build to serve the needs of many billions of intelligent, autonomous nodes -- it will be increasingly apparent that the highly-centralized big lump cloud model isn't ideal: too expensive, too inflexible, too slow, too fragile.
The raw information and the required actions will live at the edge. The storage and processing will live at the edge. Cooperative and adjacent cohorts will live at the edge. In essence, the ideal dispersed cloud will be composed of aggregated and coordinated edge elements.
And I predict it won't be too long before we see all sorts of these newer dispersed clouds sprouting up.