One of the tidbits out of Cisco last week was a relatively quiet announcement around their interest in what they've described as "fog computing": think dispersed nodes, internet-of-things, etc.
Leaving commentary on their choice of names aside, I found myself drawn in, reviewing all the material in some detail.
Why do I find this so interesting?
The move from centralized to dispersed architectures is one of those meta-topics that is on many minds, including mine.
So much so that I did a fun piece about a year ago ("The Emergence Of Dispersed Clouds") -- one of those way-out-there speculative jaunts where I felt something very new had to emerge to fill a clear gap in the technology landscape.
Not to namecheck with buzzwords, but think internet of things, real-time analytics, intelligent sensors everywhere ...
I guess Cisco sees the same mega-trends others see. That’s good.
But as I went through their materials, it was very clear they saw the world differently than I did.
I suppose that’s good as well.
We are entering a world where there is arguably more and cheaper compute (and storage and potential bandwidth) at the edge of networks vs. the core. That's hard for a traditional guy like me to wrap my head around, but there you have it.
Start with considering several billion mobile devices. Now add in a burgeoning population of fixed-location smart sensors: cameras, thermostats, etc. And, finally, add in the native sensors in things that move: cars, aircraft, etc.
All of a sudden, you've got many tens of billions of semi-autonomous compute/storage/network devices sitting on the edge of the network. I haven't run the numbers, but I wouldn't be surprised if the resources at the edge will greatly outweigh those sitting in data centers at the core.
Not to point out the obvious, but in this model all the relevant data will be gathered at the edge. Very often, actions and decisions will need to be made and implemented at the very same edge.
The current model of bringing everything back to a centralized cloud (and driving results back out) will start to look inefficient, unreliable and inflexible before long.
Work will increasingly need to be pushed out to the edge, to a collection of smart nodes working together. And some big questions inevitably result.
How will these smart nodes discover each other? Communicate and collaborate securely and reliably? Work together to tackle workloads that once only ran in a data center? And -- once so enabled -- potentially enable entirely new classes of applications and workloads?
Today, the model is largely hub-and-spoke. Data is captured at the edge, usually sent raw to a centralized computing location over a network, where processing is done. Any instructions or actions are then sent from the core back to the edge.
What's Wrong With This Picture?
For starters -- it's the dependence on a centralized network. Sure, things are fine as long as you're not asking too much -- moderate amounts of data being moved, willingness to tolerate occasional latency or outages, etc.
But once you start cranking up the requirements (e.g. making near-realtime decisions based on video streams), it's pretty clear you'd go looking for a different model.
What you'd like is a model that does more processing and decision-making at the edge. One where centralized environments provide limited support services, rather than needing to be reliably reachable for anything to function.
Enter the rationale for a new kind of cloud-like model.
Cisco's View Of Fog
Cisco has dubbed this new paradigm "fog computing", which certainly gets points for cleverness.
Not surprisingly, Cisco sees this mostly as a network issue, which is not entirely unjustified.
They announced the first version of IOx -- basically a software-only version of Cisco's IOS bundled with a Linux distribution, presumably running on Cisco router hardware -- with the idea that new "edge" applications would be built on this platform.
Kudos to them for taking steps in a positive direction. That being said, I found myself arguing with a number of their embedded assumptions.
Painting A Broader Picture?
One thing that bothers me about the Cisco model is that there is no clear notion of cohorts: edge nodes that can cooperate in an ad-hoc manner without relying on a centralized server, network or cloud.
Think smart cars as an example. Or all the smartphones in a single location.
Instead, Cisco appears to be talking about doing application processing using an edge controller (vs. the core), which is a bit disappointing. Among other limitations, this only serves to replicate our existing hub-and-spoke model but at a smaller scale.
It doesn’t fully embrace the notion of just how powerful edge devices themselves have become.
Although there are plenty of use cases where one can certainly rationalize a dedicated edge controller (e.g. smart video surveillance, smart traffic lights, etc.), it just seems so — limiting!
A related discussion comes from the positioning of IOx as a platform for building services. To my way of thinking, the platform itself needs to provide fairly robust services out of the box to enable dispersed applications to be more readily built.
Core services like discovering and authenticating potentially cooperating nodes. Persisting data across those nodes. Dividing up and coordinating compute tasks among nodes, and aggregating their results. Managing communications back to centralized clouds as needed, and so on.
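As a sketch of what those out-of-the-box services might look like, a "cohort" abstraction could expose discovery and work-distribution primitives along these lines. All the names here are hypothetical and purely illustrative, assuming nothing about the real IOx API:

```python
from dataclasses import dataclass

# Hypothetical sketch of platform services a dispersed "fog" runtime could
# provide out of the box. Names are illustrative only -- not a real IOx API.

@dataclass
class Node:
    node_id: str
    capabilities: set

class Cohort:
    """An ad-hoc group of cooperating edge nodes."""

    def __init__(self):
        self.nodes = {}

    def discover(self, node):
        # Real discovery would run over a local protocol (mDNS, Bluetooth
        # beacons, ...); here we simply register the node.
        self.nodes[node.node_id] = node

    def map_task(self, chunks):
        # Divide work across known nodes round-robin; aggregating the
        # per-node results would be the inverse step.
        node_ids = sorted(self.nodes)
        plan = {nid: [] for nid in node_ids}
        for i, chunk in enumerate(chunks):
            plan[node_ids[i % len(node_ids)]].append(chunk)
        return plan
```

The point isn't the specifics; it's that discovery, authentication, persistence and task coordination should come with the platform, not be reinvented by every application.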
The notion of a dedicated, master edge node — while intellectually convenient — isn’t really a requirement any more, if we so choose. It’s more a matter of distributed, cooperating software.
Expanding The Use Cases
When it comes to use cases for dispersed clouds (or perhaps an expanded model of fog computing?), I think we’re not yet being imaginative enough.
Many of the examples tend towards public sector applications: things like video surveillance, traffic management, defense, etc. All good and very valuable.
Other large sensor networks that require edge-based collection, processing and decisioning aren’t hard to imagine: smart energy meters, health care, transportation networks, and much more.
But I think notions of dispersed clouds could also make a considerable impact on a personal level.
The idea of smart vehicles that communicate and collaborate has been around for a while — indeed, in the US the FCC is looking at the issue.
Personally, I’m looking forward to a world where the car is potentially smarter than some of the drivers I encounter.
But I think this dispersed model can result in even more interesting personal use cases. For example, in my family of five, we all carry reasonably powerful smartphones. There are laptops, there are tablets, there are smart thingies everywhere: media devices, game consoles, maybe smart controllers before long. We are all frequently in close proximity in our house.
What could we do with all of that?
For starters, wouldn’t it be great to pool all that aggregated storage capacity, and have sort of a distributed pool of reliable data across everything? Sync when in range, autonomous when not, etc. No centralized server, just peer nodes communicating and sharing in proximity — perhaps with external access as needed.
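A minimal sketch of that sync-when-in-range idea, assuming a naive last-writer-wins merge (a real system would want vector clocks or CRDTs to resolve conflicts properly):

```python
# Toy replica of a shared household data pool: each device keeps a full
# copy and merges with whatever peer is in range. Last write wins.

class PeerStore:
    def __init__(self):
        self.data = {}  # key -> (timestamp, value)

    def put(self, key, value, ts):
        self.data[key] = (ts, value)

    def get(self, key):
        entry = self.data.get(key)
        return entry[1] if entry else None

    def sync(self, peer):
        # Merge both replicas; for each key, both sides keep the newest write.
        for key in set(self.data) | set(peer.data):
            mine, theirs = self.data.get(key), peer.data.get(key)
            newest = max((e for e in (mine, theirs) if e), key=lambda e: e[0])
            self.data[key] = peer.data[key] = newest
```

Run `sync` whenever two devices see each other on the local network, and every device eventually converges on the same pool — no server in the loop.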
Or how about something more pragmatic, like an iPhone update? Imagine you’re in a conference room — how many iPhones are in the room? If one were already updated, it could offer to update the others over Bluetooth as a background service.
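As a toy sketch of the planning step (purely illustrative — there is no such peer-update API), the idea reduces to picking a local seeder instead of having everyone fetch from the core:

```python
def plan_update(devices, latest_version):
    """Peer-assisted update planner (hypothetical): if any device in the
    room already has the latest build, it serves the others locally;
    otherwise everyone must fetch from the central server."""
    seeders = [d for d, v in devices.items() if v == latest_version]
    needy = sorted(d for d, v in devices.items() if v != latest_version)
    if not seeders:
        return None, needy  # no local seeder; fall back to the core
    return seeders[0], needy
```

One device pulls the update once over the wide-area network; the rest get it over the local link, for free.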
Do something nice for your co-workers :)
Consider big public events: last weekend, we were treated to the annual spectacle of the Super Bowl — and incoming video streams were blocked due to lack of bandwidth.
Instead, imagine a peer streaming application that used a local network mesh to pass content around the stadium. Not something that the wireless carriers would get excited about, but interesting nonetheless.
Or a content sharing application between you and your friends to create a shared repository of pictures and videos from the event across your devices — with no need for upload/post/download.
Same basic notion: smart software that uses ad-hoc mesh networking to discover and cooperate with participating heterogeneous nodes.
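The mesh-forwarding core of both examples can be sketched as a simple flood: each node relays content to its in-range peers exactly once. Real mesh protocols add routing, deduplication and backpressure, but the shape is the same:

```python
def flood(origin, links, seen=None):
    """Toy mesh relay: starting at `origin`, forward content to each
    in-range peer exactly once. `links` maps node -> set of reachable
    peers. Returns the set of nodes the content reached."""
    seen = seen if seen is not None else set()
    if origin in seen:
        return seen  # already relayed here; stop
    seen.add(origin)
    for peer in links.get(origin, ()):
        flood(peer, links, seen)
    return seen
```

No tower, no server: content hops device to device as long as the local links hold up.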
No architectural dependence on centralized network infrastructure, servers or clouds.
Clearly, I’m way off of the direction that Cisco is thinking :)
Getting A Bit Philosophical Here
For every thing, there is often the anti-thing, a sort of cosmic yin and yang.
In tech, we have proprietary software and open source. Stateful and stateless computing models. Fat desktops and thin clients. The list goes on.
Today, cloud in its current form is certainly a “thing”.
What will its anti-thing look like?