Of all the "software-defined" categories, software-defined networking is now garnering the lion's share of industry attention. A simple scan of industry publications and associated vendor positioning will amply demonstrate this.
A more tangible example came at the recent VMworld: VMware announced NSX, and the related sessions were apparently oversubscribed 3x-5x. It seems that everyone wanted to learn about the new capabilities -- even though VMworld is not a networking event.
From my inevitably storage-centric perspective, I couldn't be more excited about the potential. SDN concepts applied to storage and availability could remake how we think about architecture, services and operations.
Since I've never been shy about speculating on the future, let me share what I think we may be looking forward to down the road.
A Brief History Of Storage And Networks
When I joined EMC in 1994, we were just beginning to convince people to use external shared storage vs. storage that was dedicated per-host. That, of course, implied a network of some sort.
The mainframe crowd was slowly starting to move from bus-and-tag cables to the newer ESCON. We UNIX types were using SCSI, so some of the first approaches involved multiple connections: 16 to 32 ugly, thick SCSI cables between array and host.
Not even a network by most definitions.
In the late 1990s, McData had been building ESCON directors to improve connectivity and manageability in larger mainframe environments, and had started to work with the new Fibre Channel (FC) standard. EMC acquired them, and the first Fibre Channel SANs were born.
Also from this era came the idea of using TCP/IP over Ethernet via NAS protocols, and filers slowly became a popular alternative to block-mode protocols. Several years later, iSCSI came along as a lower-cost block alternative over Ethernet.
Storage wars are never over: a decade later, you'll still find people debating the merits of one approach over another.
When remote replication first became popular, it was very demanding of its transport: guaranteed latency and bandwidth were the norm, and hence very expensive. It didn't share and play well with others.
From a storage networking perspective, not much has really changed over the last five years. We still use a lot of FC, there's plenty of NAS/CIFS/iSCSI out there, FCoE is slowly finding its way here and there, and so on.
The Basics Of SDN
I am most certainly not a networking guy, and the description that follows here offers unambiguous proof of this assertion.
The core idea is the separation of the control plane from the data plane: switches and ports simply forward traffic, while the decisions about how that traffic flows are made centrally, in software. The control plane is now thought of in terms of management applications that coordinate underlying resources to provide various network services.
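To make that split concrete, here's a minimal sketch in Python. Everything in it -- the class names, the rule format, the port labels -- is invented for illustration, not taken from any real controller or switch API.

    # Hypothetical sketch of the control/data plane split.
    class FlowRule:
        def __init__(self, match, action, priority=100):
            self.match = match        # e.g. {"dst_ip": "10.0.0.5"}
            self.action = action      # e.g. "forward:port2" or "drop"
            self.priority = priority

    class Switch:
        """Data plane: forwards according to installed rules, decides nothing."""
        def __init__(self, name):
            self.name = name
            self.rules = []

        def install(self, rule):
            self.rules.append(rule)
            self.rules.sort(key=lambda r: -r.priority)  # highest priority first

    class Controller:
        """Control plane: a management application coordinating the switches."""
        def __init__(self, switches):
            self.switches = switches

        def provision_path(self, dst_ip, out_port):
            rule = FlowRule({"dst_ip": dst_ip}, "forward:" + out_port)
            for sw in self.switches:
                sw.install(rule)

    # Policy is decided once, centrally, and pushed to every device.
    ctl = Controller([Switch("leaf-1"), Switch("leaf-2")])
    ctl.provision_path("10.0.0.5", "port2")

The point isn't the toy code itself: it's that the intelligence lives in one coordinating application, not scattered across individually-configured boxes.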
While that's exciting enough to any networking professional, it becomes especially interesting to storage architects working at moderate-to-large scale. Once you get beyond a dozen or more storage arrays -- or do anything with remote replication -- you inevitably spend a lot of time working with various networks.
If you're in this camp, you'll probably agree with me: today's storage networks are brittle, inflexible things: statically provisioned, usually monitored in a silo, and difficult to change or flex. I think that software-defined storage networks have the potential to change all of that.
The Storage Networking Landscape
Networks are everywhere in larger storage environments. First and foremost, we have the fabric between host servers and storage arrays. If we have multiple data centers, we also have WAN links as part of the architecture.
Digging deeper, we have a growing class of node-to-node interconnects, especially in the newer scale-out architectures. And let's not forget that storage administration is usually done on a secured and isolated network fabric.
Now, let's re-imagine all of this as a 10Gb (or faster) fabric. Everything is under software control. As long as we're fantasizing, let's imagine control planes for networking, storage and compute converging in a single administration nexus.
How would this world be different?
Imagining A Software-Defined Storage Network
Let's start with basic provisioning. As new hosts come on line, data services (block, file, object) can be soft-provisioned, along with latency and bandwidth requirements and the desired pathing redundancy.
As certain hosts demand more or less from the storage fabric, network resources can be dynamically re-allocated without disturbing the connection between application and storage. Redundancy can be added or removed.
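Here's a hypothetical sketch of what such a provisioning call might look like. The request object, its fields and the provision() function are all invented for illustration -- no real product exposes exactly this API.

    # Hypothetical API for soft-provisioning storage connectivity.
    from dataclasses import dataclass

    @dataclass
    class StorageFabricRequest:
        host: str
        service: str              # "block" | "file" | "object"
        max_latency_ms: float
        min_bandwidth_gbps: float
        redundant_paths: int      # desired pathing redundancy

    def provision(req: StorageFabricRequest):
        """Stand-in for a controller call that carves out fabric resources."""
        print(f"{req.host}: {req.service}, <= {req.max_latency_ms} ms, "
              f">= {req.min_bandwidth_gbps} Gb/s, {req.redundant_paths} paths")

    # Initial provisioning as a new host comes on line ...
    req = StorageFabricRequest("esx-42", "block", 2.0, 4.0, 2)
    provision(req)

    # ... later, flex bandwidth up and add a path, without re-cabling anything.
    req.min_bandwidth_gbps = 8.0
    req.redundant_paths = 3
    provision(req)

Note that the second call changes only the declared requirements; the connection between application and storage is never disturbed.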
You get the picture -- sort of a SAN equivalent of what we do today with compute using vSphere.
Now, let's look at remote replication -- a particularly difficult part of storage networking. Once again, remote bandwidth can be soft-provisioned, along with performance and availability requirements. Redundancy can be added or removed. Network resources can be flexed up or down depending on circumstances, all non-disruptively.
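As one hedged sketch of how that flexing might be expressed: a controller could scale the WAN allocation between a floor and a ceiling as replication lag approaches the recovery point objective (RPO). The function and its parameters below are purely illustrative.

    # Hypothetical policy: claim more WAN bandwidth as replication falls behind.
    def replication_bandwidth_gbps(lag_seconds, rpo_seconds,
                                   floor=0.5, ceiling=10.0):
        """Scale the soft-provisioned link between a floor and a ceiling
        as replication lag approaches the RPO target."""
        pressure = min(lag_seconds / rpo_seconds, 1.0)
        return floor + pressure * (ceiling - floor)

    # With a 300-second RPO: the further behind we fall, the more we claim.
    for lag in (10, 150, 290):
        print(lag, "s behind ->",
              round(replication_bandwidth_gbps(lag, 300), 2), "Gb/s")

Contrast that with today's norm of statically reserving the worst-case bandwidth around the clock.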
Let's move to inter-node storage connections -- whether it be a scale-out architecture like Isilon's OneFS or some of the newer software-only distributed storage stacks like VSAN. Or, heck, just consider the connections between Hadoop nodes if you like.
The inter-node connections here have a unique characteristic: most of the time you don't need much from them, but when you do, you need a lot -- think of a node rebuild or a large data workload relocation.
The ability to dynamically flex the inter-node fabric -- and share it intelligently with other inter-node traffic -- would be a huge boon to administering larger storage environments, especially if it could be seamlessly orchestrated as an integral part of other tasks.
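Here's one way that orchestration might look -- a purely hypothetical controller that temporarily bursts the inter-node allocation for the duration of a rebuild, then releases it automatically:

    # Hypothetical sketch: flex the inter-node fabric only while a bursty
    # task (a rebuild, a relocation) is actually running.
    from contextlib import contextmanager

    class FabricController:
        """Stand-in for a converged control plane managing node-to-node links."""
        def __init__(self, baseline_gbps):
            self.baseline = baseline_gbps
            self.allocation = baseline_gbps

        @contextmanager
        def burst(self, gbps):
            self.allocation = gbps
            print(f"fabric flexed up to {gbps} Gb/s")
            try:
                yield
            finally:
                self.allocation = self.baseline
                print(f"fabric back to baseline {self.baseline} Gb/s")

    fabric = FabricController(baseline_gbps=1.0)
    with fabric.burst(9.0):
        print("rebuilding node-7 ...")   # the bursty task runs here

The design point is that the flex is scoped to the task: no administrator has to remember to give the bandwidth back.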
There's more: setting up secure multi-tenant environments -- storage, network, compute -- would be a breeze. With all traffic easily visible, pinpointing performance problems between domains gets that much easier. And imagine a common security and authentication mechanism for all traffic, regardless of type.
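As an illustrative sketch (again, with names invented for the example), a converged control plane might apply one tenant-scoped policy to every class of traffic:

    # Hypothetical tenant policies applied uniformly across all traffic types.
    TENANT_POLICIES = {
        "tenant-a": {"vlan": 101, "encrypt": True,  "auth": "cert"},
        "tenant-b": {"vlan": 102, "encrypt": False, "auth": "token"},
    }

    def admit(tenant, traffic_type):
        """One admission check for every class of traffic, regardless of type."""
        policy = TENANT_POLICIES.get(tenant)
        if policy is None:
            raise PermissionError(f"unknown tenant {tenant}")
        print(f"{tenant}/{traffic_type}: vlan {policy['vlan']}, "
              f"encrypt={policy['encrypt']}, auth={policy['auth']}")

    admit("tenant-a", "block")         # same policy path for storage traffic ...
    admit("tenant-a", "replication")   # ... and for replication traffic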
The mind boggles.
So, Where Are We?
Well, it's understandably early days for SDN, and even earlier days for software-defined storage networks. I can see very small glimmerings, for example, in the interaction between VSAN and the vSphere networking abstractions, but perhaps that's just me being optimistic :)
What concerns me is that it's not yet being discussed publicly by either vendors or large-scale users. Maybe I'm being naive here, but the way these speculative concepts become pragmatic products is through customer demand -- people demanding the same agility in networking and storage that they've come to expect from compute and cloud.
I can't offer a time frame for what I've described here, but I will assert that the key technology ingredients have fallen into place to make this a potential reality.
All we need are some vocal customers to stand up and start demanding a better storage world.