From time to time, various vendors attempt to combine the best of the two most popular storage access models in our industry: block (typically over FC, iSCSI or FCoE) and file (almost always over Ethernet/IP).
Historians of our industry can point to many brave but ultimately unsuccessful attempts to combine the inherent simplicity and ease-of-management of NFS with the performance and availability attributes of SAN.
Many of us have been tracking the progress of pNFS in this regard, and it looks like the time is nearing when customers can seriously consider enterprise-class implementations of this converged access model.
The big question on everyone's mind: will it permanently blur the lines between file and block access?
To Begin With
It's hard to argue with the utter simplicity of file systems as a way to organize and present storage. Everything has a name, the names are usually logical, they're hierarchical, they can be easily redirected and abstracted, and so on.
All of us computer types cut our teeth on one type of file system or another, so we intuitively seem to know our way around.
Block storage and SAN LUNs, however, take a bit more getting used to. The names are less logical, there's little or no hierarchy, familiar file system concepts and operations can't be used, and so on. But -- historically speaking -- they've tended to deliver superior response times, bandwidth, predictability and availability.
The Best of Both Worlds?
Any storage type familiar with both has always wondered -- would it be possible to combine the presentation and access model of NFS with the performance, predictability and availability characteristics of SAN?
Many companies have taken a stab at this at one time or another, including EMC. If you're familiar with MPFS, you'll know it was an extension to the NFS environment that used the NAS server mostly as a metadata server -- and then used SAN protocols to access the data itself directly.
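To make that division of labor concrete, here's a minimal sketch of the split-path idea -- metadata over the file protocol, bulk data over the block protocol. It's an illustration of the concept only, not EMC's actual MPFS implementation; every name in it is hypothetical.

```python
# Conceptual sketch of an MPFS-style split path -- illustration only,
# not EMC's actual implementation. All names here are hypothetical.

class MetadataService:
    """Plays the NAS server's role: resolves a file name to block extents."""
    def __init__(self, extent_map):
        # path -> list of (lun, offset, length) tuples
        self._extent_map = extent_map

    def resolve(self, path):
        # This round trip runs over the file protocol (NFS on IP);
        # only the map comes back, never the data itself.
        return self._extent_map[path]

class BlockStorage:
    """Plays the SAN's role: raw reads addressed by LUN and offset."""
    def __init__(self, luns):
        self._luns = luns  # lun id -> bytes

    def read(self, lun, offset, length):
        return self._luns[lun][offset:offset + length]

def split_path_read(mds, san, path):
    """Ask the NAS head where the data lives, then fetch it directly
    over the block path -- the NAS head never touches the payload."""
    extents = mds.resolve(path)
    return b"".join(san.read(*extent) for extent in extents)

# Example: a two-extent file striped across two LUNs.
mds = MetadataService({"/exports/db/file1": [(0, 0, 5), (1, 0, 6)]})
san = BlockStorage({0: b"hello", 1: b" world"})
assert split_path_read(mds, san, "/exports/db/file1") == b"hello world"
```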
The good news? It delivered the intended benefits -- we have great stories of customers building large enterprise "SANs" (using FC at the time) that were accessed and managed as NFS (using Ethernet).
Although the setup could produce eye-popping results, there were two inherent challenges.
First, at the time, you were looking at two separate I/O subsystems -- one for file protocols (Ethernet/IP) and a separate one for block protocols (FC SAN). That could get both expensive *and* complex.
The second challenge was that we were the only ones really doing it at the time. You had to use our server-based MPFS client, only use EMC NAS, etc. No real ability to freely mix and match. That's an obvious inhibitor.
And with the pending availability of pNFS (part of NFSv4.1), it looks like both challenges are being addressed.
What's Different Now?
First, it's no surprise that 10Gb Ethernet is both a technical and economic reality for many. Whether you choose to run IP protocols (e.g. NFS and/or iSCSI), FC protocols (e.g. FCoE) -- or some combination -- we're at a point where we're not looking at multiple flavors of physical storage connectivity to make the scheme work. It's Ethernet -- period.
Second, we've now got a workable industry standard that most every vendor seems interested in implementing. It's evolutionary, not revolutionary.
The core concept in pNFS is the notion of a "layout", which clients request from the metadata server. At an extremely high level, the layout is nothing more than a map from logical file objects to their physical locations. Layouts can be granted, updated, revoked, etc.
A client -- using the layout -- can now directly access storage objects without the metadata server (e.g. the NAS head in this example) getting involved. That's a big win, when you think about it.
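Here's a rough sketch of that lifecycle. LAYOUTGET, LAYOUTRETURN and CB_LAYOUTRECALL are the actual NFSv4.1 operation names; the Python classes modeling them are just an assumption-laden toy, not a real client or server.

```python
# Toy model of the pNFS layout lifecycle. LAYOUTGET, LAYOUTRETURN and
# CB_LAYOUTRECALL are real NFSv4.1 operations; everything else here is
# a hypothetical stand-in for illustration.

class Layout:
    """A grant from the metadata server: which data servers hold which
    byte ranges of the file."""
    def __init__(self, segments):
        self.segments = segments  # list of (data_server, offset, length)
        self.valid = True

class MetadataServer:
    def __init__(self, segment_maps):
        self._segment_maps = segment_maps  # file handle -> segment list
        self._granted = []

    def layoutget(self, file_handle):
        # LAYOUTGET: hand the client a map, then step out of the data path.
        layout = Layout(self._segment_maps[file_handle])
        self._granted.append(layout)
        return layout

    def layoutreturn(self, layout):
        # LAYOUTRETURN: the client voluntarily gives the grant back.
        layout.valid = False
        self._granted.remove(layout)

    def recall_layouts(self):
        # CB_LAYOUTRECALL: the server revokes outstanding grants,
        # e.g. before restriping a file.
        for layout in self._granted:
            layout.valid = False
        self._granted.clear()

def direct_read(layout, data_servers):
    """The client reads straight from the data servers using its map --
    no metadata server (no NAS head) in the data path."""
    if not layout.valid:
        raise RuntimeError("layout recalled -- ask the MDS for a new one")
    return b"".join(data_servers[ds][off:off + length]
                    for ds, off, length in layout.segments)

# Example: a file striped across two data servers.
mds = MetadataServer({"fh1": [("ds-a", 0, 4), ("ds-b", 0, 4)]})
data_servers = {"ds-a": b"pNFS", "ds-b": b"v4.1"}
layout = mds.layoutget("fh1")
assert direct_read(layout, data_servers) == b"pNFSv4.1"
mds.layoutreturn(layout)
```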
If you'd like a more detailed overview of all of this, please take a look at this summary deck from Sorin Faibish of EMC, who is our lead on this initiative.
And On To The Bake-A-Thon!
At some point, every vendor working on this stuff needs to get together in a single location, and try out their interoperability with every other vendor. Call it a plugfest, call it a bake-a-thon, call it whatever -- it's a key step in making any interop standard a reality.
Coordinating and staging such an event is no easy trick, though. It takes a considerable amount of time and effort, and is clearly an area where the larger vendors have a role to play in moving things along. As an example, EMC is proud to be sponsoring just such an event next week. If you're a vendor -- and you have some interest -- the flyer is here.
Note to all: we vendors do occasionally cooperate behind the scenes when it's important to our customers. Widespread vendor participation in this particular pNFS event is a strong indication of just how broad industry acceptance might be going forward.
The VMware Angle?
I don't want to speak for VMware regarding their intentions for integrating pNFS support into the ESX I/O stack, but -- you have to admit -- being able to use a single, standardized NFS-style file system with the I/O characteristics of a block-oriented world -- well, that's a juicy scenario to consider.
And, since the metadata server isn't necessarily in the data path, one can imagine smaller virtual machines being the "NAS head", with all the heavy I/O being done directly from server to storage with very little in the way.
Convergence Is King
In our industry, convergence is a hot theme: convergence of function, convergence of infrastructure, tighter integration of technologies with others in the stack, and so on.
The idea of a converged storage access model, combined with a converged storage network, accessing a device with converged functionality and architecture -- well, that's just too cool to pass up.