Stop me if you've heard this before -- wouldn't it be great if we could make all of our local file servers around the globe act like a single giant intelligent file server?
I can count at least a dozen different takes on this particular IT challenge in the last 15 years or so -- but none has enjoyed wide popularity or success.
But, in the spirit of bringing you cool new things to consider, I'd like to share an interesting partnership that's starting to enjoy some good traction with customers by doing exactly what I've described.
While nothing is perfect in this world, this particular combination seems to do a far better job than previous approaches in addressing some of the more challenging issues behind delivering global file services.
Here At EMC, We've Bitten From This Apple Before ...
Anyone who spends time with file servers has thought about this particular challenge more than once. At its root, you've got the challenge of overcoming the latency associated with distance. File performance is acceptable when the data is close to where you are, and less and less acceptable as the distance increases.
That's why it's not uncommon to see file servers scattered around a global corporate network.
But, when you do that, you've got some thorny challenges: creating a single view, getting the right information to the right place at the right time, ensuring consistency, controlling costs, etc.
Your head hurts after a while.
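To make the latency point concrete, here's a back-of-envelope sketch. The round-trip count and latencies are illustrative assumptions, not measurements -- but the shape of the problem holds:

```python
# Back-of-envelope: why chatty file protocols hurt over distance.
# The round-trip count below is an illustrative assumption.

def network_wait_seconds(rtt_ms: float, round_trips: int) -> float:
    """Time spent purely waiting on the wire, ignoring everything else."""
    return rtt_ms * round_trips / 1000.0

# Opening and reading a file over a chatty protocol like CIFS can
# take dozens of round trips (negotiate, session, open, reads, close).
ROUND_TRIPS = 50  # assumed for illustration

for label, rtt_ms in [("same campus", 1), ("cross-country", 70), ("intercontinental", 250)]:
    wait = network_wait_seconds(rtt_ms, ROUND_TRIPS)
    print(f"{label:>16}: {rtt_ms:>3} ms RTT -> ~{wait:.1f}s of pure wait time")
```

Same workload, same protocol -- only the distance changed. That's the physics you're fighting.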
Way back when, this was sometimes thought of as a namespace problem, an approach we tried with Rainfinity a while back. Yes, you can create what *appears* to be a global file system, but it certainly doesn't behave, operate or optimize like one.
If all the data happens to live in one place, we can build a really big single filesystem, à la Isilon, but that isn't a good answer when information producers and consumers are widely dispersed across different time zones.
If you're willing to adopt an object API -- that is, to embrace an object view of the world -- EMC's Atmos offers an incredibly elegant and robust answer to this problem.
The reality is that 99.9% of the world prefers the familiar filesystem model, and -- even with the CIFS/NFS gateway we built -- a global Atmos service doesn't behave or perform like a traditional file server sitting in your campus.
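To see why the object model is such a different animal, compare the two side by side. This is a hedged sketch: the endpoint, header names and metadata tags below are hypothetical stand-ins I've made up for illustration, not the actual Atmos REST API.

```python
import requests

data = b"... file contents ..."

# The familiar filesystem model: open, write, close.
with open("/tmp/report.docx", "wb") as f:
    f.write(data)

# The object model: an HTTP PUT, with metadata traveling alongside the
# data. The endpoint and header names are hypothetical stand-ins, not
# the actual Atmos REST API.
resp = requests.put(
    "https://objects.example.com/namespace/projects/report.docx",
    data=data,
    headers={
        "x-meta-owner": "chuck",         # user-defined metadata ...
        "x-meta-geo-policy": "eu-only",  # ... that policy can act on
    },
)
resp.raise_for_status()
```

The metadata in the second form is what makes rich placement policy possible -- but no desktop application on the planet speaks it natively, and that's the presentation problem in a nutshell.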
Other vendors have tried here as well. A few years back, file caching appliances were all the rage. They did a good job of masking the problem for some use cases, but didn't really change the equation architecturally. You'll also see various replication schemes that explicitly move data around as needed.
File transfer services are certainly popular (basically, enhanced FTP), but they create other challenges as well -- they don't fit into established workflows and models, and -- let's face it -- it takes time and effort to explicitly upload and download those big objects.
Personally, I had sort of given up on this whole area -- I started to think it was one of those problems that needed to be solved elsewhere in the stack, e.g. at the data management layer or application layer. It seemed too hard to solve directly within the storage layer itself, especially since the presentation layer (e.g. the familiar filesystem) appeared to be completely cast in concrete for eternity.
A Hybrid Approach?
So, in a nutshell, here's the problem ...
Advanced object storage solves the data logistics problem (single image, single instance, moving data around based on policy, cloud-like service consumption model, etc.) but doesn't solve the presentation problem, i.e. "behaves like a local file system".
Global file systems and caching approaches solve the "behaves like a local file system" problem, but don't do a great job of addressing the underlying data logistics problem.
Could there be a way to combine the two? Yes.
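Before naming names, here's a deliberately toy sketch of what combining the two might look like -- my own illustration under simple assumptions, not anyone's actual architecture: file semantics in front, a local cache for latency, a single shared object store behind.

```python
# A toy sketch of the hybrid idea: file semantics in front, a single
# shared object store behind, a local cache in between. My own
# illustration, not anyone's actual design.

class ObjectBackend:
    """Stand-in for a global object store: one instance of the truth."""
    def __init__(self):
        self._objects = {}

    def put(self, key, blob):
        self._objects[key] = blob

    def get(self, key):
        return self._objects[key]

class FileGateway:
    """Looks like a local file server; persists globally as objects."""
    def __init__(self, backend):
        self.backend = backend
        self.cache = {}  # hot data stays close -> local latency

    def write(self, path, blob):
        self.cache[path] = blob       # fast local acknowledgement
        self.backend.put(path, blob)  # one global copy behind the scenes

    def read(self, path):
        if path not in self.cache:    # miss: fetch once, then it's local
            self.cache[path] = self.backend.get(path)
        return self.cache[path]

# Two sites, one backend: both see the same data through a file-style API.
backend = ObjectBackend()
london, tokyo = FileGateway(backend), FileGateway(backend)
london.write("/projects/design.dwg", b"rev 1")
assert tokyo.read("/projects/design.dwg") == b"rev 1"
```

Everything hard is missing from that toy, of course -- cache invalidation, locking, dedupe, security -- and those hard parts are exactly where the real engineering lives.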
The new answer appears in the form of a partnership between the EMC Atmos team and Panzura. You're not alone if you haven't heard of Panzura -- neither had I until recently. They're a 60+ person Series C startup, largely made up of ex-NetApp, ex-EMC and ex-Riverbed management, who've been working on this particular problem since 2008.
And they've started to rack up some nice real-world wins.
Conceptually, it's not hard to understand what they bring to the table. Their "secret sauce" is their global file system: a single view of all data, global dedupe, encryption, global locking, intelligent node caching, snaps, AD integration and more.
This software is delivered through small, distributable appliances that replace the traditional local file server for all intents and purposes.
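Global dedupe deserves a quick illustration, since it's what keeps a globally shared store affordable. Here's a minimal content-addressed sketch of the concept -- an illustration only, not Panzura's implementation:

```python
# Minimal content-addressed store to illustrate global dedupe:
# identical chunks, whichever site writes them, are stored exactly once.
# A sketch of the concept only, not Panzura's implementation.
import hashlib

CHUNK_SIZE = 4096
chunk_store = {}  # digest -> chunk bytes, shared globally

def write_file(data: bytes) -> list:
    """Store a file as a list of chunk digests; duplicates cost nothing."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)  # only new content is kept
        recipe.append(digest)
    return recipe

write_file(b"x" * 8192)  # two identical chunks, written in London ...
write_file(b"x" * 8192)  # ... and "again" from Tokyo
print(len(chunk_store))  # -> 1: four logical chunks, one physical copy
```

Because chunks are identified by their content, the second site's write of identical data costs essentially nothing -- which matters a lot when the same project files live in five offices.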
EMC's Atmos brings a few important pieces: a well-established distributed object storage model behind the file system, a rich set of metadata capabilities to drive policies, a great management model -- as well as multiple consumption options: your own storage cloud, consumed as a service, or perhaps a combination.
Although it's a somewhat involved story to tell, it certainly seems to be getting noticed ...
Once you get your head wrapped around the specific challenges of the use case, and can appreciate what each piece of the solution does individually and together, there are some nice implications.
- "one view of the truth" happens transparently for all readers and writers in all locations.
- remote performance (and user experience) is essentially a policy decision: how aggressively do you want to cache locally, using either the Panzura layer or the more persistent Atmos locally replicated object approach?
- availability and protection are likewise policy decisions: you can use traditional file-oriented backup schemes (Avamar comes to mind), or go with the Atmos data protection model (multiple object copies, distributed erasure coding, etc.).
- the back-end storage services can be delivered internally, by a variety of external service providers (including AWS), or any combination of the two.
- costs can be well managed too, with a rich set of tiering and archiving options to either an internally managed environment or external storage services as desired (e.g. via EMC's CTA). Global dedupe helps here as well for information types that can benefit from it.
- the underlying Atmos model allows different service levels to be established, metered and either shown or charged back as needed.
- and, of course, Atmos' rich metadata model can support all sorts of interesting policy constraints on where information is allowed to be stored and where it isn't -- personal information not leaving an EU country, for example. A minimal sketch of that kind of policy check follows this list.
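Here's that sketch. The tag names, site names and rule are hypothetical illustrations I've made up; the point is simply that object metadata is a natural place for this kind of rule to live:

```python
# Sketch of metadata-driven placement: before an object is replicated
# to a site, check its tags against where it's allowed to live. The
# tag names, site names and rule are hypothetical illustrations.

EU_SITES = {"dublin", "frankfurt"}
ALL_SITES = EU_SITES | {"boston", "singapore"}

def allowed_sites(metadata: dict) -> set:
    """Personal data stays inside the EU; everything else goes anywhere."""
    if metadata.get("contains-pii") == "true":
        return EU_SITES
    return ALL_SITES

def place(obj_id: str, metadata: dict, target: str) -> bool:
    ok = target in allowed_sites(metadata)
    print(f"{obj_id} -> {target}: {'replicate' if ok else 'blocked by policy'}")
    return ok

place("hr-record-0042", {"contains-pii": "true"}, "singapore")  # blocked
place("hr-record-0042", {"contains-pii": "true"}, "dublin")     # replicate
```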
If you're intrigued by all of this, there's a joint webinar coming up on Nov 15th.
The Bigger Picture?
As an industry, we've done a decent job of creating 'cloud' models when all the information and processing conveniently happens to occur in one place, e.g. a data center or two.
But you can already start to see the limitations of that model showing up everywhere. Information is created in one place, and consumed in another.
And compute is starting to follow the information around, and not the other way around. Data gravity is real.
As a result, ideas and concepts around information logistics are becoming more popular.
I don't think there will ever be a single, perfect answer to this class of challenge, just a sequence of interesting approaches for one use case or another.
And, certainly, there are other ways out there to address this particular challenge, but none that I'm aware of offers this level of enterprise-grade robustness and functionality.
For me, that makes the Atmos / Panzura combination worthy of understanding, especially if you're looking for something above and beyond the ordinary.