Today ends most of the speculation around the often-speculated-but-seldom-seen "Maui" product, now officially known as Atmos.
Before we get to the usual skepticism that accompanies any unique or novel technology, let's take a look at what EMC has brought to market, what problems it tries to solve, and some of the underlying technology concepts.
Ready? Let's go!
A New Category -- COS -- Cloud Optimized Storage
As part of this product introduction, we're going to be introducing a new term -- COS, or Cloud Optimized Storage -- to the industry lexicon.
Why? The traditional storage taxonomy doesn't do a good job of describing what Atmos (and, presumably, future solutions from other vendors) actually does. As you'll see shortly, it isn't SAN, NAS or even CAS.
So, what makes "cloud optimized storage" so different? The use of policy to drive geographical data placement.
OK, that's an extremely abstract concept, so let's build it up in pieces.
A Simple Model
Imagine you had a single data center with all sorts of juicy content that people wanted around the globe.
If you happened to be close to the data center, access times would be pretty good, right? But, if you're halfway around the world, data access could be slow, almost to the point where it might be unusable.
If it's a very popular piece of content (say, the latest political spoof from Saturday Night Live), just having one copy in your data center sitting on one spindle wouldn't be enough to keep up with demand, would it?
To deliver a reasonable user experience, you'd want to have (temporarily) multiple copies, ideally dispersed across multiple continents.
And then there's networking costs. Spending on big pipes to deliver essentially the same content from one side of the world to the other over and over again strikes most people as wasteful, not to mention expensive.
Global mobile phone operators have discovered that when there's a popular set of football (soccer) matches in Europe, people around the world want to watch them on their mobile phones.
That's a lot of redundant bytes being sent over very long wires. How about making a temporary copy closer to where people are accessing it?
So, as a starting point, let's imagine a global object repository, formed by multiple storage nodes scattered around the internet, all seen as a logical whole. Not really a file system in the traditional sense, although it could be presented as one if needed.
Applications load content from anywhere; once loaded, it becomes part of the "global object pool". When you load the content, you specify a policy, such as "gold" or "free" or "secure" or "pay per view" or "we think this is gonna be really popular" or "keep a certain minimum number of copies around for redundancy purposes".
That policy specification is dynamically interpreted by the Atmos environment. If something gets very popular, and access times elongate, Atmos can make multiple copies based on where the interest is coming from. And when the demand storm subsides, go back to a more cost-effective scheme.
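To make that ingest-and-interpret idea a bit more concrete, here's a deliberately toy, in-memory Python sketch. None of this reflects the actual Atmos API -- the class names, policy names, region strings, and thresholds are all made up for illustration. The point is just the shape of the mechanism: an object is ingested once with a policy tag, and a rebalance step periodically interprets that policy against observed demand, adding replicas near hot regions and retiring cold ones down to a policy-defined minimum.

```python
# Illustrative sketch only -- not the Atmos API. Policy names and the
# replica rules behind them are invented for this example.
POLICIES = {
    "gold":   {"min_replicas": 3, "demand_driven": True},
    "free":   {"min_replicas": 1, "demand_driven": True},
    "secure": {"min_replicas": 2, "demand_driven": False},
}

class ObjectPool:
    """A toy 'global object pool': object id -> policy + replica regions."""

    def __init__(self):
        self.objects = {}

    def ingest(self, obj_id, policy, home_region):
        # Content enters the pool once, tagged with a policy.
        self.objects[obj_id] = {"policy": policy, "replicas": {home_region}}

    def rebalance(self, obj_id, accesses_by_region, hot_threshold=100):
        """Interpret the policy against demand: replicate toward hot
        regions, then drop cold replicas back to the policy minimum."""
        obj = self.objects[obj_id]
        rules = POLICIES[obj["policy"]]

        # Demand storm: add replicas where the interest is coming from.
        if rules["demand_driven"]:
            for region, hits in accesses_by_region.items():
                if hits >= hot_threshold:
                    obj["replicas"].add(region)

        # Storm subsides: retire cold copies, but never go below the
        # policy's minimum replica count.
        cold = [r for r in obj["replicas"]
                if accesses_by_region.get(r, 0) < hot_threshold]
        for region in cold:
            if len(obj["replicas"]) <= rules["min_replicas"]:
                break
            obj["replicas"].discard(region)

        return obj["replicas"]
```

So a "free" video clip ingested in Europe would sprout a replica in Asia when a demand spike shows up there, and shed the European copy once interest moves on, while a "secure" object ignores demand entirely and just holds its mandated copies in place.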
The same mechanism can be used to implement, for example, authentication and digital rights management schemes. Or any other external logic that's triggered by either information access, ingestion, or external events (such as the passage of time).
As the definition of "policy" is rather open-ended and extensible, this approach leaves the door open to all sorts of clever applications in all sorts of surprising areas. Examples might include the capture and distribution of geophysical data. Or video-based training and certification. Or large-ish source code libraries. Or global software distribution. Or all sorts of video-on-demand models.
Use your imagination.
If the information objects in question are big (or there are a lot of them), they're being used around the globe, and popularity is hard to predict, it's worth considering an Atmos-style approach.
What It's Not
As with any new technology, people tend to say "well, that's just like XYZ". That sort of statement will be very hard to make about Atmos. In fact, most of us believe this approach is pretty unique in the industry.
So, let's go through the list of what Atmos isn't.
First, it isn't "clustered NAS storage", unless your definition of "cluster" includes geographically dispersed nodes, and your definition of NAS includes global object stores, rather than file systems.
Second, it's not a straightforward content delivery network as provided by Akamai and others. Not only is the functionality much richer with this sort of approach (driven by an extensible policy engine), it can also easily handle both objects of considerable size and potentially several billion of them.
Third, it's not something like a Centera. Sure, some of the object repository and metadata thinking can be found there, but this is an entirely different technology set and use case. Centera is more focused on the preservation and retention of information, rather than global-scale distribution.
Fourth, it's not really a new storage array, at least from a hardware perspective. The Atmos hardware is basically very dense, very cost-effective storage, with industry standard servers running the Atmos software. It competes well with others of its class, and has some interesting packaging innovations, but if you're a storage array geek like me, not a lot to get overly excited about.
And, finally, it's not the over-used "Web 2.0" storage as described by IBM and others. Those are still fairly traditional arrays, sitting in a single data center location.
Lots Of Coverage Today (updated periodically)
I'll be updating this section with all the coverage as it emerges today.
For starters, there's StorageZilla's excellent post, as well as Steve Todd's, not to mention coverage from StorageBod, Chris and a few others. Chris Mellor at The Register checked in with an interesting take, as usual. Even more from Network World and Tarry Singh -- both interesting!
And, if you're really curious, you can go back through some of my previous posts, and see all the bread crumbs I tried to drop along the way :-)
A few questions are coming in that deserve answers.
Was this an acquisition, or did EMC develop Atmos themselves? Not that it really matters, but this one was a 100% organic development, although you can see some obvious DNA from earlier EMC offerings and initiatives.
Do you have any customers? Yes, we have several. As Chris Mellor points out, Atmos has been in customers' hands since June of this year. They're names you'd recognize, but -- as usual -- they don't really want to be part of EMC's PR initiative :-)
Why did you wait so long to announce this? Lots of reasons, really. One is that we really didn't need to -- we had the product, we had customers who wanted to give it a try, so we didn't need to drive demand. Another was a natural conservatism around completely new technology -- we wanted to make sure the product worked as advertised, that we had happy customers, etc.
Is Atmos hardware-agnostic? Yes, that's the design. It runs well as a VMware guest, for example. That being said, our experience with customers so far indicates a strong desire for hardware that's built for purpose -- especially at this sort of scale. So that's why you see the nice arrays as part of the announcement.
Who are you competing with? That's a tough question. Other than a few home-grown solutions out there, we're not aware of any other vendor who offers a product like this for sale. I'm sure that won't last very long, though.
Typically, when EMC announces something like this, we go through a period of competing vendors doing the competitive trifecta (you don't need it, it doesn't work, and we'll have something better soon anyway), and then we start seeing similar offerings from other vendors, usually in 12-24 months.
So, I'll be better able to answer your question in a year or so.
What about VMware's cloud strategy? And does this really change anything for 98% of the people that are wrestling with traditional IT challenges? Steve Foskett takes a skeptical view of all of this (probably the first of many to do so), asking fair questions.
He's right -- Atmos doesn't solve problems in today's traditional data center. I don't think anyone at EMC ever positioned it that way, though. Atmos also doesn't make a decent cappuccino, in case that one comes up as well.
And, if you think for a moment about vClouds slinging guest virtual machines hither and yon, one does have to ask how the information might follow them.
[Update on Nov 11 AM]
Well, EMC's competitors have taken notice of what we're doing, and have predictably weighed in.
CalvinZ over at HP thinks this is all about hardware, and has taken the time-tested approach of name-calling (proprietary, monolithic, etc.). HP thinks Atmos is a traditional clustered NAS device, which is incorrect on several counts. I tried to leave a comment on his post, but I guess I'm not welcome there any more.
Marc Farley (formerly of EQL, then Dell, and now 3PAR) weighed in with an unusually rancorous post that was more about me than the company or product. I left a comment suggesting he might lighten up a bit; we'll see if it goes through.
It's not that I was expecting a congratulatory note or anything from these competitive bloggers, but -- really -- I was hoping they could do a bit better in responding to the Atmos announcement.
It's pretty hard to intelligently respond to all that whining.
What Does All Of This Mean?
For those of you consuming traditional storage with traditional IT use cases, none of this will matter much to you in the short term. Sure, it's all interesting, but not many typical IT organizations are wrestling with the problem of global content distribution and logistics.
But, for a few of you -- and you know who you are -- this sort of approach will be inherently intriguing. You're running a global business. You're ingesting and distributing content from everywhere to everywhere.
All you can see in the future is more, more, more and still more.
And I think you'll be very intrigued by EMC's fresh thinking on the topic.