Rather than brag incessantly about the achievement (sooner or later, other vendors will figure out how to do this), I thought it was indicative of how our thinking around storage is starting to change -- and fast.
The Basics
Let's start by contemplating storage arrays and drives just a few years back -- say, 2007.
You wanted decent capacity, but decent performance as well. Sure, there were exotic faster drives, and large "data tub" drives based on ATA, but it took heavy lifting to get the right data on the right storage at the right time.
Most people ended up splitting the difference and settling on a "middle of the road" drive: say, a 300GB FC drive spinning at 10K. But it was a less-than-ideal compromise in many ways. Rather than guess wrong on performance, a lot of IT groups played it safe and put everything on these FC drives.
And, if you needed more performance, you'd use a combination of striping, short-stroking, etc. to get the performance you needed, usually wasting more capacity in the process.
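To see why that approach wasted capacity, here's a back-of-the-envelope sketch. All the figures below (drive capacity, per-spindle IOPS, application requirements) are illustrative assumptions for the era, not measured numbers:

```python
# Back-of-the-envelope illustration of the old spindle-count trade-off.
# All figures here are illustrative assumptions, not measured data.

drive_capacity_gb = 300      # a typical 10K FC drive of the era
drive_iops = 150             # rough random-IOPS ceiling for one spindle
required_iops = 6000         # what the application needs
required_capacity_gb = 3000  # what the application actually stores

drives_for_capacity = required_capacity_gb / drive_capacity_gb   # 10 drives
drives_for_iops = required_iops / drive_iops                     # 40 drives

# Performance, not capacity, dictates the purchase
drives_bought = max(drives_for_capacity, drives_for_iops)
wasted_gb = drives_bought * drive_capacity_gb - required_capacity_gb

print(f"Drives bought: {drives_bought:.0f}")
print(f"Capacity wasted: {wasted_gb:.0f} GB "
      f"({wasted_gb / (drives_bought * drive_capacity_gb):.0%} of what was purchased)")
```

With these assumptions, performance forces the purchase of four times the spindles that capacity alone would require -- three-quarters of the gigabytes bought sit idle.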
Now, let's fast-forward to 2010. To get screaming performance, we've now got enterprise flash drives. They're not just a little faster; they're a whole new kind of fast. To get efficient capacity, we've now got these great "data tubs" at 2TB that spin slowly, or spin down when not needed.
And -- most importantly -- we're now getting intelligent software (FAST, FMA, etc.) that does a decent job of looking at access patterns, and dynamically putting the right information in the right place at the right time.
All of a sudden, how enterprise IT buys storage capacity is up for a big change.
Optimizing The Two Extremes
We've been quite public in repeatedly stating that "middle of the road" drives won't be nearly as popular going forward.
Instead, we're going to see arrays built from two kinds of media: a small amount of enterprise flash to speed things up across the entire array, and a ginormous amount of uber-cheap disk behind it. And, of course, software to automatically combine the two in an intelligent and useful manner.
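The core idea of that software layer can be sketched in a few lines: watch access patterns, promote the hottest extents to flash, demote the cold ones to cheap disk. This toy sketch is only an illustration of the concept -- the function and data names are hypothetical, and it is emphatically not how FAST (or any shipping product) actually works:

```python
# Toy sketch of automated tiering: the hottest extents go to flash,
# everything else lands on cheap SATA. Purely illustrative -- names
# and logic are hypothetical, not a real product's algorithm.

def retier(extents, flash_slots):
    """extents: dict of extent_id -> recent access count.
    Returns (flash, sata) sets, with the busiest extents on flash."""
    ranked = sorted(extents, key=lambda e: extents[e], reverse=True)
    flash = set(ranked[:flash_slots])
    sata = set(ranked[flash_slots:])
    return flash, sata

# Example: six extents, room for two on flash
access_counts = {"A": 900, "B": 12, "C": 450, "D": 3, "E": 7, "F": 88}
flash, sata = retier(access_counts, flash_slots=2)
print("flash:", sorted(flash))
print("sata: ", sorted(sata))
```

The real engineering, of course, is in doing this continuously, at array scale, without disturbing production I/O -- but the economic logic is exactly this simple.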
All of a sudden, capacity in most enterprise arrays takes on an interesting new design metric: how dense can you get with your capacity?
By the way, if you're building storage controllers on Intel merchant microprocessors (as EMC is now doing), you've got more than enough horsepower in your newer controllers to drive all that capacity at very high performance, with enough cycles left over for replication, dynamic optimization, etc.
Case In Point
Consider the new CX4 configurations. Here's a pair of Intel-based storage controllers driving 960 2TB drives, packed 390 drives to a rack. That's ~1,800 TB in 3 racks, folks -- very dense stuff. Let's not forget spin-down, or putting this farm behind a Celerra and getting very efficient dedupe and archiving as well.
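The rough math behind those figures (raw capacity, before RAID and sparing overhead take it down toward the ~1,800 TB usable ballpark):

```python
import math

# Rough capacity math for the configuration above.
# Raw figures only -- RAID and sparing overhead reduce usable capacity.

drives = 960
tb_per_drive = 2
drives_per_rack = 390   # maximum packing density quoted above

raw_tb = drives * tb_per_drive                # 1,920 TB raw
racks = math.ceil(drives / drives_per_rack)   # 960 drives need 3 racks

print(f"{raw_tb} TB raw in {racks} racks "
      f"(~{raw_tb / racks:.0f} TB per rack)")
```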
Or, if you will, the V-Max announcement from a month back where, among other things, we now have the option of two storage engines powering 1,200 drives. It, too, can go behind a Celerra, if you're interested.
Pop in a handful of enterprise flash drives, use the new FAST capabilities, and you've got a storage farm that's (a) far faster, (b) far cheaper, (c) far denser, and (d) far easier to manage than anything we or anyone else could have built just a short while ago.
The numbers are attention-grabbing, and speak for themselves. BTW, EMC has always stood behind its customer commitments -- if it doesn't do what we say it will do, we'll make it good -- as always.
But -- it's fair to say -- these arrays build and configure differently than storage arrays from just a short while ago. And that's causing some challenges in people's heads.
Q: "How do you do wide striping?"
A: "Well, we've done that for a while, but with flash, it's relatively pointless."
Q: "How can I make sure performance-sensitive data doesn't end up on 2 TB SATA?"
A: "You're going to have to get confident that FAST does that for you."
Q: "We capitalize our storage over 5 years."
A: "Given the rate of recent technological change, that's a very l-o-n-g time."
Q: "Why aren't you recommending 15K FC drives striped like before?"
A: "Well, we did that to get great random IOPS, but now there's a better/faster/cheaper way."
Q: "Why aren't other vendors doing this?"
A: "You'll have to ask them that question."
Change The Technology, Change The Thinking
As I've mentioned many times before, the entire IT infrastructure landscape is changing very rapidly. We've seen what's happening in the compute environment around virtual servers, and we're using the term "virtual storage" to describe the same sorts of changes happening in the storage domain.
But it's not just about the technology -- it's about updating our operational and consumption models to reflect the new realities of what the technology can now do. I use the term "information utility" to describe what happens when you take virtual storage, and consider entirely new ways to operate and consume it.
Any way you look at it, though, technology innovation is starting to get ahead of our collective ability to consume it. And I believe this has less to do with technology, and more to do with our ability to change our thinking.
And this seemingly minor announcement is nothing more than a small data point along the IT journey we're on.