As part of EMC's "megalaunch", the respected Symmetrix VMAX got a very healthy upgrade of new features and functionality through software -- specifically Enginuity 5875.
Although others will probably do a better job on the deep dive specifics, I thought I'd use this post to share -- at a high level -- what the VMAX is all about, and -- more importantly -- what's new with this important release.
The World of High-End Enterprise Storage
Not everyone is familiar with this part of the storage landscape. A while back, I attempted to write a post on what makes this part of the market very different from all others. If you're not familiar with the high-end storage context, it's worth reading.
In this particular category, I think that most of the data shows that EMC has been the market leader for over 15 years. More recently, traditional high-end storage market shares have shifted significantly in our direction, which is promising to say the least.
In this post, I thought I'd take you through the Big Picture, using the exact same intro deck we use with customers. Along the way, you'll see many of the new features that were announced today, but in context.
If you're keeping score at home, much of this content was initially discussed publicly (although not by EMC!) when we shipped Enginuity 5875 to customers in December 2010. Indeed, many bloggers saw this as a big deal -- and were curious as to why we weren't making more noise about it.
Well, we're "making the big noise" today :-)
Why Do People Buy VMAX?
Specific reasons might vary, but -- in general -- this is a pretty good list from the deck.
First, the availability is there, the capacity is there, the functionality is there, and the performance is there. No muss, no fuss -- many people see it as the best on the market.
Second, it automatically optimizes storage service levels.
One of the key technologies behind this -- as many of you know -- is the latest version of FAST (fully automated storage tiering), with this version being dubbed FAST VP (virtual provisioning).
Third, it's become purpose-built for large-scale virtualized server environments: big pools of servers, please meet big pools of storage. And this is especially important as more and more critical application workloads end up in virtual machines vs. less demanding test-and-dev environments.
Finally, it does all of this with demonstrably better economics than other alternatives -- capex and opex.
It may not be all things for all people, but when the fit is there -- there's really nothing better.
How It's Built
In terms of underlying hardware architecture, I don't see it as fitting neatly into established categories. It's not really monolithic, nor is it entirely modular.
It offers a significant degree of scale-out and scale-up, but can't really be called either pure scale-out or scale-up as compared to other examples.
The vast majority of hardware is standard stuff (Intel CPUs, etc.) -- with very little use of custom ASICs, for example. As a result, as standardized components get better/faster/cheaper, we can easily incorporate them into the VMAX architecture.
The Core Building Block
Here the processor boards are both redundant and very tightly coupled using a dedicated board-to-board bridge. Each director (two per storage engine) has a healthy amount of processing, memory and I/O connectivity.
Multiple storage engines are combined to build progressively larger -- and faster -- VMAX arrays.
The largest VMAX arrays -- surprisingly popular, by the way -- use 8 storage engines (16 storage directors) and sport a respectable 128 processing cores.
As you can see from the stats on the right of the slide, simply adding more storage engines results in some impressive aggregate capabilities -- up to 2,400 drives (either traditional or newer enterprise flash drives), up to 1 TB of global memory for storage cache operations, plenty of front-end and back-end ports -- and a wide variety of connectivity options for just about every server on the data center floor, including both mainframe as well as newer 10GbE.
Unlike high-end arrays from long ago, a customer can start with a modestly-sized VMAX, and then progressively grow it with more storage engines, more capacity, more ports, etc. -- all non-disruptively.
Not Just For Traditional Enterprise IT, Either
Many of the attributes that make it attractive for large-scale enterprise IT organizations also make it compelling for the new generation of IT service providers.
At its foundation, the VMAX delivers an attractive cost-to-serve (capex and opex) at scale for the vast majority of enterprise workloads.
SPs can use a single storage platform to serve up a very wide range of service levels, manage them consistently, and change them dynamically.
And use of technologies like FAST VP as well as scale-oriented management tools means that the SP operating at scale can likely provide a better service -- at a lower cost -- than many IT operations can do on their own.
Finally, many enterprise IT organizations are discerning when it comes to working with service providers.
They want to know what they're running on. And it's turned out to be useful when the SP can say they're running on the same storage infrastructure as found in the most demanding operations around the globe.
This suitability for SP environments also helps in many enterprise settings -- it's fair to say that many enterprise IT organizations aspire to be "internal service providers" for the organizations they serve.
The VMAX does a good job of supporting these newer delivery models for enterprises moving in that direction.
Federated Live Migration
This is turning out to be an important new feature, but not everyone might understand its significance, so let me explain a bit ...
Storage migrations are a painful fact of life, especially in larger settings. The norm for tech refreshes is around three years. That means that -- every three years or so -- you're moving many terabytes of data from your old storage to your new storage.
Now, bump up the scale a bit. Many of our larger customers have dozens -- or sometimes hundreds -- of these larger arrays. As a result, it's not unusual to be in a state of "perpetual migration" -- at any one time, there are old arrays coming out of the environment, and new ones coming in.
Storage migrations -- at scale -- are among the most painful IT tasks known. There's usually a ton of planning, a ton of work -- and a lot of application disruptions. Since we're talking about critical applications, there are only small maintenance windows to work with, and no one is happy about any form of downtime.
Going a bit further, imagine a single array supporting hundreds of such critical applications, each with its own downtime window. Does your head hurt yet? Good.
Federated live migration allows the non-disruptive movement of application storage from "old array" to "new array" with far less planning and disruption. It uses a combination of array-based data movement coupled with coordinated I/O redirection (done by an MPIO product like PowerPath) to vastly simplify and speed up the migration process.
From a customer perspective, this can be a big deal. Imagine the new storage array (say, a VMAX) offers impressive new performance, economics and functionality. But to get to that world, you have to migrate a petabyte of data spread across 150 applications, none of which want to be taken down.
With the new capabilities, the migration can be done in the background -- and space reclaimed -- in far less time and with far less effort -- making the new economics that much easier to get to. And as data volumes grow, I'm betting these sort of non-disruptive large-scale migration capabilities become more and more important.
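For the curious, the redirection idea can be sketched in a few lines. This is a hypothetical toy model -- not PowerPath or Enginuity internals -- in which reads come from whichever array holds the current copy of an extent, while writes land on both arrays until the background copy finishes:

```python
# Toy sketch of non-disruptive migration via I/O redirection: a background
# process copies extents old -> new while the I/O path redirects reads and
# mirrors writes. Purely illustrative -- not the actual product mechanics.

class MigratingVolume:
    def __init__(self, old, new):
        self.old, self.new = old, new    # dicts modeling extent -> data
        self.copied = set()              # extents already on the new array

    def background_copy_step(self, extent):
        """Copy one extent in the background, with no application downtime."""
        self.new[extent] = self.old.get(extent)
        self.copied.add(extent)

    def read(self, extent):
        # reads are redirected to the new array once the extent has moved
        src = self.new if extent in self.copied else self.old
        return src.get(extent)

    def write(self, extent, data):
        # writes land on both arrays so either copy stays consistent
        self.old[extent] = data
        self.new[extent] = data
        self.copied.add(extent)
```

The application never stops issuing I/O; once every extent is in `copied`, the old array can simply be retired.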
A Big Performance Improvement
A neat trick of the new software is a non-trivial performance bump for a very useful I/O pattern: large-block sequential I/O. It's almost like a hardware performance bump, but without the need for new hardware!
Large block sequential I/O shows up in high-speed backup and restore operations, many forms of data warehousing and analytics, not to mention all sorts of bulk copy and movement tasks. Smarter applications make every effort to lay down data sequentially, because it's faster to read that way.
Going a bit further, as we get into FAST VP and dynamic data relocations, this performance bump means that the VMAX can get the right data on the right media at the right time about 2x faster than the previous software release.
Add in the new FAST VP capabilities (described below) that offer substantial performance bumps for more randomly oriented hot-spot OLTP data, and customers that invest in a few flash drives should see a healthy and non-trivial performance boost across the board.
How often do customers get a big performance bump through a no-cost software upgrade? Far too infrequently, I'd argue :-)
The Importance Of Being Virtual
The newest features in VMAX show just how far virtualization thinking has come.
Traditionally, individual applications were hard-wired to their storage devices, as well as hard-wired to a specific service level.
In the virtual world, it's a pool of compute and storage resources, where individual application service levels are carved out of a pool vs. using dedicated resources. Administrators set and manage policies, and not individual devices.
Although the above might sound a bit like gobbledygook, it represents a fundamental shift in how we think about storage going forward.
FAST Gets FASTer
FAST -- fully automated storage tiering -- is based on two storage facts. The first is that, at any given moment, only a small fraction of data is actually being worked on. The second is that flash drives are far faster (~30x) than the spinning disks they replace.
If smart software can spot the hot spots, and intelligently move them to the right media at the right time -- there's the potential for not only huge performance increases, but huge cost savings as well.
The performance boost comes from accessing popular data from solid state vs. rotating rust. And the cost savings come from being able to use the most cost-effective disk drives available for all the data that *isn't* particularly popular at a given time.
Many of us think this is the single biggest innovation in storage media technology in the last few decades -- it's that big.
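To make the detect-and-move idea concrete, here's an illustrative sketch -- emphatically not EMC's actual algorithm: rank extents by recent I/O activity, put the hottest on flash, the coldest on SATA, and everything else on FC.

```python
# Illustrative tiering sketch (not the Enginuity implementation): rank
# extents by recent access count, then place the hottest on flash and
# the coldest on capacity-optimized SATA.

def place_extents(access_counts, flash_slots, sata_slots):
    """access_counts: {extent_id: io_count}. Returns {extent_id: tier}."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    placement = {}
    for i, ext in enumerate(ranked):
        if i < flash_slots:
            placement[ext] = "flash"          # hottest data -> solid state
        elif i >= len(ranked) - sata_slots:
            placement[ext] = "sata"           # coldest data -> cheapest media
        else:
            placement[ext] = "fc"             # everything else -> FC disk
    return placement
```

Run periodically against fresh statistics, a policy like this keeps popular data on solid state and unpopular data on the most cost-effective drives -- the performance-plus-economics combination described above.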
Our first version of FAST for Symmetrix did this detect/optimize/move action on a per-volume basis. Even though the granularity wasn't ideal, it delivered a huge wallop of performance and cost savings for many of our customers.
Now, with the availability of FAST VP (virtual provisioning), we've delivered an industry-leading implementation of a far more granular, automated and simplified approach.
Use more flash in your storage service class, get better performance, albeit at higher cost. Use less flash in your storage service class, get better economics, but not extremely high levels of performance.
Or anything in between :-)
All the administrator then has to do is map arbitrary applications to pre-established service classes (or policies) and -- voila! -- FAST VP takes care of the rest -- automatically optimizing all applications against the pool of resources established.
Storage volumes can be grown or shrunk non-disruptively. And application storage can be moved non-disruptively between service classes (e.g. gold to silver) as needed.
Need more performance? Add a bit more flash. Looking to save some money? Use more SATA. Or simply move the application to another category, and be done with it.
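A rough way to picture a service class is as a set of per-tier capacity ceilings for an application. The policy names and percentages below are invented for illustration -- actual FAST VP policies are whatever the administrator defines:

```python
# Hypothetical service classes: each caps the share of an application's
# capacity allowed on each tier. Names and fractions are made up.
POLICIES = {
    "gold":   {"flash": 0.20, "fc": 0.80, "sata": 1.00},
    "silver": {"flash": 0.05, "fc": 0.60, "sata": 1.00},
}

def tier_budget(policy_name, app_capacity_gb):
    """Translate a policy into per-tier capacity ceilings for one app."""
    caps = POLICIES[policy_name]
    return {tier: frac * app_capacity_gb for tier, frac in caps.items()}
```

Moving an application from gold to silver is then just a remapping; the tiering engine rebalances its extents against the new ceilings in the background.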
The other notable feature of this release is integrated management support. Using the standard Symmetrix Management Console (SMC), it's pretty straightforward to set up FAST VP, and monitor its effectiveness.
There's no need for in-depth training or a professional services engagement -- we've designed it to be immediately usable by just about every Symmetrix storage admin that's out there.
Yes, There's Chargeback Now
I mentioned before that much of the industry was quickly moving to a service provider model -- either as a dedicated service provider, or enterprise IT groups that wanted to be more like internal service providers.
It's hard to be any sort of service provider without the ability to provide cost transparency -- who's using what resource -- and what they should be charged.
One of the cooler features is the ability to use newer versions of ControlCenter to not only discover, report and analyze different storage service classes, but also to rather easily create chargeback reports based on the combination of capacity used and service level delivered.
While not a fully-featured end-user customer billing portal, it's enough functionality to meet the needs of many organizations who simply want to show their users what resources they're consuming.
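The underlying arithmetic is simple: charge equals capacity consumed times the rate for the service class delivered. The sketch below makes that concrete; the class names and per-GB rates are invented for illustration, not EMC price points:

```python
# Toy chargeback model: charge = GB consumed x rate for the service class.
# Rates and class names are illustrative only.
RATES_PER_GB = {"gold": 0.90, "silver": 0.55, "bronze": 0.30}

def chargeback_report(usage):
    """usage: list of (tenant, service_class, gb_used) tuples.
    Returns {tenant: total_charge}."""
    report = {}
    for tenant, svc, gb in usage:
        report[tenant] = report.get(tenant, 0.0) + gb * RATES_PER_GB[svc]
    return report
```

That's really all "cost transparency" means at this level: who used what capacity, at what service level, and what that works out to.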
A Big Difference In Cost, Performance, Power, Etc.
On the left side, we've got a typical DMX-4 configuration. This was state-of-the-art for high-end enterprise storage just a few short years ago. Lots of these out there, and they're still being sold today.
You see a configuration of 1,400 drives -- a mix of high and medium performance FC drives -- delivering about 213 TB raw. This sort of configuration wasn't unusual -- you'll find lots of environments set up similarly.
Now, consider the right side of the chart.
Note that now there are only 517 drives required to support the capacity -- less hardware, less cost, less footprint, etc. Also note that there are a modest number of 200 GB flash drives (13) and a large number of SATA drives (164 1 TB models).
Finally, take a look at the key stats, marked "KPI" on the chart.
14% more capacity.
20% more performance.
60% more green.
And 64% fewer drives.
And all the automated operational benefits as well ...
Rather significant, wouldn't you say? That's the point -- FAST VP is a big leap forward for our customers.
More Efficiency Features
This release makes Symmetrix Virtual Provisioning the default foundation for the array.
You might wonder why we didn't call the feature "thin provisioning" like everyone else. The reason is rather simple: this particular feature goes far beyond traditional thin provisioning implementations.
In addition to the usual efficiencies associated with thin volumes, there's the ability to easily grow, shrink, reclaim and relocate storage capacity non-disruptively, and while the system is in "full boogie" mode.
Much more than your garden-variety thin provisioning implementation.
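The grow/shrink/reclaim behavior is easier to see in toy form. In the sketch below (a hypothetical model, not the Enginuity implementation), physical capacity comes out of a shared pool only on first write, and shrinking a volume hands extents straight back:

```python
# Minimal thin/virtual provisioning sketch: allocate on first write,
# reclaim on shrink. Illustrative only.

class Pool:
    def __init__(self, free):
        self.free = free                 # free extents shared by all volumes
    def consume(self, n):
        self.free -= n
    def release(self, n):
        self.free += n

class ThinVolume:
    def __init__(self, virtual_extents, pool):
        self.virtual_extents = virtual_extents   # size the host sees
        self.pool = pool
        self.allocated = {}              # extent -> data, allocated lazily

    def write(self, extent, data):
        if extent not in self.allocated:
            self.pool.consume(1)         # physical allocation on first write
        self.allocated[extent] = data

    def shrink(self, new_virtual_extents):
        # non-disruptive shrink: return out-of-range extents to the pool
        for ext in [e for e in self.allocated if e >= new_virtual_extents]:
            del self.allocated[ext]
            self.pool.release(1)
        self.virtual_extents = new_virtual_extents
```

The host sees the full virtual size throughout; the pool only pays for what's actually written, and reclaimed space is immediately reusable by other volumes.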
We're also making major progress on another important aspect of storage efficiency -- storage admin efficiency!
Whether you compare common tasks like provisioning to previous versions of the Symmetrix, or perhaps to our competitors, you'll notice a dramatic reduction in both the number of "clicks" required to do a task, and the amount of wait time between steps.
More can obviously be done here, but many of our customers say it's a night-and-day difference from what's come before.
In a world of dramatically growing storage demand, *all* storage resources become precious, including scarce administrative time!
New Workflows For Virtualized Environments
In highly virtualized environments, there's often a need for a more natural and optimized workflow between storage teams and server teams.
Rather than a serial process where the server team tells the storage team what it wants (and waits for it!), the newer models involve handing over a substantial pool of resources to the server team so they can get on with their job without waiting for a bunch of work from the storage team.
The new VSI -- virtual storage integrator -- is an instantiation of these new workflows.
Virtual server administrators -- both VMware and Microsoft Hyper-V -- get a nifty and comprehensive plug-in for their preferred administrative tools.
They can discover, provision and configure to their heart's content without having to go to the storage team for each and every request.
The impact is simple: vastly improved performance for many operations. The bigger and more complex your VMware environment, the more these features become important.
Hardware locking, in particular, might not sound like a big deal, but if you're trying to cram ever-more virtual machines on shared data stores, it can be a big deal in some environments.
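A toy model shows why finer-grained locking matters -- this abstracts away the actual SCSI mechanics entirely. With one lock per datastore, every pair of concurrent metadata updates contends; with per-extent locking, only updates touching the same extent do:

```python
# Toy contention model (not the real SCSI/locking protocol): count how many
# pairs of concurrent operations would block each other under whole-LUN
# locking vs. per-extent locking.

def contention(ops, per_extent=True):
    """ops: list of extent ids touched concurrently.
    Returns the number of op pairs that would block each other."""
    blocked = 0
    for i in range(len(ops)):
        for j in range(i + 1, len(ops)):
            # whole-LUN: every pair contends; per-extent: only same-extent pairs
            if not per_extent or ops[i] == ops[j]:
                blocked += 1
    return blocked
```

As VM density on a shared datastore grows, the whole-LUN count grows quadratically while the per-extent count stays near zero -- which is why this "small" feature matters at scale.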
Symmetrix -- Now Even More Secure
On the security front, there's now Data at Rest Encryption (a.k.a. D@RE) that encrypts selected data sets using RSA technology and without the usual performance impacts.
Storage encryption doesn't have to occur at the array level -- there are good solutions for doing this at the application, HBA or switch level, for example.
But, like everything else in life, there are pros and cons to doing this directly in the array itself.
This point feature joins the existing Symmetrix security features -- secure credentials, tamper-proof auditing, secure erasure, etc. -- as well as nice integration with the rest of EMC's security portfolio -- RSA key management, RSA enVision security information event management, Ionix IT compliance verification, etc.
Taken together, these advanced features -- when configured and managed properly -- provide what is arguably the most secure storage array on the planet.
Lots More To Talk About, But ...
There's probably a lot more as well -- I started through the release notes, and quickly lost track of all the more detailed new features and enhancements.
One, in particular, stood out that's probably worthy of mention -- a new E-Licensing system.
Our customers have told us -- very frankly -- that the whole license thing was a massive pain in the butt: getting licenses, installing licenses, verifying license compliance, etc.
With this release, we've got our first round of a new approach to the challenge -- an E-Licensing system that should dramatically reduce the effort involved here.
I'll be curious to see what our customers think of this newer approach.
Impact Of New VMAX Software
For our existing VMAX customers, I think EMC has done right by them.
We've essentially given them a vastly improved storage array (performance, functionality, integration, automation, etc.) -- not by selling them a new box, but as part of the normal no-cost release train.
In this business, that's rare indeed.
If you're not a VMAX customer, and all of this is starting to sound really good, I'd invite you to learn more about high-end enterprise storage, and VMAX in particular. It's not for everyone who uses storage, but if your environment runs traditional enterprise workloads, and your environment is getting rather substantial, you owe it to yourself to check it out.
And if you happen to be running on one of our competitors' offerings -- see what you're missing?