If you're in the IT infrastructure business, you know that flash storage is changing the way we think about performance.
Among IT vendors, EMC is unusual in that we've invested more aggressively (and in more ways) in this industry transition than anyone else.
Back in 2008, EMC was the first enterprise storage vendor to introduce enterprise flash drives. Intelligently mixing flash and traditional disk continues to be a popular theme, as evidenced by the success of FAST (Fully Automated Storage Tiering) in both the VMAX and VNX arrays.
Just for completeness: Isilon uses flash to speed up inode and metadata handling in their scale-out clustered array.
At EMC World last May, we laid out a much broader vision: flash in the server (EMC VFCache), shared server cache (referred to as Project Thunder, not yet GA), as well as an all-flash block storage array referred to as Project X (based on the XtremIO acquisition).
And we've got strong evidence that customers will be interested in one, a few or potentially all of these different approaches -- depending on their requirements.
We're not forced into arguing the merits of one approach, or the disadvantages of another.
Since we're investing so broadly, it sort of forces competing vendors to pick one or two areas to try and compete, since no one to date has announced their intention to compete across the board.
One particular focus area that's drawing a lot of attention is server-side flash caching. Fusion IO got to market first, resulting in an increasingly familiar comparison between what they're doing and what EMC is doing with VFCache.
With today's announcement at VMworld, you'll see clear evidence that EMC is starting to pour on the R&D muscle we're well known for.
The Flash Landscape
Flash used as storage makes IO -- and application performance -- smoke. Compared to previous approaches, you can get a whole lot of performance at far lower cost by intelligently using flash here and there. But some important distinctions need to be made.
First, there's a big difference between flash-as-cache and flash-as-storage.
The former is great for reads, or writes that can easily be recreated, e.g. temp files. The latter can be used as persistent storage: there are features that make sure the written data is there when you go to read it again, even if there's been some sort of failure in the meantime. Like your Oracle database records.
The second distinction that's becoming important is shared-resource vs. dedicated-resource.
Going back to disks, we're all familiar with disk drives that live inside of servers. They're fast and cheap, but they're difficult to share between servers and workloads, not to mention a bit lighter on the data protection functionality. The same is true with server-side flash cache: fast, cheap -- but difficult to share and protect.
It's not a simplistic either-or discussion, sorry to say. For example, put some server-side cache in front of a traditional array, do some decent read caching, and -- voila! -- your traditional array just got a whole lot faster because it now sees a lot less work it has to do.
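To make that "array sees less work" point concrete, here's a minimal sketch of a server-side read cache sitting in front of a slower backing array. The class name, eviction policy, and counters are all illustrative assumptions -- this is not VFCache's actual implementation or API.

```python
# Illustrative read-through cache in front of a slower backing store.
# A hot working set that fits in the cache is served locally after the
# first pass, so the backing array sees far fewer reads.

class ReadCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}          # block address -> data
        self.array_reads = 0     # reads that had to hit the backing array

    def read(self, addr, backing_store):
        if addr in self.cache:              # cache hit: array does no work
            return self.cache[addr]
        self.array_reads += 1               # cache miss: fetch from array
        data = backing_store[addr]
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # naive FIFO eviction
        self.cache[addr] = data
        return data

store = {addr: f"block-{addr}" for addr in range(100)}
cache = ReadCache(capacity=10)
for _ in range(5):                          # re-read a hot 10-block set
    for addr in range(10):
        cache.read(addr, store)
print(cache.array_reads)                    # prints 10: misses on pass 1 only
```

Fifty application reads, but only ten ever reached the "array" -- the other forty were absorbed by the cache, which is exactly the offload effect described above.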
Not surprisingly, people are looking for a best-of-both-worlds approach, and that's where software inevitably comes in: bridging the different resource pools, getting the data to the right place at the right time, making the whole thing manageable, etc.
While the hardware aspects of flash are mildly interesting, the real value and differentiation are showing up in software functionality and integration.
And that's what you'll see in this announcement: yes, some new hardware -- but please pay attention to the software side.
VFCache 1.5 – Yes, New Hardware
We now have a new 700GB SLC card available. This is particularly useful in split card mode where part of the capacity is being used for read caching, and part for non-persistent writes.
VFCache now supports multiple cards per server. The caching algorithms have been tweaked based on customer experience, and now there’s support for customizable max I/O sizes, which is turning out to be useful in Microsoft Exchange environments.
So far, so good.
VFCache 1.5 -- Software Dedupe
For starters, the VFCache software now supports fixed-block 8K dedupe, enabled or disabled on a per-card basis. Host CPU cycles are used to calculate a signature; if that block is already being stored, it won't be stored again -- for both reads and non-persistent writes. Great for use cases where there's a lot of duplicate data potentially being cached.
I wasn't able to get a precise quantification of just how many CPU cycles are used for this activity, but the lead on the project shared that it was nearly negligible. There's also the difficult-to-quantify benefit of less write activity and hence less wear, but I haven't heard any complaints on wear-out to date.
Bottom line: more flash caching for less money. Nothing wrong with that.
VFCache 1.5 -- Full VMotion Compatibility, And More
A few people have pointed out that the first version of VFCache didn't support standard VMotion operations. That gap has now been closed -- it's rock-solid, and it works through standard vSphere interfaces rather than a proprietary approach. Done!
But VMotion isn't the only clustering approach out there, so VFCache 1.5 now also supports Windows 2008 R2 failover, native RHEL clusters, Symantec Veritas, as well as SIOS SteelEye.
Server support is also greatly expanded for all the popular rack-mount types: HP, Dell, Cisco, Fujitsu and NEC. HP ProLiant blade servers are supported through PCI extension.
More connection options are now supported: in addition to familiar FC, there's now support for 1Gb/10Gb iSCSI and FCoE.
VFCache 1.5 – Coming Soon For Cisco UCS Blades!
We've got many customers who are committed to the UCS blades, and the first version of VFCache didn't have anything for them. Well, that's now been addressed.
There's a new customized LSI Nytro WarpDrive PCIe mezzanine card that will come in both 400GB SLC and 800GB MLC versions. Hardware will be ordered through Cisco, software through EMC.
VFCache 1.5 -- Foundation Work For Better Array And Management Integration
In this world, server-side cache can be seen as either part of the converged infrastructure, or as an extended part of the storage domain.
Although we're not announcing an availability date, a future version of Unisphere for VMAX will offer, for example, mutual awareness between the VMAX and VFCache -- things like recognition and reporting of LUNs under VFCache control to Unisphere, reporting of VFCache performance statistics to VMAX, and error reporting with call-home support using the VMAX capabilities.
Not 100% here yet -- and more work to do -- but you can see where things are going.
VFCache -- The Performance Evidence Continues To Mount
Do a good job of caching reads (and non-persistent writes), and application performance is nothing short of eye-popping. Kind of like an after-market turbo kit for whatever block array you might be using.
But there's also evidence that the new dedupe feature can boost performance in its own right: as dedupe efficiency increases, more effective cache capacity becomes available, which raises the probability that a given block is served from cache rather than fetched from the array.
More work needs to be done here to precisely quantify the effect for our customers, but it's promising ...
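A back-of-the-envelope calculation shows why more effective cache capacity translates into a better hit rate. The working-set size, cache size, and dedupe ratios below are made-up round numbers for illustration -- not measured VFCache results -- and the model assumes a uniformly hot working set.

```python
# Illustrative only: how a dedupe ratio grows effective cache capacity,
# and (under a uniformly hot working set) the cache hit rate with it.

working_set_gb = 1400          # hot data the application keeps re-reading
raw_cache_gb = 700             # physical flash cache capacity

for dedupe_ratio in (1.0, 1.5, 2.0):
    effective_gb = raw_cache_gb * dedupe_ratio
    hit_rate = min(1.0, effective_gb / working_set_gb)
    print(f"{dedupe_ratio}x dedupe -> {hit_rate:.0%} hit rate")
# prints: 1.0x -> 50%, 1.5x -> 75%, 2.0x -> 100%
```

Under these assumed numbers, a 2x dedupe ratio lets the same 700GB card cover the entire working set -- every read a cache hit instead of an array fetch.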
The Bottom Line
It was only last February that EMC announced the VFCache product. Here we are -- about six short months later -- and you can see how busy we've been.
Imagine what we'll have for you all next year at EMC World :)