
June 01, 2010


Comments

Martin G

Chuck,

It becomes more and more obvious that the smarts of any storage device are going to be in software. Yes, the hardware is important, but many of the hardware advances -- faster switching, faster backplanes, improved redundancy for availability, etc. -- actually improve the whole infrastructure, not just storage.

And yes, it is obvious that 'storage as virtual machine' has a very strong value proposition, but... I wonder if some of your competitors feel entirely comfortable putting their storage appliances on top of the current leading virtualisation technology? Sure, they'll work with it -- but run on it?


twitter.com/needcaffeine

As much as I agree with you on VMs and their placement in the cloud, there are applications which are not cloud-applicable, such as high-I/O apps or large data sets.

Sure, the chip will replace the HDD as the primary data storage, though -- similar to Core 2 Duos vs. single cores -- having multiple paths & storage devices has to be faster than having a single large SSD.
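(To put rough numbers on that parallelism point -- a minimal sketch, with purely hypothetical per-device figures, not measurements of any real product:)

    # Back-of-the-envelope: aggregate capability of several smaller devices/paths
    # vs. one big SSD. All per-device figures below are hypothetical.

    def aggregate(n_devices, iops_each, mbps_each):
        # Assumes I/O is spread evenly and there is no shared bottleneck.
        return n_devices * iops_each, n_devices * mbps_each

    print(aggregate(1, 50000, 500))   # one large SSD -> (50000, 500)
    print(aggregate(8, 30000, 250))   # eight devices -> (240000, 2000)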

Rob

Storage lives in one location, right?

Not always, not when technologies like distributed cache coherence and global federation (e.g. VPLEX) are fully considered. Storage lives where it needs to, including potentially multiple places at the same time.

...
Piggy-backing on caffeine dude. I'm not buying it, for similar reasons. For those report runs that typically do 8 million back-end IOs (who knows how many are satisfied in the SGA), I'd prefer average IOs in the 2-3 ms range so the reports run in 4-6 hours. An average IO of 6 ms stretches run times out to 12+ hours. Introduce hops (and yes, additional block coherency) and you risk adding to average IO time. I'm sure more than a few will be shocked when they move certain apps to the cloud. The IO had best be close in many use cases. Sure, email is already slow. But again, these are cases where people are knocking on the door waiting on report runs. There are a lot of those people and runs.
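(A quick check of that arithmetic, under the assumption -- not stated above -- that the 8 million IOs are largely serialized:)

    # Run time if the 8 million back-end IOs are issued one after another:
    # total time = IO count x average latency.

    ios = 8 * 10**6
    for avg_ms in (2, 3, 6):
        hours = ios * avg_ms / 1000.0 / 3600.0
        print("%d ms average IO -> %.1f hours" % (avg_ms, hours))

    # 2 ms average IO -> 4.4 hours
    # 3 ms average IO -> 6.7 hours
    # 6 ms average IO -> 13.3 hours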

Chuck Hollis

Hi Rob

I understand where you're coming from, but I think you're missing a few key points.

First, we all know that storage supports a wide range of use cases, including running mongo report runs.

Second, 2-3 ms is now old school; the target with the newer flash drives is <1 ms, so you need to raise the bar a bit.

Third, no one is proposing that increasing the latency between an application and its data is a good thing. Applications generally want to be close to their data -- that's true whether we're talking desktops, data centers or the proverbial cloud.

Report runs -- in particular -- are great use cases for fully virtualized resource pools -- call them clouds or call them whatever. CPU and IO spike nicely during the peak, and the rest of the time those resources can be used for other purposes.

Now, if you can imagine pooling resources between two of your data centers in such a way that you can move both information and applications to an alternate location without taking the app down -- well, that's the use case that's interesting to so many people.

Thanks for the comment.

Chuck Hollis

@needcaffeine

While I would agree with you on the general premise that there are certain apps that aren't a good fit for a cloud, I think your examples are poor.

Google, for example, deals with extremely high I/O rates and large data sets. Most people would call that a "cloud". I work with a number of service providers in the credit card industry who provide a "cloud service" to financial institutions that have -- yes -- high I/O rates and large data sets.

What I think both you and Rob are getting at is that it's generally a bad thing to increase latency between application and data set. No argument there!

-- Chuck

Chuck Hollis

Martin

Your point is valid -- who supports the end-to-end stack? We saw this arise in the last round of old-school storage virtualization -- vendor A would put their virtualization thingie in front of vendor B's storage, and the responsibility for end-to-end support would move to vendor A, not vendor B, as a result.

Having learned from this experience, we realize that when EMC is vendor A, we need to be prepared to offer end-to-end support for vendors B, C, D and E. I think you saw that with the VPLEX announcement, as an example.

Thanks for the comment.

Andy Sparkes

We at HP have many storage products that are essentially VMs. The most visible is the VSA -- the virtual edition of SANIQ -- and we have a large and successful partnership with both VMware and Microsoft. As well as enabling the use of commodity architectures, it also allows the storage sharing mechanism needed to exploit the benefits of server virtualisation. It's fully featured, and I haven't met anyone yet who bemoans the performance hit for this flexibility.

This makes me think that it's virtualisation that has provided the key catalyst for this trend, and it will keep us vendors' feet firmly on the ground: migration between different stacks becomes easier when you aren't tied to a proprietary hardware platform, and I also believe there is enough healthy competition in the hypervisor market to keep everyone honest.

Rob

"Second, 2-3ms is now old school, the target with the newer flash drives is <1ms, so you need to raise the bar a bit."

Sure. Maybe in a year or two? Today the prices don't work. So let's say everything is flash at some point. The customer run times go from 4-6 hours to 45 minutes and they are ecstatic. Now move those flash drives to a cloud. Suddenly, their report run is 8 hours. The IOs have to be nearby for many folks (not email). I don't see that changing.
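(The sensitivity Rob describes falls out of the same arithmetic: once the device itself is sub-millisecond, a few milliseconds of added round-trip per IO dominates the run time. A rough sketch -- the flash service time and added network latencies here are hypothetical:)

    # 8 million serialized IOs against a fast flash device, with varying
    # amounts of extra per-IO round-trip latency (hypothetical figures).

    ios = 8 * 10**6
    flash_ms = 0.3

    for added_ms in (0.0, 1.0, 3.0):
        hours = ios * (flash_ms + added_ms) / 1000.0 / 3600.0
        print("%.1f ms flash + %.1f ms extra -> %.1f hours" % (flash_ms, added_ms, hours))

    # 0.3 ms flash + 0.0 ms extra -> 0.7 hours (about 40 minutes)
    # 0.3 ms flash + 1.0 ms extra -> 2.9 hours
    # 0.3 ms flash + 3.0 ms extra -> 7.3 hours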

Chuck Hollis

Rob

I think you and I are thinking about things very differently.

If the report (and its associated resources like CPU, memory, storage, etc.) live in a "cloud" vs the "data center", they should run at roughly the same speed ....

Perhaps the problem is your definition of "cloud".

As far as "prices not working", have you asked the ecstatic business owners if they see value in getting their reports run in 45 minutes vs. 4-6 hours?

You might be surprised at the answer :-)

-- Chuck

