
March 19, 2014

Comments

Adam Sekora

Hey Chuck,

On this note: "There’s a long list of more debatable strong points: density, efficiency, serviceability, etc." I'm not sure where that logic comes from; if you could clarify, I would appreciate it.

From what my team has seen, VSAN is more dense in most, if not all, environments that can house at least two drives in a server (which is all rack mount and most blades). VSAN is more efficient in the respects that really matter: space and cost. Yes, it consumes more CPU cycles on servers, but why should we care how many CPU cycles it consumes if the combined cost is lower than that of a traditional storage array?

And from what we have seen, I would argue that VSAN is far more serviceable. Because it can tolerate a configurable number of failures, it protects us from the occasional outage caused by adding or removing disk shelves, from a controller going down while maintenance is performed on a second controller, and potentially from rack- or row-level power failures. This also makes it very easy to move the storage platform around within the datacenter without taking service outages.
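To make the failure-tolerance math behind this concrete, here is a rough back-of-the-envelope sketch. It assumes VSAN's default RAID-1 mirroring layout, in which tolerating n concurrent failures takes n+1 full data replicas, n witness components, and at least 2n+1 hosts (or fault domains); the function name is illustrative, not part of any VMware API:

```python
def vsan_ftt_requirements(ftt):
    """For a RAID-1 (mirrored) VSAN object, tolerating `ftt` concurrent
    failures requires ftt+1 full data replicas plus ftt witness
    components, spread across at least 2*ftt+1 hosts or fault domains.

    Returns (replicas, witnesses, minimum_hosts).
    """
    if ftt < 0:
        raise ValueError("failures-to-tolerate must be >= 0")
    replicas = ftt + 1        # full copies of the data
    witnesses = ftt           # tie-breaker components for quorum
    min_hosts = 2 * ftt + 1   # each component on a separate host
    return replicas, witnesses, min_hosts
```

So the common FTT=1 policy doubles raw capacity consumption and needs at least three hosts, which is the capacity/availability trade-off behind the serviceability argument above.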

I'm curious what other opinions are out there, and whether we have overlooked something.

--Adam Sekora
@vdoubleshot

Chuck Hollis

Hi Adam

Keep in mind, I've spent almost 20 years in the storage array world, so I very much understand and appreciate the perspectives.

Density: it's hard for server packaging to match the density of efficient array packaging -- if one considers storage by itself.

However, I've met more than a few people who are convinced that when all three disciplines are considered (storage, compute, network), better density results from server nodes with embedded storage, as you describe.

I would agree that combined cost should be the desired metric. However, everyone looks at the numbers a bit differently, so I don't go there. The only costs that matter are the ones you see.

I think the protection and serviceability aspects will be debated for a while. There are two major models in play, and each looks at the world differently.

You don't sound like you've overlooked anything, unless I'm missing something. Understanding more about your environment, goals, philosophies, etc. would help me be more specific, but I'm flying blind here.

If you'd like to discuss more, please drop me a note at chollis@vmware.com

Thanks!

-- Chuck

Wences Michel

Good article. We believe VSAN is a game changer, and, as in all things, it depends on the business requirements and the business problem we are trying to solve. We think this article is spot on, and we like the idea of "VSAN Plus" -- using the right tool for the right job. VSAN brings new innovation for solving certain modern storage problems at a better TCO for some use cases, while the traditional SAN will still do what it does best for the enterprise. So it is all good, and this is a win-win for enterprise storage solutions.

Chad Sakac

Disclosure - Chad here, and I'm an EMCer.

Chuck - I think you're spot on, and I, for one, am talking about the use of hyper-converged models like VSAN as part of the "persistence" universe, and about fit for workload, with almost every customer I talk to. It adds another compelling choice. The more we can make SPBM (and ViPR, for cases where that catalog of choices must extend beyond vSphere-based workloads) act as an abstractor of capability, the better.

Jae Kim

Chuck,
Your tiering examples are in line with what I am thinking, but perhaps we can be a bit more specific within the context of VMDKs. Today, I would imagine that most VMware customers provision a monolithic VMDK for a single guest: if the requestor of a VM says they need 500 GB for the app, a 500 GB VMDK gets provisioned. What is VMware's stance on this model changing, so that a VM is composed of multiple VMDKs? That is, a given VM would be made up of a "master VMDK" holding the root filesystem and app binaries, plus secondary VMDKs from an external tier where the bulk of the storage is actually required. The master VMDK would reside in VSAN, hold things like transient logging (/tmp, pagefile, etc.), and be of roughly the same size across VMs within the VSAN layer. This mimics what we do in the physical bare-metal world, where root volumes are on local disk and we allocate SAN LUNs for the data. In effect, should we not manage VMDKs the way we manage LUNs?
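The split Jae describes can be sketched as a simple provisioning helper. This is only an illustration of the "manage VMDKs like LUNs" idea, not VMware guidance; the function name and the 40 GB root / 256 GB data-disk sizing defaults are hypothetical placeholders:

```python
def plan_vmdk_layout(app_gb, root_gb=40, max_data_vmdk_gb=256):
    """Split a requested capacity into a fixed-size 'master' VMDK
    (root filesystem, binaries, transient logging) on VSAN, plus a
    set of uniform data VMDKs on an external tier, managed like LUNs.

    Returns a list of disk specs, master first.
    """
    layout = [{"tier": "vsan", "role": "master", "size_gb": root_gb}]
    remaining = app_gb
    while remaining > 0:
        size = min(remaining, max_data_vmdk_gb)  # carve LUN-sized chunks
        layout.append({"tier": "external", "role": "data", "size_gb": size})
        remaining -= size
    return layout
```

For the 500 GB request in the example, this yields one small master VMDK on VSAN plus two data VMDKs on the external tier, which is the same pattern as local root volumes plus SAN LUNs on bare metal.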

Chuck Hollis

Hi Chad -- agreed!

Chuck Hollis

Hi Jae

You bring up good points. We probably owe people guidance and documentation on how best to split things up in order to achieve what's being described here. Consider it a work in progress :)

-- Chuck


  • Chuck Hollis
    Chief Strategist, VMware SAS BU
    @chuckhollis

    Chuck has recently joined VMware in a new role, and is quite enthused!

    Previously, he was with EMC for 18 years, most of them great.

    He enjoys speaking to customer and industry audiences about a variety of technology topics, and -- of course -- enjoys blogging.

    Chuck lives in Holliston, MA with his wife, three kids and four dogs when he's not travelling. In his spare time, Chuck is working on his second career as an aging rock musician.

    Warning: do not buy him a drink when there is a piano nearby.
