This rather long post is basically a scripted version of the presentation -- although there are always a few things that usually get said that go above and beyond what's here.
Disclaimer: these are the slides and the story that I've personally been using recently. I'm not entirely sure if it's 100% official or sanctioned or anything else like that. Hope you find this useful and entertaining!
Well, if you're involved in IT, those "interesting times" have started.
Many of us believe that IT infrastructure is ripe for a seminal change in how it's built, operated and paid for.
This structural change in IT is the primary motivation behind the VCE Coalition that was recently announced.
Simply put, this coalition is about three industry-leading companies aligning investments to accelerate this industry transition.
So, let's get started ...
Before we get into IT infrastructure specifically, I'd like to take a moment and consider these other forms of industrial infrastructure.
Telephony. Modern power generation. Containerized shipping. Automated manufacturing.
Each one of these forms of infrastructure was subject to a radical transformation in the last century. A complete re-thinking of how things got done.
One could argue that -- now -- it's IT's turn to go through a similar transition.

I always like pointing out how we used to make phone calls. One of the biggest advances in telephony long ago was when they came up with the idea of "wheels on the chair" so the operators could move around faster when they patched phone calls.
When we talk about changing the nature of IT infrastructure, we want to do far more than just "add wheels to the chair".
How IT Will Evolve
At the same time, we're seeing the formation of more external clouds. Sure, there are many concerns and questions as to their suitability for enterprise IT environments, but there are certain aspects of them that are just plain fascinating.
The fact that they're flexible, dynamic, on-demand and efficient is extremely attractive to all of us that work in the IT business.
Virtualization Changes Everything
Virtualization, specifically VMware, is changing the game rapidly.
We're meeting more and more customers who've reached the "tipping point" in virtualizing their environment.
They now have the opportunity to change their internal operating model to look more like a service provider or cloud, and less like a traditional, physical IT model.
At the same time, you've probably noticed more and more service providers using VMware to target enterprise IT requirements. They're building these external clouds to provide many of the features that IT organizations need.
Well, since both sides are using pretty much the same technology and the same standards, we have an interesting picture taking shape.
Virtualization is not only transforming both sides of the equation, it gives us the ability to easily move things around if we choose.
Pool workloads more efficiently inside the data center. Move them around between data centers. Move them to an external service provider, and back again.
Now, we all know that there's more to moving workloads than simply moving a running program. There's boatloads of information as well. And, of course, the environment has to be demonstrably secure.
But -- through the magic of powerpoint and hand waving -- imagine these environments coming together to create a hybridized environment that offers the best of both worlds.
The Private Cloud Model
This is what all three companies are calling a "private cloud" -- everything IT wants from the cloud, but with the control and migration path that IT needs.
In this fully virtualized model, IT has an entirely new set of infrastructure choices -- run very efficiently internally, use any number of compatible service providers, or any dynamic combination of the two.
Just to be clear, this is not old-school outsourcing. You could move a workload to a service provider on Monday, take it back on Tuesday, and give it to someone else on Wednesday.
Put differently, we believe the potential now exists to virtualize -- or containerize -- the vast majority of workloads.
And, as we'll see in a moment, this same private cloud can support client and desktop experiences that follow users where they go.
All under the control of IT.
Building The Private Cloud
This view of the cloud is very different from most in a key aspect -- there's no presumption that you'll have to rewrite your applications to move them to the cloud.
So many cloud propositions start from the premise that all you have to do is rewrite everything to use their cloud.
Given that there's about ten gazillion lines of code out there, we'd call that a "barrier to adoption".
By comparison, all this model asks is that programs run on the Intel instruction set -- or can get to that if needed.
That means you can run just about any damn thing in the cloud if you choose. Off-the-shelf software. New programs written in modern tools. Even thirty-year-old COBOL code can go to the cloud.
If we look at the technology enablers of a private cloud, we think there are three.
First, we'll need a cloud operating system. That comes from VMware.
Second, we'll need cloud internetworking and a unified computing environment. That comes from Cisco.
And, finally, we'll need to manage information and resources effectively in this environment -- something EMC has dubbed "virtual information infrastructure".
We see this transformation playing out in three places -- within the data center, on the desktop, and with compatible service providers.
Make no mistake, these newer service providers bring a lot to the table -- they offer you entirely new options to choose from, and here is where we find the legacy-free operational models that are so compelling.
At the same time, these players can be thought of as "the new outsourcers" -- they will be making a strong case that they can do a better job of delivering IT infrastructure services than internal IT organizations can.
What Is VCE?
The Virtual Computing Environment coalition was formed to accelerate this industry transition to fully virtualized environments -- and private clouds.
For well over a year, the three companies have been coordinating investments in a number of areas, and the result is a pretty compelling picture.
First, all three companies share a common vision: the evolution of IT infrastructure towards private clouds.
And, you've probably already seen some of the alignment we've done around our respective roadmaps: UCS and Nexus from Cisco, vSphere and View from VMware, as well as V-Max and Ionix from EMC -- all largely introduced this year.
As part of our public announcement on November 3rd, we added more to the picture.
For example, we've introduced a reference architecture -- the Vblock -- that magnifies the benefit of our respective technologies, and greatly accelerates deployment.
And, on top of that, we've started to characterize specific use cases for virtualization at scale.
But, despite being three best-of-breed companies, there are times when people prefer dealing with only one company -- customer support, for example. So we've started to invest in new constructs that help us act as one company when we need to.
Finally, we think we bring the single most compelling partner ecosystem to bear on this opportunity -- everything from resellers and system integrators, to outsourcers and service providers.
Each company has a very broad portfolio, but -- for the purposes of this discussion -- I'm just going to hit a few highlights to emphasize the strengths of our respective technologies.

As we do this, I'll try to give you a sense for how the integration is being done, and how we're wrapping services and solutions around the stack.
And I want to make sure we're very clear as to where we are not only with enterprise data centers, but compatible service providers as well -- since that's an important part of the story.
Pretty cool stuff, no? Simply put, VMware's technologies are the key that makes all of this possible.
vSphere is the cloud operating system that does most of the heavy lifting.
vCloud is both APIs and a partner ecosystem that extends and improves the compatible choices available.
And, finally, VMware has just announced VMware View 4 that tackles the user client experience by providing an integrated management environment.
Rather than do a deep dive on all the goodness in the VMware portfolio, I'd like to highlight just a few key features that illustrate some important points.

VMware vSphere Architecture
It creates abstracted pools of resources using cooperating hypervisors across multiple servers, networks and storage devices.

Right now, there is no equivalent technology in the marketplace.
What many of us appreciate is its open extensibility -- we can easily extend any of its capabilities -- management, networking, storage, security -- in a way that we couldn't do with other operating environments.
Very often, vSphere is called a "software mainframe", because many of its concepts are eerily familiar to those who've worked in that environment -- except, this time, we're doing it with commodity technologies.
vCompute: Powerful Enough For All Applications
One of the perception barriers that we need to overcome is that virtual machines can't scale to handle the really big applications.
This year, with the combination of vSphere and the newer Intel Nehalem processors as found in Cisco's UCS, we're at some pretty spectacular performance levels.
I mean, take a look at these stats -- 8 virtual CPUs per VM, a quarter-terabyte of RAM, well over 300,000 IOPS.

Folks, this is bigger than Big Unix. These are healthy-sized mainframe workloads running out of individual virtual machines.
Sure, there are frequently concerns about running big apps in VMs, but sheer performance is no longer one of them.
Our default planning assumption is that, over the next few years, we'll see less and less R+D going to the proprietary RISC chips. Intel is outspending all of them combined. Already today, it's hard to make a price/performance argument in favor of legacy RISC architectures.
Well, if most of the world is going Intel, what becomes of the legacy UNIXes? Solaris? HP-UX? AIX?
Our assumption is that -- over time -- most of today's workloads running on these legacy environments end up on something like Windows or Linux, running on a modern hypervisor like VMware -- all on an Intel architecture.
Advanced Features In VMware
We're now entering an era where VMware can do very useful things that we just can't get done any other way. We're no longer talking "as good as" -- we're talking "better than" other alternatives.
Here's one example.
We all know that going from physical to virtual saves power. But, starting last year, we've been able to go one step further.
Since we're using a shared pool of servers, VMware's Distributed Power Management feature allows workloads to be consolidated on fewer servers during non-peak times -- and the unused servers can be simply powered off.
When workloads increase, the opposite happens.
And, in many cases, this can result in an additional 50% of server power savings above and beyond what we got with ordinary virtualization.
This has the interesting property of turning spare server capacity from mostly opex into mostly capex -- you only pay for the server power and cooling when you need it.
Something we all want in our private clouds!
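The consolidation idea can be sketched with a toy first-fit-decreasing packer. To be clear, this is purely illustrative -- it is not VMware's actual DPM algorithm, and the VM loads and host capacity below are invented numbers:

```python
# Hypothetical sketch of the DPM idea, not VMware's implementation.
# During off-peak hours, pack VM loads (as percent of host capacity)
# onto as few hosts as possible; the remaining hosts can be powered off.

def consolidate(vm_loads, host_capacity):
    """First-fit-decreasing packing: returns a list of hosts,
    each a list of VM loads that fit within host_capacity."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])  # no host has room; keep one more powered on
    return hosts

# Ten lightly loaded VMs that might otherwise keep ten hosts running
vms = [15, 10, 20, 5, 25, 10, 15, 5, 20, 10]
hosts = consolidate(vms, host_capacity=80)
print(len(hosts), "hosts stay powered on")
```

When demand picks back up, the same logic runs in reverse -- hosts power back on and the workloads spread out again.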
VMware Fault Tolerance
Another very promising capability is VMware's new fault tolerance feature. We're not talking about standard UNIX or Windows clustering here -- we're talking about something far better.
Take any arbitrary application. Put it in a virtual machine, and set a flag. The virtual machine maintains a precise, synchronized copy of its state using a neighbor.
If the server fails, the second one picks up precisely where the first one stopped -- not in minutes, in milliseconds.
There's much more going on on the server side with VMware, but let's take a look at what's happening on the desktop.
Indeed, there's a school of thought that many organizations will build their first private clouds to support virtual desktops.
From Desktops To Users
We think that 2010 will be the year when many IT organizations take a hard look at their desktop strategy, and consider moving away from provisioning physical devices and towards provisioning user experiences that follow users around.
For one thing, Windows 7 is here. And most organizations have been trudging along on Windows XP running on ancient hardware. Something definitive will need to be done in many cases.
And the question is -- will you do what you did before -- buy a bunch of desktops and laptops -- or perhaps consider a new option?
From VDI to CVP
You're probably familiar with the first part of the story. VDI -- or virtual desktops -- take a desktop image, wrap it in a virtual machine, and run it on a fast server with a fast network and a thin client.
Nice trick, but it doesn't work for everyone -- especially since more and more of our workforces are becoming mobile.
With VMware View 4, we've got the second piece of the puzzle -- CVP, or client hypervisors.
These are thin layers of virtualization that run on the user device and support the exact same desktop image that's running on the server.
And -- together -- these work cooperatively to create a desktop virtualization model that works for just about everybody.
The Power of Choice
Earlier this year, I was on a small proof of concept program using pre-release technology here at EMC. The idea was to give people $500 to buy their own PC, and prove this stuff out.
I ended up buying a Macbook Air. Very sexy.
I loaded up a hypervisor that ran the corporate XP image. When I was on the corporate network, I ran in what appeared to be a thin client mode -- everything ended up on the server.
And when I was on airplanes, I had a local image with my files and stuff. When I connected to the network, it synced in the background. All very cool.

I was then able to log in using my son's gaming PC at home and did the same thing. Once I authenticated, boom -- it was all there -- and at blazing speed.
When I'm in the office, I use a four-year old cost-reduced laptop. When it gets done booting, I run in thin client mode.
Here was the cool part of the pilot -- wherever I went, my desktop followed me -- network or not. If I lose a device, or want to use another one, it's no biggie. If I were to fall in love with a netbook, or maybe Apple's rumored iPad, it's all possible.
I think we'd all like to live in that world. More importantly, our users want to live in that world!
Another thing to think about is -- where do you want to point this? Last year, it seemed that most of the VDI projects were pointed at transactional workers. This year, the likely target seems to be knowledge workers who want premium experiences that conveniently follow them around.

So, once again, we think in 2010 many organizations will take a hard look at this sort of approach, especially as they consider their plans around Windows 7.
The New Operational (and Consumption) Model
OK, time to get controversial.
Clouds -- of any sort -- are built differently, operated differently and consumed differently than traditional IT.
It's one thing to get comfortable with the technology, it's another thing entirely to think in terms of different operational and consumption models -- because, here, we're dealing with the 3 P's -- people, process and politics.
Let me share with you some of what we're finding. For example, a while back we studied 200 traditional IT provisioning exercises -- you know, how big, how many, and so on -- the kind of thing that IT does every day.
We came back six months later, and found that 92% of them were fundamentally wrong -- way too much, or way too little. We ended up calling this "have a hunch, provision a bunch". Wasteful of resources, not to mention everyone's time and effort.
New approach -- take a typical, ordinary request, and put it in a modest virtual machine. A "bar code" tells the server, network and storage what to do.
The user will come back with one of three statements -- one, "not good enough", turn the knob to the right to deliver more performance, availability, etc. Or, perhaps "you're charging me too much", in which case we turn the knob way to the left. And, once in a while, we'll get it just right.
Here's the point -- most of the before-the-event sizing we do is proving to be totally useless. Instead, let's start to adopt a model where we adjust things after the fact as things change, and not try to divine the future ahead of time.
Better yet, we've found that business users are overjoyed by this approach -- they get what they want, when they need it. In business, cheap is good, but fast is better!
Many organizations are ready to take the next step, and start delivering self-service computing to some of their users. The only thing better than low-touch is zero-touch!
Earlier this year, I set up an environment on Amazon EC2. The hardest part was finding my reading glasses to read my credit card info. I didn't do it because it was cheap -- I did it because it was EASY.
And I'm absolutely convinced that the majority of the appeal of cloud-like discussions to business types is getting what they want, when they need it, and as they need it. Cheap is always good, but fast and easy is better.
If you think about it for a moment, there are probably good chunks of your user population that'd be thrilled by a self-service portal. Think application test and dev, business analytics users, power users -- anyone who's got a big desktop is probably a likely candidate.
The Giant Computer
Here's what the specs look like today, but it's certain that we'll see bigger numbers with progressive releases.
Some people ask the question -- why not use another hypervisor?
Well, there are lots of pragmatic reasons to use VMware for the time being, but one big one is the ability to aggregate large pools of resources into a single cluster.
The bigger the pool, the better.

And, if for some reason today's specs don't look attractive enough, don't forget, we can always build more than one!
The Cisco Offering
Indeed, in addition to de-facto market leadership, Cisco is bringing many of the advanced features we'll need in these cloud-enabled networks.

But, in a few important areas, they've added much more to the discussion -- unified fabric, and -- more recently -- the UCS, or Unified Computing System.
So let's get started.
Call it unified fabric, call it converged ethernet, call it FCoE -- it's all pretty much the same discussion.
We want to move from the world on the left (bad) to the world on the right (good).
We want to use a single ethernet pipe to connect everything in the data center, and change its behaviors via software commands, rather than meddling with physical hardware.
The idea is simple -- wire once, and walk away. The capex benefits are pretty easy to understand -- better sharing of resources, leveraging the scale economics of ethernet, etc.
But the big play here is opex -- every time a change needs to be made, it gets made via software -- new topologies, new configurations, short-term resource swings, etc.
For these new private cloud builds, we're strongly encouraging people to seriously consider FCoE (and the supporting converged ethernet ecosystem) going forward. No need to rip and replace.
And we think this is going to prove out to be a huge benefit in these next-gen datacenters we're now building.

Clustering At Scale
It's one thing to cluster resources within a single data center. It's another thing entirely to do this across data centers, some of which may actually be owned by service providers.
Lots of challenges to go solve, but there's a very specific network challenge that Cisco is addressing, and that's creating a single logical LAN or SAN that behaves and operates like it's in a single data center, even though it might incorporate multiple locations.
And, thankfully, Cisco is now starting to introduce technology that solves this part of the equation.
The Cisco UCS
When we at EMC first got a look at what Cisco was working on, we got pretty excited. As you might suspect, we get a close look at everything from mainframes to commodity servers as part of our storage business.
And, simply put, this was the single most innovative computing architecture we had seen in a very long time.
There's a lot to love here -- it's the first server we've seen built and designed to support virtualization at scale. Gotta love that.
The "big memory" feature means we can support far more VMs per blade than any other offering in the market -- and that directly translates into compelling economics.
The converged fabric model means we can support a converged management model as well -- using coordinated templates to manage provisioning in a far simpler and more elegant way than we've ever been able to do in the past.
A Different Model
The best way to run a UCS is to think of everything as a network service.
Build one out, and load up your virtual machines. They load balance automatically, thanks to VMware's DRS.
When you need more compute resources, rack them up without a power-down. The UCS discovers the new resources, and rebalances the load. No downtime, no reconfig exercises -- simply load and go.
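That rebalancing step can be sketched with a toy greedy algorithm. This is a hypothetical illustration of the concept, not the real DRS logic -- the blade names and load figures are made up:

```python
# Hypothetical sketch of "load and go" rebalancing, not VMware's DRS.
# When a new host joins the pool, VMs migrate off the busiest hosts
# until no single move can narrow the load gap any further.

def rebalance(hosts):
    """hosts: dict mapping host name -> list of VM loads (percent).
    Greedily migrates VMs from the busiest host toward the least busy
    one, as long as each move shrinks the gap between them."""
    while True:
        loads = {h: sum(vms) for h, vms in hosts.items()}
        busiest = max(loads, key=loads.get)
        idlest = min(loads, key=loads.get)
        gap = loads[busiest] - loads[idlest]
        movable = [vm for vm in hosts[busiest] if vm < gap]
        if not movable:
            return hosts
        vm = max(movable)  # move the biggest VM that still shrinks the gap
        hosts[busiest].remove(vm)
        hosts[idlest].append(vm)

# A third blade is racked up with no power-down; the pool evens itself out
pool = {"blade-1": [30, 20, 10], "blade-2": [25, 25], "blade-3": []}
rebalance(pool)
print({h: sum(vms) for h, vms in pool.items()})
```

Each migration strictly reduces the spread of load across the pool, so the loop always terminates -- the same property you'd want from any automatic placement scheme.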
Just like a network switch works!

Onwards And Upwards
But most people want to run multiple clusters, and frequently in multiple locations.
So most of the R+D is now focused on geographically dispersed clusters, where the various nodes either all belong to the same organization, or might be a combination of owned and service provider assets.
Lots of interesting things that need to be done to make all of that work, but you'll probably realize that the operational model has changed significantly.
Instead of managing individual "things" -- devices, etc. -- we're now managing a pool of resources that might not be all in one place any more.
The management model -- and the operational model -- need to change.
Thinking In Terms Of Service Delivery
When we start fully envisioning this model, it isn't long before we realize we need some new "knobs" to control overall aspects of the environment such as allocation of resources, end-to-end service delivery, and security compliance.
The good news is that the technology now exists to run IT using these new models.
The frequent challenge is that most traditional IT organizations don't have the roles and responsibilities established for this new operational model.
This means that either (a) an existing group has to have its charter expanded to run the end-to-end model -- the NOC guys are a typical candidate, or (b) an entirely new team has to be assembled to run this new environment effectively.
Personally, I believe this transition to a service provider operational and management model will turn out to be the single largest obstacle to overcome for larger IT organizations. There are literally decades of history and culture built up around running physical IT, which is no longer entirely useful for the virtual world.

More on that in a moment.
What EMC Brings To The Table
Most people would expect EMC to supply the storage for this next-generation stack, and they'd be right.
But -- in addition to storage, there are several other contributions we've been investing in.
One is information management -- whether physical or virtual, information has to be backed up, archived, replicated and so on. There are some very interesting new considerations in these fully virtualized environments where applications and information can move around dynamically.
A while back, EMC realized that these environments would have to be managed very differently, so we've invested over $1B of R+D and M+A in creating EMC Ionix -- a next-generation resource management stack for managing next-generation IT.
And, finally, no one is going to use any kind of cloud unless it's provably secure.
Our RSA division has been working hard to create the new security frameworks we need for these environments.
Storage, backup, management and security -- that's what EMC brings to VCE.
The Impact Of Storage Architecture
Much like we'd like to build giant pools of compute that can dynamically adapt, we'd also like to build giant pools of storage that can do the same thing.
A good example is this year's V-Max, which uses a scale-out clustered architecture to achieve pretty decent scalability: 2000 drives and 2 petabytes of usable capacity, for example.
But we'd like to do more than simply build giant arrays.
We'd like for our customers to be able to federate multiple arrays in multiple locations, and intelligently move information to the right place at the right time, following the workloads as they move around.
You've seen some of that thinking already in EMC's Atmos product, but there's more to come.
Fully Automated Storage Tiering -- FAST
One of the things we like about VMware is that we can take a bunch of different workloads, throw them on a VMware cluster, and VMware just figures out how to optimize things.
We'd like to do the same sort of thing in the storage world, but in our world, we use different technologies.
On one hand, we've got enterprise flash drives. Incredibly fast, reliable and energy efficient, but on the expensive side.
And, on the other hand, we've got very large SATA drives that are extremely cost effective, but aren't particularly fast.

The idea behind FAST is amazingly simple -- break up storage into manageable chunks, and then move the pieces around dynamically based on how the data is being accessed.
Turns out that the vast majority of applications have a "hot spot" -- 5 or 10% of the data is responsible for 90 or 95% of the I/O. The problem is -- the hot spot moves around as usage patterns change.
Well, if we can use the power of the array controller to dynamically move popular information onto flash, and move the less popular information onto far cheaper SATA, we can achieve both significantly higher performance and substantially lower cost -- at the exact same time.
And that's the idea behind FAST.
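As a concept, a re-tiering pass might look something like the following. Again, this is a sketch of the idea only -- not EMC's actual FAST implementation -- and the chunk counts and flash capacity are invented:

```python
# Illustrative sketch of the FAST concept, not EMC's implementation.
# Split a volume into chunks, count I/O per chunk, and periodically
# promote the hottest chunks to flash while the rest sit on cheap SATA.

from collections import Counter

def retier(io_counts, flash_slots):
    """Return (flash, sata) sets of chunk ids, hottest chunks on flash."""
    flash = {chunk for chunk, _ in io_counts.most_common(flash_slots)}
    sata = set(io_counts) - flash
    return flash, sata

# 10 chunks; chunks 3 and 7 form the "hot spot" taking most of the I/O
io = Counter({chunk: 5 for chunk in range(10)})
io[3] += 500
io[7] += 450
flash, sata = retier(io, flash_slots=2)
print(sorted(flash))  # [3, 7] -- the hot spot lands on flash
```

Run the same pass again after the counters shift, and the placement follows the hot spot around -- which is the whole point.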
We believe that, much like VMware has permanently changed the economics of computing, technologies like FAST will permanently change the economics of storage. It's that significant.

And, as we think about it a bit further, there's no reason why those SATA drives couldn't be compressed, deduplicated and spun down -- making for extremely cost-effective storage.
Now that we've taken a tour of some of the technology highlights, let's talk about how some of these pieces are coming together -- both in terms of reference architectures, as well as specific use cases.
You may not be surprised by the enormous amount of IT effort spent selecting servers, storage, network, etc. -- procuring it all, integrating the pieces, supporting it all, and so on.
Our goal here is to create entirely new options for IT organizations wanting to accelerate virtualization at scale -- without taking any traditional options away.

Introducing Vblocks
One of the ways we're doing that is with Vblocks.
Representing the best technologies from VMware, Cisco and EMC, they are pre-optimized and pre-integrated for fast and predictable deployment.
With Vblocks, it's a different conversation.
You tell us how many virtual machines you'd like to run, we show you the appropriately configured Vblock.
One of the frequent reactions to a Vblock is -- hey, it looks closed, don't like lock in, etc. This sort of reaction surprises me, because the reality of the situation is 180 degrees in the opposite direction.
Customers still have the choices they've always had. VMware's products work with everything, Cisco's products work with everything, EMC's products work with everything. And the vast majority of IT infrastructure building is mix-and-match between these vendors and others.
We wanted to create a new choice to go alongside existing choices -- if you want best-of-breed, and you want it quickly, you now have a new option to consider.

Now, there's nothing preventing anyone from building their own Vblock-like solution, or combining various technologies from other vendors -- but many IT organizations are looking for a more efficient way to put capacity on the floor in a predictable and optimized manner.
And that's what Vblocks are all about.
You may be looking at the big one -- supporting 3000 to 6000 virtual machines, and be wondering -- who needs that?
Well, in addition to very large enterprises and service providers, even mid-sized organizations considering desktop virtualization may need this kind of scale.
Element Management -- Redefined
Well, of course, you can manage it the traditional way -- server, storage, fabric, etc. all individually -- but that defeats some of the purpose here.
So EMC has built an "element manager" for Vblocks.
It sits on top of server, storage, fabric and virtualization element managers to provide an integrated experience, leveraging the template-based models found in UCS, vSphere, EMC storage and so on.

This new element manager in turn plugs into your existing enterprise framework.
Vblock Use Cases
So, what are people interested in using Vblocks for?
But most people aren't interested in ripping and replacing what they've already got running.
They're more interested in Vblocks and VCE-related technologies for their next wave.
Some of these people are looking at virtualizing their "tier 1" applications -- the heavy workloads.
Even if they might be a bit skittish about putting that Big Database or that Big Instance on a virtualized server, there's plenty of the supporting cast and smaller versions that can be effectively virtualized today.
VCE Joint Services
So far, almost everyone has pretty much liked the idea of an integrated best-of-breed approach to next-gen IT infrastructure. Many customers are already using two, or perhaps all three of the vendors associated with VCE.
However, there are times when it's advantageous to act and behave as a single company, rather than three independent ones, and that's the basic rationale behind our investments in joint services capabilities.
All three companies, though, are very much agreed on one guiding principle -- this is all about enabling the ecosystem of resellers, integrators, outsourcers, consultants and other partners. Everything we do should be seen as win for not only our customers, but our partners as well.
Unified Customer Engagement
On the pre-sales side, we want to make sure that we have a coordinated engagement with a customer with people who are comfortable with the combined portfolio and offering. That means we have to create a new skill set.
And, on the post-sales side, we want to make sure that there's "one throat to choke" when dealing with complex support issues. Some pieces are already in place (for example, joint customer support), others are being built out right now.
The goal, as always, is the same: bring our respective individual strengths to the table when appropriate, yet be able to act as one company when needed.
Seamless Support Experience
Each of the three companies already had world-class enterprise support capabilities in place.
And customers appreciate the depth of skills in each organization, and don't want another layer between them and the people who can help them.
By using a combination of cross-training and integrated process flow, we were able to create the benefits of a unified support experience without adding yet another layer of overhead.
Put simply, call any one of us, and you've called all three.
VCE Coalition Services
Again, each company has a broad range of specific strengths in different aspects of helping customers transition to next-gen environments like a private cloud -- sometimes delivered directly, but more frequently through partners.
By rationalizing and integrating the combined professional services portfolios from each company, we were able to provide an end-to-end view of the private cloud journey for customers that wanted to take an incremental path towards evolution.
But we thought we could add something new to the discussion as well.
Sure, they could wait for all of their technology to be incrementally refreshed over time. And perhaps manage a large-scale organizational change effort at the same time.
Or, perhaps, take a look at a shortcut?
That "shortcut" turned out to be the genesis of Acadia: a joint professional services venture based on standing up Vblocks and VCE environments using a BOT (build, operate, transfer) model.
Much like Vblock is a new choice for getting to good faster than traditional approaches, we wanted the same sort of alternative for the deployment, operational and consumption models that go along with private clouds.
Although Acadia may occasionally work directly with customers, the primary goal (once again) is partner enablement.
The Journey Has Begun
When people started virtualizing their environments to gain efficiency, they were also laying the foundation for their private clouds. By putting applications and workloads in nice, abstracted and relocatable containers, the stage has been set in many IT organizations.
The primary challenge on most people's minds these days is regaining control of these large-scale virtualized environments -- and to do so in such a way that the resulting operational model looks more like a cloud or service provider, and less like a traditional IT organization. Although the technology is there to do things better, it's pretty obvious all the heavy lifting will be in people, process and politics.
The real payoff to all of this is the creation of an entirely new set of choices for IT infrastructure -- run very efficiently in the data center, use compatible service providers, or any combination. Or, perhaps, get real good at this stuff and start offering your services to others!
Any way you look at it, the transition of IT infrastructure to private cloud models has begun in earnest.
In our minds, it's not about what the future will look like -- it's about how quickly you want to get there.