Paul oriented everyone by replaying a slide from earlier in the session, pointing out that he -- and VMware -- were responsible for the "virtual infrastructure" part of EMC's two-part strategy.
He did a good job setting historical context by sharing how he had started out working for Intel in the very early days, when Intel was considering its first microprocessor. He recounted how, at the time, no one really saw the potential for what the microprocessor could do or how it could change computing, but that investing in the technology and extending its applicability and ecosystem over time gave us the IT world we live in today.
He felt very strongly that virtualization was in a similar early stage -- people were just beginning to realize how powerful virtualization technology could potentially be, and how it could change many aspects of the technology world we know today.
And that the journey had just begun, in his opinion.
He then replayed an earlier theme introduced by Joe, that IT today is primarily about offering customers efficiency, control and choice, and positioned the three major VMware initiatives against that background.
The current VDC-OS initiative (virtual data center operating system) was targeted towards data centers that wanted better efficiency and control of both their application workloads and their server environments.
The vClient initiative was targeted at what users saw: creating virtualized desktop and rich application experiences that could intelligently target any device the user chose, and exploit its native capabilities -- again, about efficiency and control for the IT department, as well as new choices for users.
And the recently announced vCloud initiative was aimed at service providers to provide compatible infrastructure choices that could be controlled by enterprise IT if needed.
All very logical, all very sensible.
He told a story about how the very first environment targeted by the VMware team was the simple use case of running a guest Windows OS on a Linux workstation -- nothing more.
The client hypervisor evolved into the server hypervisor over time -- and more people saw the potential of this technology.
A major step forward occurred when multiple hypervisors started cooperating with not only each other, but external management and orchestration frameworks.
The result was that virtualization quickly evolved from a per-server model to a pooled-resource model, with DRS (distributed resource scheduling) as an obvious example.
Paul took the opportunity to point out that this sort of software is extremely difficult to do well -- multiple, independent entities all communicating and cooperating, and doing so in a predictable and efficient manner. He flatly stated that the competing virtualization technologies were still perfecting individual server hypervisors, and hadn't made the leap into multiple cooperating hypervisors, which VMware had been doing successfully for quite some time.
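To make the pooled-resource idea a little more concrete, here's a minimal sketch of that kind of cooperation: a toy scheduler that rebalances virtual machines across a pool of hosts when any one host runs hot. It's purely illustrative (the host names, threshold and greedy placement are my own inventions), not VMware's actual DRS algorithm, which also weighs memory, affinity rules and migration cost.

```python
# Illustrative only: a toy "pooled resource" scheduler in the spirit of DRS.
# Hosts and VMs are plain dictionaries; nothing here models real DRS.

def host_load(host):
    """Total demand of the VMs currently placed on a host."""
    return sum(vm["demand"] for vm in host["vms"])

def rebalance(hosts, threshold=0.8):
    """Move VMs off hosts whose utilization exceeds the threshold."""
    moves = []
    for src in hosts:
        while host_load(src) > threshold * src["capacity"] and src["vms"]:
            vm = max(src["vms"], key=lambda v: v["demand"])  # heaviest VM first
            dst = min(hosts, key=host_load)                  # least-loaded host
            if dst is src:
                break
            src["vms"].remove(vm)
            dst["vms"].append(vm)
            moves.append((vm["name"], src["name"], dst["name"]))
    return moves

hosts = [
    {"name": "esx01", "capacity": 100, "vms": [{"name": "db", "demand": 60},
                                               {"name": "web", "demand": 30}]},
    {"name": "esx02", "capacity": 100, "vms": [{"name": "mail", "demand": 10}]},
]
print(rebalance(hosts))  # e.g. [('db', 'esx01', 'esx02')]
```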
This all turned out to be merely a warmup for what was to come.
The Cloud As An Architecture
Paul made an interesting and useful distinction between "cloud as architecture" (e.g. a way to build computing environments) and "cloud as service" (a way to acquire computing resources) -- and I felt this was a very useful point.
He then presented a fascinating view of cloud-as-architecture which I'm stealing and adding to my decks going forward. Sorry, you can't see how the slide built up in this medium, but try to imagine it building as I describe it.
He started by creating an idealized "cloud" container -- whether it be inside a data center, in a service provider's facility, or both.
At the bottom, he saw that these architectures would be built from industry-standard building blocks that we were all familiar with. He took pains to point out that these wouldn't necessarily be the cheapest components, just the best components for the job at hand.
Above that, software helped these industry-standard components deliver scale and availability (think ESX, for example). And, to exploit this scale and availability, existing applications had to be "wrapped" in a virtualization layer -- they couldn't use a cloud architecture natively without being rewritten.
Above the software was a key insight -- that virtualization created an obvious and compelling place in the stack to inject policies regarding security, compliance and other governance concerns. The architectural attractiveness of this sort of approach shouldn't be lost on too many people.
Going farther up the stack, there will be a need for management that monitors service delivery and re-orchestrates resources as needed.
Finally, this architectural stack would provide idealized support for newer application development frameworks (e.g. Ruby on Rails et al.), alongside support for legacy applications.
And this leads to the key point of the slide: virtualization is the only way to make this happen in an evolutionary way -- a point lost in some of the other cloud discussions I've been following.
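The policy-injection insight in particular is easy to sketch. Here's a toy illustration of what it might look like to interpose policy checks at the virtualization layer before a workload is placed or moved; the specific rules (PCI workloads only on hardened hosts, EU data stays in EU sites) are hypothetical examples of mine, not any product's actual policy model.

```python
# Illustrative sketch of "policy injection" at the virtualization layer:
# before a VM is placed or migrated, a stack of policy checks gets a veto.
# The rules below are invented for illustration purposes only.

POLICIES = [
    lambda vm, host: (not vm.get("pci")) or host.get("hardened", False),
    lambda vm, host: vm.get("data_region", host.get("region")) == host.get("region"),
]

def allowed(vm, host):
    """Return True only if every policy permits this placement."""
    return all(rule(vm, host) for rule in POLICIES)

vm = {"name": "payments", "pci": True, "data_region": "EU"}
print(allowed(vm, {"name": "esx-eu-01", "region": "EU", "hardened": True}))   # True
print(allowed(vm, {"name": "esx-us-07", "region": "US", "hardened": False}))  # False
```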
Considering The vSphere Architecture
Paul then shared a high-level conceptual view of vSphere oriented around two primary functions: aggregation of resources (vCompute, vStorage, vNetwork) and a layer that handles services and policy.
He also returned to a recurring theme -- even though VMware products such as vSphere would have base capabilities, he thought any VMware product should be extensible with value-add from other companies that had specific expertise VMware lacked.
Hence the "plug-in" architecture shown here.
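To illustrate the plug-in idea in the abstract: the platform ships a default implementation of some capability, and a partner with deeper expertise can register a richer one that gets preferred. The registry and names below are invented purely for illustration; this is not VMware's actual extension API.

```python
# Sketch of a generic plug-in registry: a built-in capability can be
# superseded by a higher-priority vendor-supplied implementation.

class Capability:
    registry = {}

    @classmethod
    def register(cls, name, provider, priority=0):
        cls.registry.setdefault(name, []).append((priority, provider))

    @classmethod
    def resolve(cls, name):
        # Highest-priority provider wins; the built-in is the fallback.
        return max(cls.registry[name])[1]

Capability.register("thin_provisioning", "built-in", priority=0)
Capability.register("thin_provisioning", "array-offloaded (vendor plug-in)", priority=10)
print(Capability.resolve("thin_provisioning"))  # array-offloaded (vendor plug-in)
```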
Paul stated the obvious -- if the proposition was that all data center applications would have to be virtualized to participate in a cloud (whether internal or external), the virtualization layer should be prepared to support the largest and most demanding applications.
And he presented some fairly compelling evidence that this would be the case sooner rather than later.
The vast majority can be comfortably virtualized today (as we know), but there's a long tail that must be considered.
As you can see, Paul clearly committed that the next version of VMware's flagship product would have the "beef" to handle very large applications: up to 8 virtual CPUs in a single virtual machine, 256 GB of memory per virtual machine, 40 Gb/sec of network bandwidth, and more than 200,000 IOPS.
That's a lot of crunch for a single application instance, isn't it? And that's just in 2009.
He showed how performance scaled linearly, and then compared the virtual machine results against a native machine.
Now, of course there was a tax for using virtualization on this particular test (if I remember correctly, TPC-C workloads drive very small I/O transactions, which probably isn't the friendliest workload for a hypervisor).
He also stated that the overhead associated with all forms of virtualization was dropping rapidly with each successive release.
And then he put it all in perspective -- we were talking nose-bleed levels of performance here to start with.
This single virtual machine was delivering 24,000 transactions per second, showed near-linear scalability from 1 to 8 virtual CPUs, and was driving 250 MB/sec in very short I/O blocks, most likely 512 bytes.
That's impressive, if you think about it -- for a single virtual machine. Sure, there are some apps out there that need to drive more from a single application image, but not a whole lot.
And that's what can be done today -- without considering faster hardware (coming this year) or further optimizations in the hypervisor.
I think Paul was having fun, because he wanted to put up a slide that showed all the hardware required to drive this sort of workload today.
So he put up a lab picture of the 500 CLARiiON disk drives that were required to drive all of this I/O, and superimposed a red arrow on the relatively tiny blade that was running the virtual machine driving it all.
As Chad Sakac would point out, we'd love the chance to rerun this particular test on a single rack of enterprise flash drives, but I guess we'll have to wait for that.
Now, what kind of applications do you have in your environment that drive enough I/O to keep 500 very fast spindles extremely busy?
I think he needed to make a point -- VMware wasn't just for small applications any more -- and hadn't been for a while.
The example he used here was network state: the problems created when virtual machines move from server to server and specific network state characteristics can be lost.
VMware had created a basic capability to solve this problem (the vNetwork distributed switch), but had also created the capability for Cisco to deliver a software-based version of its Nexus switch (the 1000V) that ran entirely as a virtual machine.
From a network administrator's point of view, this meant that not only was the network simpler to configure and monitor, but the virtual switch behaved exactly the same as a hardware switch would, except without the hardware part, of course.
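One way to picture why this helps: if the per-port state is keyed to the virtual machine rather than to the physical host it happens to be running on, the state simply follows the VM when it moves. The sketch below is my own toy model of that idea, not how the vNetwork distributed switch or the 1000V is actually implemented.

```python
# Toy model of the idea behind a distributed virtual switch: port state
# (VLAN, counters) is keyed to the VM, not to the host, so a migration
# doesn't lose it. Purely illustrative.

class DistributedSwitch:
    def __init__(self):
        self.ports = {}  # vm name -> port state shared across all hosts

    def connect(self, vm, vlan):
        self.ports.setdefault(vm, {"vlan": vlan, "rx_bytes": 0, "host": None})

    def attach(self, vm, host):
        self.ports[vm]["host"] = host  # a migration just re-points the port

switch = DistributedSwitch()
switch.connect("web01", vlan=120)
switch.attach("web01", "esx01")
switch.ports["web01"]["rx_bytes"] += 4096   # traffic accounted on esx01
switch.attach("web01", "esx02")             # VM moves to another host
print(switch.ports["web01"])                # VLAN and counters survive the move
```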
EMC is particularly intrigued with the idea of putting infrastructure into virtual machines -- Avamar was the first example of more to come -- since all sorts of costs, overhead and complexity come out of a customer environment by doing so.
If you think about it, there's really no reason anymore to demand dedicated hardware appliances for most of IT's housekeeping.
I wonder how many other vendors will see the light, and start moving in that direction?
Paul also showed another related example using vStorage: how EMC used the vStorage APIs to do a better job of delivering virtual (thin) provisioning.
Sure, VMware had their own capabilities, but -- if the customer wanted a more robust implementation, they could select EMC (or anyone who had done the vStorage integration work) to get a seamless solution.
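For readers who haven't run into it, thin (virtual) provisioning just means the backing storage is allocated lazily, as blocks are actually written, rather than up front. A minimal sketch of that behavior follows; it illustrates the concept only, and is not the vStorage APIs or EMC's implementation.

```python
# Minimal sketch of thin provisioning: a large virtual disk only consumes
# backing blocks as they are actually written. Illustration only.

BLOCK = 1 << 20  # 1 MiB allocation unit

class ThinDisk:
    def __init__(self, virtual_size):
        self.virtual_size = virtual_size
        self.blocks = {}  # block index -> bytes actually stored

    def write(self, offset, data):
        self.blocks.setdefault(offset // BLOCK, bytearray(BLOCK))
        # (real code would also copy `data` into the block at the right spot)

    def allocated(self):
        return len(self.blocks) * BLOCK

disk = ThinDisk(virtual_size=100 * 1024**3)   # guest sees 100 GiB
disk.write(0, b"boot sector")
disk.write(50 * 1024**3, b"database page")
print(disk.allocated())  # only 2 MiB of real capacity consumed so far
```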
Again, this pluggable approach works well not only for the aggregation functions (storage, network, compute) but also for the service and policy layer -- an area that EMC continues to exploit with our RSA portfolio (security) as well as our Smarts resource management portfolio.
The fact that VMware recognized this need for a standardized way for other vendors to extend VMware's functionality -- largely from day one -- reflects fairly progressive thinking in the commercial operating system world, in my opinion.
Putting It All Together
He routinely speaks of the power of virtualization to create a "giant computer" or a "software mainframe" that takes industry-standard components and platforms, and makes them do amazing things.
One eye-opening chart was just how big a VMware-based "giant computer" could be this year.
Take a look at the "giant computer" specifications on the left. His use of the term "software mainframe" might be doing the architecture a bit of a disservice :-)
The energy efficiency example he gave was equally eye-opening.
He showed an example of a VMware cluster running a complex VMmark benchmark -- one where lots of tasks ramp up, run for a while, and then ramp down -- designed to simulate what a workday might look like.
Using DPM, it's now possible to dynamically move workloads onto fewer servers and power down the hosts that aren't being used. When the workload expands, those hosts are powered back on, workloads are moved over, and so on.
At a high level, this sort of approach saved about 50% of the energy consumed -- and that's on top of any energy savings already achieved from virtualization in the first place.
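Conceptually, this kind of power management is a packing problem: when aggregate demand drops, pack the running VMs onto fewer hosts and power the empty ones off, then reverse the process as demand grows. Here's a deliberately naive first-fit sketch of that idea (host names and capacities are made up); a real implementation would obviously need to leave headroom and account for power-on latency.

```python
# Toy sketch of the DPM idea: pack VMs onto as few hosts as possible and
# report which hosts could be powered down. Naive first-fit, not DPM.

def consolidate(hosts):
    """Pack all VMs onto as few hosts as possible, then mark spares off."""
    vms = sorted((vm for h in hosts for vm in h["vms"]),
                 key=lambda v: v["demand"], reverse=True)
    for h in hosts:
        h["vms"], h["powered_on"] = [], False
    for vm in vms:                       # first-fit decreasing placement
        for h in hosts:
            if sum(v["demand"] for v in h["vms"]) + vm["demand"] <= h["capacity"]:
                h["vms"].append(vm)
                h["powered_on"] = True
                break
    return [h["name"] for h in hosts if not h["powered_on"]]

hosts = [
    {"name": "esx01", "capacity": 100, "vms": [{"name": "a", "demand": 20}]},
    {"name": "esx02", "capacity": 100, "vms": [{"name": "b", "demand": 30}]},
    {"name": "esx03", "capacity": 100, "vms": [{"name": "c", "demand": 25}]},
]
print(consolidate(hosts))  # ['esx02', 'esx03']: two hosts can be powered down
```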
And then he put up a stunner.
It was possible for one virtual machine to emit an instruction-level trace that could be used to synchronize state on a second (or third, or fourth) virtual machine.
If the first machine failed, the alternate machine would be in the exact same state as the first one, and could take over the workload without the need to reboot and reinitialize.
Now, having spent time with true fault-tolerant architectures (Stratus et al.), the idea of doing this particular trick entirely in software was a tantalizing prospect, to say the least.
And the ability to specify a given application as HA (high availability with reboot), FT (instruction and state synchronized for immediate failover), or even FT+ (always have a hot synchronized virtual machine, even after the first one had failed) -- and to do so on virtually any legacy application, and using industry-standard components -- well, that was certainly something to consider.
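The underlying idea is a record-and-replay pattern: the primary logs its inputs, a secondary replays the same log so its state stays identical, and failover becomes a pointer swap rather than a reboot. Here's a heavily simplified sketch of that pattern; real instruction-level record/replay is far more involved, and the event log here is just an illustration.

```python
# Toy illustration of the fault-tolerance idea described above: the primary
# records every input, a secondary replays the same log so its state stays
# identical, and failover is just a pointer swap. Concept sketch only.

class Replica:
    def __init__(self):
        self.state = 0

    def apply(self, event):
        self.state += event  # deterministic state transition

primary, secondary, log = Replica(), Replica(), []

for event in [5, 7, 3]:          # inputs arriving at the primary
    primary.apply(event)
    log.append(event)            # shipped to the secondary as they occur
    secondary.apply(log[-1])     # secondary replays the same event

assert primary.state == secondary.state == 15
active = secondary               # "failover": secondary takes over in-state,
print(active.state)              # no reboot or re-initialization needed
```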
Now, who wouldn't want this in their private cloud?
My head was starting to hurt by this time, but there was more to come ...
On the topic of management, he felt that customers expected a platform vendor to be the best at managing its own platform, and couldn't leave that entirely to third parties.
And VMware was no exception.
Where he thought VMware had the opportunity to do better than the rest of the industry was to establish core capabilities that spoke in terms that the business wanted to see -- performance, availability, security, cost, etc. -- and still provide an extensible platform for other vendors to participate.
More To Come
Even though I'm now several longish posts into covering this event, there's still more to come.
A lot of material with important implications was shared publicly, and I just want to make sure that as many people as possible have the opportunity to appreciate it, and hopefully let it influence some of their thinking going forward.
I know it's seriously influenced my thinking ...