Yes, I'm going to throw a new term at you -- it's Application Image Management, or AIM.
Before you scream “buzzword alert!”, take a moment and hear me out, please.
For quite a while, we've seen the potential to radically optimize how application images are defined, managed and delivered.
And today, EMC is announcing the acquisition of exceptionally intriguing technology that cuts across multiple disciplines and delivers some eye-popping benefits.
How does the potential of running 3x-6x more virtual machines on the exact same hardware grab you?
What's Going On Here?
Up to now, most of the industry is moving towards a "gold copy" template-based approach for application provisioning.
The idea is deceptively simple: package up your application, your database, your middleware, your agents, your operating system, and anything else you need in a big, honking "gold copy" (think virtual machines), and there you have it.
Then, when it's time to stand up a new server and application in a new role, simply grab the appropriate "gold copy", and you're there.
In one sense, this sort of approach is a vast improvement over the previous handcrafted approach to standing up new application and server images. Lots of legacy vendors are chasing this sort of approach.
We thought that there might be a better way of doing things.
Imagining The Ideal Solution
Well, for starters, we all realize there's a ton of bloat with this sort of approach. These images can get very large, and contain a lot of code and data that's never, ever used.
This is not a trivial concern.
If you accept the premise that the primary barrier to running far more virtual machines per server is a shortage of main memory (one of the architectural premises of the UCS, by the way), cutting anywhere from 20% to 90% of the memory footprint of each and every virtual machine can end up being a Really Big Deal when you think about it -- even if the virtual machines have no shared code or data.
Not only that, these bloated images take longer to transfer over a network, and consume more on-disk storage.
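The back-of-envelope math here is worth making explicit. If main memory really is the bottleneck, then trimming each VM's footprint by a fraction r lets you fit roughly 1 / (1 - r) times as many VMs on the same host. A quick sketch (the reduction percentages are illustrative, not measured numbers):

```python
# Hypothetical consolidation math: if memory is the limiting resource,
# cutting each VM's footprint by a fraction `reduction` means
# 1 / (1 - reduction) times as many VMs fit on the same hardware.
def consolidation_gain(reduction: float) -> float:
    """How many times more VMs fit after shrinking each footprint by `reduction`."""
    return 1.0 / (1.0 - reduction)

for r in (0.20, 0.50, 0.67, 0.83):
    print(f"{r:.0%} smaller footprint -> {consolidation_gain(r):.1f}x more VMs")
```

Note that a 67% reduction gets you about 3x, and 83% about 6x, which is where the 3x-6x range above comes from (assuming, of course, that memory and not CPU or I/O is what caps VM density).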
It's especially frustrating since we know that these application and server images are nothing more than compositions of objects: programs, libraries and other code objects -- some of which are used, many of which are not.
Wouldn't it be great if we could take these combined server/application images, run them through some sort of pre-processor that decomposes them into lists of objects that are actually used, and only assemble the pieces that matter?
We're not just talking about skinny operating systems here (JeOS -- or Just Enough Operating System), we're talking the entire code stack from the device drivers all the way up to the user interface.
That would save a lot of run-time memory, wouldn't it?
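Conceptually, that pre-processor is computing a transitive closure: start from the objects the application actually exercises, follow their dependencies, and drop everything else. A minimal sketch, with entirely made-up object names and a hand-written dependency map:

```python
# Illustrative sketch of the "keep only what's used" idea: given which
# objects an application actually touches, keep the transitive closure of
# their dependencies and leave the rest out of the slimmed-down image.
def used_objects(deps: dict[str, set[str]], entry_points: set[str]) -> set[str]:
    keep, stack = set(), list(entry_points)
    while stack:
        obj = stack.pop()
        if obj not in keep:
            keep.add(obj)
            stack.extend(deps.get(obj, ()))
    return keep

# Toy image: the app pulls in libc and libssl; libfoo ships in the
# "gold copy" but is never loaded, so it never makes the cut.
deps = {"app": {"libc", "libssl"}, "libssl": {"libc"}, "libfoo": {"libc"}}
print(used_objects(deps, {"app"}))
```

The hard part in practice, of course, is discovering that dependency map accurately for arbitrary software, which is exactly the kind of analysis this technology is about.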
Let's take the next step.
Now that each "gold copy" server/application image is decomposed into its constituent objects -- and all the relationships understood -- wouldn't it be great to store all of that in some sort of repository that expressed server/application images as nothing more than a structured assembly of related objects?
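One way to picture such a repository (purely illustrative, not how any particular product stores things): content-addressed objects stored once, with each image reduced to nothing more than a named list of object hashes.

```python
# Illustrative data model: a repository of content-addressed objects,
# where an "image" is just a structured list of object references.
import hashlib

class Repository:
    def __init__(self):
        self.objects = {}   # hash -> object bytes
        self.images = {}    # image name -> list of object hashes

    def add_object(self, data: bytes) -> str:
        h = hashlib.sha256(data).hexdigest()
        self.objects[h] = data          # identical objects are stored once
        return h

    def define_image(self, name: str, object_hashes: list[str]) -> None:
        self.images[name] = object_hashes

repo = Repository()
libc = repo.add_object(b"libc code")
app = repo.add_object(b"app code")
repo.define_image("web-frontend", [libc, app])
repo.define_image("batch-worker", [libc, repo.add_object(b"worker code")])
# Both images reference the single stored copy of libc.
```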
My my my. You'd be able to have the tool look at some sort of arbitrary update or patch, and instantly evaluate whether it mattered to your production environment (or not), and be able to instantly identify the runtime environments it affected.
Now wouldn't that be cool?
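Once images are just lists of objects, that patch-impact question collapses into a simple query. A toy version (image and component names are invented for illustration):

```python
# Hypothetical impact query: which runtime images actually contain the
# object a given patch touches? With a decomposed repository this is a
# straightforward lookup rather than a manual audit.
images = {
    "web-frontend":  {"libc-2.9", "libssl-0.9.8", "httpd-2.2"},
    "batch-worker":  {"libc-2.9", "python-2.6"},
    "report-server": {"libc-2.9", "libssl-0.9.8"},
}

def affected_by(patched_object: str) -> list[str]:
    return sorted(name for name, objs in images.items() if patched_object in objs)

print(affected_by("libssl-0.9.8"))  # only the images that ship the library
print(affected_by("libfoo-1.0"))    # a patch that doesn't matter here
```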
Going even further, it'd be even better if you could drive an automated workflow to put the affected server/application images into a test/qual cycle before putting them into production, and have the ability to do roll-back automatically of everything affected if there turned out to be some sort of issue.
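The promote-with-rollback loop above can be sketched in a few lines (all names are made up; a real workflow engine would add approvals, scheduling and audit trails):

```python
# Sketch of promote-with-rollback: remember the previous image version so
# a failed test/qual cycle automatically restores everything it affected.
def promote(deployed: dict[str, str], image: str, new_version: str, passes_qual) -> str:
    previous = deployed.get(image)
    deployed[image] = new_version          # push into the test/qual cycle
    if passes_qual(image, new_version):
        return "promoted"
    deployed[image] = previous             # automatic roll-back on failure
    return "rolled-back"

deployed = {"web-frontend": "v41"}
print(promote(deployed, "web-frontend", "v42", lambda img, ver: False))
print(deployed["web-frontend"])  # back to the prior version
```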
If we think about the resulting environment, wouldn't IT compliance issues become a whole lot easier? I mean, if someone defines an arbitrary policy, you could enforce many aspects of it automatically at the composite object level, right?
And wouldn't licensing compliance become nothing more than a real-time report against the repository?
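If the repository knows which licensed components each image contains, that report really is just an aggregation query. A toy example with invented component names and entitlement counts:

```python
# Illustrative license report: count licensed components across all
# images in the repository and compare against entitlements.
from collections import Counter

images = {
    "web-frontend":  ["oracle-client", "app-server"],
    "batch-worker":  ["oracle-client"],
    "report-server": ["app-server"],
}
entitlements = {"oracle-client": 2, "app-server": 3}   # licenses owned

in_use = Counter(c for components in images.values() for c in components)
for component, owned in entitlements.items():
    used = in_use[component]
    status = "OK" if used <= owned else "OVER"
    print(f"{component}: using {used} of {owned} licenses [{status}]")
```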
Nice picture. Especially if this can be done to arbitrary software components without any serious restriction.
Hence the name "application image management", or AIM, since -- in effect -- that's what you're doing. Calling it simply "automated provisioning" doesn't do the concept justice, at least in my book.
From a storage perspective, maybe we ought to call it "pre-dupe" rather than "de-dupe"? Compared against what can be done with ordinary disk-based deduplication, we're now able to go so much farther in terms of footprint reduction -- not only on disk, but in memory where it *really* counts.
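To see why that distinction matters, consider some toy numbers (entirely made up, just to show the shape of the comparison): disk deduplication stores shared blocks once, but every running VM still pages its full image into memory. Composing images from only the objects actually used shrinks both disk and, more importantly, RAM.

```python
# Toy comparison of de-dupe vs. "pre-dupe" (all figures hypothetical).
vms = 10
full_image_mb = 2048
used_fraction = 0.30     # assume 70% of each image is never touched
shared_fraction = 0.60   # portion of the full image common across VMs

# Disk dedupe: shared blocks stored once, unique blocks per VM.
dedupe_disk = full_image_mb * (shared_fraction + vms * (1 - shared_fraction))
# But at runtime, each VM still loads its full image into memory.
dedupe_ram = vms * full_image_mb
# Pre-composed images only ever materialize the objects that get used.
predupe_ram = vms * full_image_mb * used_fraction

print(f"de-dupe:  disk {dedupe_disk:.0f} MB, RAM {dedupe_ram:.0f} MB")
print(f"pre-dupe: RAM {predupe_ram:.0f} MB")
```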
What EMC Is Announcing
The press release gives you the basics -- EMC has acquired a fascinating company with a fascinating technology.
FastScale's lead product -- FastScale Composer Suite -- already does most of what I described above.
It does it for newer private clouds built around VMware. It does it for legacy environments, including bare-metal. And it even does it for public clouds, like Amazon's.
Now, place FastScale squarely in the middle of EMC's Ionix suite of next-generation resource management technologies, and the picture gets even more interesting.
For those of you who might have missed the back story, EMC has spent over $1B on acquisition and R+D over the last few years to assemble this legacy-free next-gen portfolio, with names like Voyence, Infra, ConfigureSoft, nLayers and Smarts -- just to name a few.
The underlying premise of Ionix is discovering, correlating and orchestrating underlying infrastructure objects. It creates real-time models of IT operations for all the ITIL-like functions.
In one sense, FastScale will extend Ionix's modeling paradigm to include all manner of runtime code objects.
When we're done integrating FastScale into the underlying object model, application images (and all their constituent parts) just become more objects for us to manage end-to-end.
And not those big, bloated, monolithic traditional server/application images: more pragmatic ones that are decomposed into their constituent parts, assembled as needed in an incredibly optimized fashion, with useful and exploitable metadata around component object roles, relationships and uses.
I can't expect everyone to get as excited as I am about this, but I think a few people will see the picture, and probably get excited as well.
We'll see, won't we?
Something For Everyone
So, where's the payoff for a technology like this? Lots of places, if you think about it.
The server team gets a huge potential win: anywhere from 3x-6x more virtual machines per server with very little effort. That ought to raise a few eyebrows.
The application provisioning team also gets a huge win: their world just went from physical to virtual to model-based. They can now manage provisioned server/application images in much the same way a database manages records.
Producing a new run-time image is somewhat similar to running a report.
The security and compliance team also gets a huge win: arbitrary policies can be enforced in the application image domain quickly and accurately, with full compliance reporting.
But, if we step back a bit, there's even a more interesting picture.
As enterprise IT transitions from physical to virtual to private cloud, we've got another key piece in the next-gen operational model that separates this new fully-virtualized world from the previous physical one.
As EMC works with our partners at VMware and Cisco, we add this important new piece to the overall transformative approach we're all cooperatively investing in, and there's yet another big piece of differentiated value for the private cloud.
IT people now have the potential to do far more with VMware than they could before, and when you fully contemplate Cisco's complementary network-centric template management paradigm (as well as VMware's complementary approach) -- not to mention UCS's big memory model -- it all fits together so neatly and nicely, at least in my eyes.
I love it when a plan comes together.