The challenging part -- as they look around their data centers, all they can see is the legacy of past decisions: incompatible architectures, stove-pipe stacks and the rest. It's like an archeology project: you can see the layers of different waves that have come through the IT organization.
How can you move forward when all you can see is history?
This Is Turning Out To Be A Key Discussion
The problem is simple: so much of the investment is tied up in stuff that's incompatible with the new future state. This "all I see is legacy" situation is frequent, and can easily paralyze an IT organization.
In general, getting people to move forward comes down to two interrelated activities:
1) Articulate the future state -- what it is, why it needs to be, and how it is better than the current state.
2) Show people an incremental path to that future state, and the benefits that result from each step.
But there are a few ways I've found to break the logjam. Here's how I do it.
Agreeing On The Future State
I usually start the conversation with "imagine it's three years from now". A bit out there in the future, but not so far away if you think about it.
What is the dominant processor architecture in the data center?
Most people instinctively say "Intel, naturally". SPARC and Itanium don't appear long for this world, and mainframes will always be around in some capacity. That leaves IBM valiantly defending its Power franchise running AIX, along with a smattering of niche platforms.
So, generally speaking, in three years, we've got a handful of mainframes at the top of the pyramid, perhaps some IBM AIX systems that haven't been migrated yet, and the rest is Intel-compatible architectures as far as the eye can see. I'm skipping over the odd iSeries, Tandem or other specialized platform that refuses to budge, but you get the picture.
Some organizations will be entirely Intel-based. Others will be predominantly Intel, with a smattering of non-Intel platforms that do specific roles in the business. The exceptions to these generalities will likely be few and far between.
How Do You Want To Do It?
If the majority of your data center is going to be Intel-compatible in three years, how do you want to do it: old school or new school?
Old school: physically separated resources bound to specific applications; each discipline managed as a traditional silo, all physical assets and associated resources charged back to business users.
New school: fully virtualized pools of resources shared by the majority of applications, IT disciplines re-worked to deliver services vs. manage individual technologies, resource consumption based on what you actually use.
Most people instinctively say "new school" when presented with that choice. Getting there might not be easy, but that one tends to be a straightforward choice.
Now, how do we get from where we are today, to where we want to be in three years?
Capping The Legacy
New workloads continually come into the IT environment, and there are choices as to where they go. Over time, popular platforms tend to grow as a result, and less-popular platforms tend to wither and shrink.
You'll often hear IT shops express this thought as "virtual first": all new workloads and activities are directed to the virtualization platform, unless there's a damn good reason to do otherwise. Over time, the "damn good reason" bar tends to get higher, and -- as a result -- almost all new or incremental activities end up fully virtualized.
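A "virtual first" policy is really just a default-with-exceptions decision rule. Here's a minimal sketch of that idea in Python; the exception reasons and platform names are invented for illustration, not drawn from any real placement tool.

```python
# Hypothetical "virtual first" placement rule: everything lands on the
# shared virtual pool unless the requester cites a pre-approved,
# "damn good reason" exception. Raising the bar over time is just
# shrinking this set.
APPROVED_EXCEPTIONS = {
    "vendor-certified-physical-only",  # vendor won't support the app virtualized
    "extreme-io-latency",              # measured need the shared pool can't meet
}

def place_workload(name, exception_reason=None):
    """Return (workload, target platform) for a new workload request."""
    if exception_reason in APPROVED_EXCEPTIONS:
        return (name, "dedicated-physical")
    # Default path: no exception (or an unapproved one) goes virtual.
    return (name, "virtual-pool")

print(place_workload("new-crm-frontend"))
print(place_workload("trading-engine", "extreme-io-latency"))
print(place_workload("hr-portal", "we just prefer physical"))
```

Note the third call: an excuse that isn't on the approved list still goes to the virtual pool, which is exactly how the bar gets more onerous in practice.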
Surrounding The Legacy
Many important applications are actually a combination of related activities: user front ends, decision support back ends, application development and testing activities, and the like. Indeed, the ratio between the "core" and its "surround" can be stunning, especially in environments like SAP.
Well, maybe the core database engine can't come over to a virtualized Intel environment right away, but everything else sure can. Sure, there are a handful of situations where the testing team wants an exact clone of the production environment, but -- under scrutiny -- the vast majority of surrounding stuff can easily move to a newer environment.
Replatforming The Legacy
Big enterprise apps have a finite shelf life. After a number of years, they usually need to be re-platformed: a combination of upgrades to modern versions of the operating system, the database and middleware, the supporting software stack, and so on.
No one is arguing for a rip-and-replace of perfectly functional application stacks in pursuit of the new environment; move these core apps over when it makes sense down the road.
Creating A Place To Start
Well, if all of this stuff is going to start finding its way over to the "new thing", it had better be in place and functioning well. As the legacy gets capped and all those new workloads come in, they're going to need to be sent somewhere. As various teams start surrounding the legacy, that work is going to have to end up somewhere as well.
And, finally, if we're going to seriously consider re-platforming on something Intel-based and highly virtualized, we're going to have to have something in place that's very mature from a technology and operational process perspective.
Getting that proven, enterprise-class, fully virtualized platform in place is something that needs to happen sooner rather than later. There's no time for the usual beauty pageant of individual technologies, and no time to slowly evolve operational processes around the new way of doing things.
The sooner the new platform is in and working, the sooner the workload capture and migrations can start, and the sooner the organization can start to reap the benefits.
If you know where you want to go, and understand the benefits of getting there, it all boils down to getting there faster.
Speed requires focus, and smart IT leaders will focus the organization on the stuff that matters: defining and maturing the new operational processes, for example. Or maturing the new capability quickly enough that it can begin to scoop up increasing portions of the environment.
If you understand this concept, you'll appreciate why things like Vblocks exist, and why they usually come packaged with professional services to build, operate and transfer them back to the IT organization as quickly as possible.
VCE, Vblocks and everything else are all about acceleration: helping IT customers get to their next IT model as quickly as possible.
Putting It Differently?
I am told I am somewhat known for the crazy stuff that sometimes comes out of my mouth.
Getting to a private cloud model isn't really about dealing with your existing legacy -- it's about getting to the "new legacy" as quickly as possible.
Food for thought.