Every road has its speed bumps: knowing what they are likely to be helps a great deal.
I thought it'd be useful to share my perspective on the issues, concerns and challenges that tend to show up in each phase of the journey.
Predictably, we've got lots of data points for the first phase, somewhat fewer for the second phase, and only a handful of observations for the third phase.
What's This Journey Thing All About?
We've been using a three-phase model to describe how many organizations will proceed from where they are today to a private cloud model.
Phase 1 is simple: IT production. This is the virtualization of the applications that IT owns and aren't connected to named business applications. This usually includes file and print servers, application development and test, and infrastructure management environments.
The details aren't really important, it's the use case: this is all the tier 2 stuff that's lying around and consuming a disproportionate amount of IT resources.
Phase 2 is more challenging -- it's the business production applications. These are the named applications and processes that the business sees -- and depends on to run the business day-to-day.
All of a sudden, the stakes are much higher: service delivery, security, availability, recoverability, performance -- these are the workloads that really matter.
Phase 3 is about changing how IT goes about doing its job -- delivering IT as a service. This means adopting an IaaS / PaaS / SaaS model for most use cases, as well as progressively consuming external IaaS / PaaS / SaaS services as they make sense.
With that framework in mind, let's dive in and discuss some of the more frequent speed bumps and pot holes along the way :-)
Phase 1 -- IT Production
Characteristically, there's not a lot of budget or resource here, so the playbook is pretty much the same everywhere: virtualize the "tier 2" landscape in a stepwise, incremental fashion.
"Works with what I've got already" is a key theme here -- there's very little interest in considering new classes of servers, networks and (usually) storage. Certainly, no one's going to invest in new management frameworks, or the processes to go use them effectively.
Since the goal here is primarily resource efficiency, all of the discussion tends to focus on that metric: how many servers did we consolidate, how efficiently are we using storage, etc.
Nothing wrong with that. That being said, I usually see two things that IT organizations fail to do during phase 1 that cause issues down the road.
The first is brain-dead simple: IT organizations sometimes don't measure and/or audit the impact of their activities. Typically, enormous savings are generated in terms of resource utilization and process speed during this phase.
You don't get credit for what you don't measure.
Doing this -- whether it's a formal internal process, or you engage outside resources -- is turning out to be important.
Documenting the savings gives senior management confidence to invest further to get more benefit -- something you'll need down the road.
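The arithmetic behind that "before and after" story is worth making explicit. Here's a minimal sketch of a consolidation-savings calculation -- all figures are hypothetical placeholders for illustration, not EMC's actual numbers:

```python
# Illustrative sketch: quantifying phase 1 consolidation savings.
# Every input value here is a hypothetical placeholder -- plug in
# your own audited before/after numbers.

def consolidation_savings(physical_before, hosts_after,
                          cost_per_server_per_year):
    """Return the consolidation ratio and annual infrastructure savings."""
    ratio = physical_before / hosts_after
    servers_retired = physical_before - hosts_after
    annual_savings = servers_retired * cost_per_server_per_year
    return ratio, annual_savings

ratio, savings = consolidation_savings(
    physical_before=400,           # standalone tier 2 servers before
    hosts_after=40,                # virtualization hosts after
    cost_per_server_per_year=3000  # power, cooling, maintenance per box
)
print(f"{ratio:.0f}:1 consolidation, ${savings:,.0f} saved per year")
```

Even a back-of-the-envelope model like this, backed by measured data, is what turns "we virtualized some servers" into a number senior management can act on.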
As an example, EMC's IT organization routinely engages outside analysts to document the "before and after" of our bigger projects, and phase 1 virtualization is no exception. The example shown here is from EMC IT -- and there's a detailed document behind the headlines.

Imagine the IT leaders in front of the executive team: "Here's how much money we saved and value we created during our phase 1 activities. I'm here before you today to get more resources to move to phase 2."
If you're an IT leader, that's a discussion you want to be able to have.
The second concern is a bit more esoteric, but equally important: the validation of the technologies and processes you'll need in phase 2.
Think about it: if we're talking virtualization of big, hairy production applications, just about every aspect of IT service delivery has to be upgraded: backup and data protection, service delivery management, security and compliance -- the list stretches on.
On one hand, there's very little need to do these activities if all you want to do is tackle phase 1 activities and call it a day. On the other hand, if you see phase 1 as simply the first step in a longer journey, you're going to want to carve off resources to implement and validate key technologies and processes you'll need in phase 2.
Combined, these two omissions tend to hold IT organizations back: they're not getting credit for the work they've already done, and they didn't invest in preparing for the future before it actually arrived!
Phase 2 -- Business Production
When I meet an IT organization that's in the throes of phase 2 activities, it's a fun discussion to be sure. They're living the dream, so to speak :-)
And the opportunities to miss a trick here or there seem to multiply as a result. I'll start with the big ones, and work my way down.
First and foremost, the justification model changes significantly in phase 2.
In phase 1, it was all about resource efficiency. Although there are additional resource efficiencies to be had in phase 2, the primary motivation is delivering better IT: more responsive, more highly available, better secured, easier to migrate, easier to manage, etc.
Success metrics tend to be self-fulfilling prophecies, so I am of the belief that IT leadership has to be very clear as to what the goals of phase 2 are, and -- more importantly -- how they are distinctly different from the goals of phase 1.
Second, and almost equally important, phase 2 is about process change.
In a nutshell, you are re-engineering the fundamental processes of how IT gets done. No advanced technology will do this for you automatically. When we're talking about significant process change, we're really talking about the organizational changes that support it. If you organize for success, the processes and technologies will follow.
Make no mistake -- most IT disciplines and functions are affected to some degree -- management, security, data protection, service delivery, capacity planning -- the list goes on.
When you first propose virtualizing business-critical applications, what you tend to hear are concerns about the maturity of the technology. Sure, the technology can always be better, but more than enough has been proven to get on with it.
Understand the real concern: roles, responsibilities and required skill sets will change significantly as well, and that's not everyone's cup of tea.
Going down the list a bit, there's the concern around dealing with the legacy. With so much tied up in running the existing environment, how will we ever make progress on the new one? Aren't we creating yet another silo or stovepipe that has to be managed? Don't we have to rip and replace our entire environment to get there?
It's a big concern in people's minds, and has to be addressed. I recently wrote a post on this topic alone, simply because it's one of those classic perceptual walls that some organizations run into.
Finally, in some cases, we have business-oriented application owners who may object strenuously to the notion of virtualizing their core applications. The natural response is to convince these people that it's going to be OK; the better approach is to tell them what's in it for them: better performance, faster upgrades to new functionality, easier development environment, the ability to grow quickly without disruption, etc.
Phase 3 -- IT As A Service
There's only a handful of customers I've met who are wrestling with this phase, so -- understandably -- there's only a small sampling of potential speed bumps to talk about. More will come in the future, so consider this a preview ...
The role of good IT governance and its associated processes becomes much more critical in this phase. One concern is IT resource utilization -- when it's far easier to consume IT, more IT will tend to be consumed. In some cases this is desirable (e.g. it creates value for the business); in other cases it's a waste of valuable resources.
Another issue that the IT governance model will have to tackle is control points: what control points should remain with IT, and which ones should be given to proficient users? There's no right answer here, just the need for a framework that incorporates multiple models.
Certainly, one governance model is needed between IT and the business, but in many cases a second one will be needed between IT and the growing capabilities of external service providers.
What's the framework for evaluating which IT capabilities and resources should be given to a service provider, and which ones need to stay with internal IT?
It's almost impossible to make a case that everything should go, or that everything should stay -- so a framework will be needed as more attractive external services become available, and as IT organizations mature in their ability to consume them.
And, of course, the tools and processes put in place during phase 2 need to extend their capabilities to control and manage external service providers as they're incorporated into the overall enterprise IT fabric.
In a world where the business dynamically consumes IT resources, business users will need a good handle on what they're consuming, and the economic implications of their choices. Whether this is a chargeback or showback model is less important -- putting IT consumers in charge of their choices is the goal here, and that requires yet more process change.
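Mechanically, a showback model is just a roll-up of metered consumption against a rate card. Here's a minimal sketch -- the rates, metrics and usage records are hypothetical, and a real implementation would pull from your metering and CMDB systems:

```python
# Illustrative showback sketch: rolling up metered resource consumption
# into a monthly cost view per business unit. All rates and usage
# records below are hypothetical examples.

RATES = {"vcpu_hours": 0.05, "gb_ram_hours": 0.01, "gb_storage_months": 0.10}

usage = [
    {"dept": "sales",   "vcpu_hours": 2000, "gb_ram_hours": 8000, "gb_storage_months": 500},
    {"dept": "finance", "vcpu_hours": 1200, "gb_ram_hours": 4800, "gb_storage_months": 900},
]

def showback(records, rates):
    """Return {dept: monthly cost} -- visibility only, no invoices issued."""
    report = {}
    for rec in records:
        cost = sum(rec[metric] * rate for metric, rate in rates.items())
        report[rec["dept"]] = round(report.get(rec["dept"], 0.0) + cost, 2)
    return report

print(showback(usage, RATES))
```

The same roll-up becomes chargeback the moment finance actually bills against it; the accounting is identical, only the process around it changes -- which is exactly why the process change is the hard part.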
The last speed bump in this list really isn't an issue, it's an opportunity.
A few of these IT organizations have found that -- given their new portfolio of capabilities -- they can drive an entirely new engagement with the business around what is now possible. More projects can be supported, more initiatives can be considered -- things can move faster and more efficiently than ever before. Business users have to re-calibrate how to use these new capabilities to re-think their overall strategies and game plans.
And, if you're in the business of using IT to create competitive advantage, that's a very good thing indeed :-)
Where are you in your journey?