These days, EMC has meaningful investments in an increasingly wide range of technology areas, far beyond our original enterprise storage roots: virtualization, security, big data analytics, applications, data protection, etc.
This fascinating diversity has presented a certain challenge for audiences who want a quick tour of how we see various trends at work in different parts of the stack, and -- more importantly -- how they come together to fundamentally alter the IT delivery model.
It ain't just about storage anymore ...
Bringing together all of this information can be a yeoman’s task, but we're starting to make headway on integrating different perspectives across EMC groups. The result is rather panoramic in nature, but good reading if you like broad vistas.
As part of the team, I offered to write up the current deliverable in the form of a blog post -- albeit an extremely lengthy one ...
Disclaimers: timeframes are relevant here -- most of these thoughts are about how we see the world evolving over the next 12-36 months. None of this should ever be construed as a commitment by EMC to deliver specific capabilities. Just to be clear, the longer-term perspective (3-5+ years) isn't covered here, although you can glean a few hints here and there ...
Not to disappoint you, but most of this material has appeared in different forms already -- both here on this blog and elsewhere. A few bits are relatively new.
If we had a few hours together and wanted to talk big-picture tech, this is the deck I would use.
For everyone else -- here's the writeup ...
IT Is Being Transformed
If you're a regular reader of this blog, this is not exactly a new idea. But it's a good starting point for the discussion at hand.
In addition to the plethora of new technologies, we're adding new consumption models (as-a-service, cloud, etc.) and new operational models to the mix, along with new desired business outcomes.
Concepts like "mobile knowledge workers" and "big data analytics" weren't part of the mainstream discussion a few years ago; now they are.
More importantly, there's a visible shift towards digital business models -- a complete re-envisioning of the business using completely digital constructs -- and that changes the game for IT as we know it.
Not only is the world changing, the pace of change seems to have picked up significantly. This has put most traditional IT functions in a tough spot, strategically speaking.
Most of the historical rationale for IT investment has been around saving money: take something being done in the physical world, automate it, and do it for less.
Just consider how many IT functions report to the CFO, for example.
But while the need to be more efficient really hasn't gone away (will it ever?), there's now much more demand to deliver IT capabilities that the business can use to drive revenue in new and interesting ways.
And in this new IT world, agility rules above all else.
So, with those big thoughts in mind, let's start our tour.
Infrastructure Is Being Transformed
We're all somewhat familiar with the cloud technologies that are now so popular in the market: standard technology components, virtualization, etc.
From an IT practitioner's view, the infrastructure roadmap up to this point could be generically thought of as a three-step process.
First, standardize as much as possible, almost always on an Intel processor base. Thankfully, those annoying chip architecture debates have largely subsided, and we can move on.
Second, progressively virtualize applications on that standardized plumbing -- saving money and increasing agility.
Third, automate as much as possible to drive even further efficiency and responsiveness.
Sure, it's never as easy as it looks here, but it's not a bad representation of the general direction we've seen so far.
But, if you think about it, all we're really doing here is putting familiar application constructs into nice virtual containers, and reconstructing the surrounding orchestration around the new containers.
Nice ... but from an infrastructure perspective, we can do more -- much more.
Enter the current discussion around software-defined data centers.
The idea is simple yet powerful: take other familiar infrastructure entities, and re-envision them as virtualized capabilities that can be dynamically invoked and orchestrated.
Now you're not only expressing applications as virtualized constructs, you're also expressing the infrastructure services they consume as virtualized constructs.
The potential result? Even more efficiency -- and more agility.
Another way of looking at software-defined data centers is to contrast them against the familiar, historical approach.
It's not uncommon to walk into an enterprise IT environment, and see multiple hard-wired stacks: one for each set of applications that IT has to run.
Here's the SAP stack, here's the Exchange stack, and so on.
Sure, maybe there's some resource commonality and pooling going on at different layers, but it's certainly neither pervasive nor architected.
In this traditional model, resources -- hardware and software capabilities -- are tightly bound to the application objects they support.
It's not hard to appreciate that this approach isn't ideal from either an efficiency or an agility perspective, but that's the way things have largely been done over the years for a variety of reasons.
Now, compare that perspective with a simplified view of a software-defined data center.
In this model, the goal is to create a single logical pool of dynamically instantiated and orchestrated infrastructure resources that is available to *all* workloads, regardless of their individual characteristics.
This workload requires great transactional performance, advanced data protection and advanced security. This other workload requires decent bandwidth, reasonable data protection and moderate security. And this other workload starts out requiring one thing, but grows to a point where it now needs something very different.
Rather than building static stacks for each requirement, the goal is to dynamically provision the service levels and infrastructure resources as pure software constructs vs. a bunch of specialized tin.
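If you like thinking in code, here's a minimal sketch of the idea -- every name and number below is a hypothetical illustration, not an actual EMC or VMware API:

```python
# A minimal sketch of policy-driven provisioning from one shared pool --
# all names and numbers are hypothetical illustrations.

class ResourcePool:
    """One shared, logical pool of infrastructure capacity."""
    def __init__(self, iops_available):
        self.iops_available = iops_available

    def allocate(self, workload, policy):
        # A real orchestrator would carve compute, storage and network
        # services out of the pool dynamically; we track one dimension.
        if policy["iops"] > self.iops_available:
            raise RuntimeError("insufficient pooled capacity")
        self.iops_available -= policy["iops"]
        return {"workload": workload, **policy}

pool = ResourcePool(iops_available=100_000)

# Two very different workloads, provisioned from the same pool:
pool.allocate("order-entry", {"iops": 50_000, "protection": "continuous", "security": "high"})
pool.allocate("web-frontend", {"iops": 5_000, "protection": "daily-snap", "security": "moderate"})

# A workload that grows simply re-declares its policy -- no new stack required.
```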
In some sense, notions around software-defined data centers are simply a deeper unpacking of familiar cloud concepts, perhaps without the unicorns?
How will these concepts be expressed in terms of physical data centers? Good question.
Certainly, there are those who argue for an IT landscape dominated by a handful of very large cloud providers, pointing to economies of scale.
I don't agree, and I'm not alone.
If we look at examples outside of IT, that hyper-consolidation of providers hasn't happened in other mature infrastructure industries: power generation and distribution, communications, air transportation, etc. -- and for good reasons.
No, instead we'll see perhaps thousands of public clouds -- offered to all comers -- and perhaps hundreds of thousands of private clouds. Barriers to adoption (and consumption) will continue to fall, and demand for easy-to-consume IT services will inevitably increase.
Here at EMC, we've chosen to enable both models simultaneously: partnering with our customers to help them acquire and operate their own private clouds, as well as partnering to establish a compatible ecosystem of external IT service providers. Up to this point, both approaches have done quite well, thank you.
Note that -- unlike other large IT vendors -- we don't intend to offer EMC-branded cloud IT services, and for very obvious reasons.
From a pure technology perspective, we've put a lot of thought into how various infrastructure technologies get packaged together, integrated and made easy to consume for both enterprise IT organizations and service providers alike.
In addition to the familiar a-la-carte menu of ingredients, we currently offer two flavors of integrated packaging.
One is VSPEX: a set of pre-qualified reference architectures, assembled by our partners, that offer considerable choice in server, hypervisor, storage network, etc. The other is the pre-integrated VCE Vblock, which is essentially sold and supported as a single "cloud infrastructure product" -- again, through and with partners.
Operations Are Being Transformed
I think by now there's wide recognition that the operations model for any cloud is considerably different than the legacy one that preceded it. Not only is it distinct, it's the foundation for much of the efficiency and agility advantage that can result from cloud infrastructure.
Many of these resource domains (server, storage, network, security) have rich, robust management environments of their own.
These are well established disciplines, to be sure.
The predominant direction -- up to this point -- has been to expose underlying resource management domains upwards to a higher-level integrator (such as vCloud Director) to surface capabilities, drive workflows and so on. And, compared with previous approaches, it's a vast improvement.
But there's always room to do better.
If we go back to our software-defined data center construct, one aspect of what's really happening here is that a good portion of the embedded intelligence and functionality -- and the corresponding management constructs -- of each of these domains gets abstracted and virtualized as well.
If I were to hazard a real-world analogue to the concept, it'd be the popular "war room" approach: when you've got an important crisis, you put all the responsible people in one logical place where they can interact, orchestrate and flex naturally, vs. being decentralized, distributed -- and, frequently, disconnected.
Not the greatest analogy, I know -- I'm still working on it :)
Back to our software-defined data center: this deeper abstraction of function creates the potential for newer orchestration and operations constructs that are even more efficient, optimized and responsive than the previous model. Examples might include dynamically realigning storage service delivery levels against the same pool of assets, or dynamically reconfiguring network assets based on transient workloads.
The orchestration and the resources being orchestrated are being brought even closer together through deeper levels of abstraction, creating the capabilities for superior operational results: efficiency and agility.
Software Defined Storage
In particular, we're now being asked for our point of view on software defined storage, which is to be expected, given EMC's position in the marketplace. Here's one view of how we see things evolving ...
First, we think there will be a need for key abstractions that "wrap" today's world of purpose-built storage arrays and internal devices, and provide a consistent set of storage services regardless of the underlying storage device.
I'd be tempted to use the term "storage virtualization" to describe the idea, but that's already been tortured to death :)
The middle bits in the picture are the interesting parts. Imagine a single control plane (passive reporting first, dynamic reconfiguration to follow) that's largely agnostic to the underlying (usually intelligent) storage device.
Next, imagine the ability to create abstracted data-plane presentations of those resource pools (the familiar block, file, object) regardless of whether the underlying device supported it natively, or not.
Those capabilities would be exposed via RESTful APIs to the orchestration layer (VMware's vCloud Suite in this particular representation), which would then supply the necessary coordination and orchestration to deliver the composite infrastructure services (and presentation) required.
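To make that concrete, here's a hedged sketch of what such a call might look like from a client's perspective -- the endpoint, payload and field names are all hypothetical, not an actual EMC, VMware or OpenStack API:

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint and payload, purely for illustration.
resp = requests.post(
    "https://storage-controller.example.com/api/v1/volumes",
    json={
        "capacity_gb": 500,
        "presentation": "block",   # block, file or object
        "service_level": "gold",   # maps to performance and protection tiers
    },
    headers={"Authorization": "Bearer <token>"},
)
resp.raise_for_status()
print(resp.json()["volume_id"])  # the orchestration layer would consume this
```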
Inevitably, more of the intelligence found in today's storage arrays will be expressible as pure virtual machines, if desired. You can see that idea at work today in the present crop of VSAs -- virtual storage appliances.
One way of describing the thought is that traditional storage array functionality can "float upwards" via a virtual machine running on a server farm. Conversely, an application function that is data intensive (and not CPU intensive) that is encapsulated in a virtual machine can conceptually "float downward" into the array itself, providing a potentially superior level of performance for specific tasks.
In both cases, virtualization serves as a convenient abstraction that enables functionality to be run where it belongs, and not be rooted to a specific type of hardware device.
Now, there are certain extremists who might claim that all purpose-built storage devices immediately become obsolete and/or commoditized.
Not exactly the case from where I sit -- all that's changed is that there's now another set of consumption options (and optimization points) available to infrastructure architects.
And, with a bit of consideration, you'll probably realize that there's plenty of room for differentiated capabilities both above and below the virtualization abstraction point.
Just to make sure that nobody thinks we're playing favorites, here's another representation of the same conceptual picture, this time using OpenStack constructs vs. VMware's.
While I think many customers appreciate a tight integration of their chosen technologies from the vendor community, nobody wants to be locked in at one layer because they've adopted another layer.
Which brings us to storage itself ...
Storage Is Being Transformed
There's a lot happening in the storage world these days, and it can take some time to explain it all.
We recently came up with this one-slide representation in an attempt to briefly summarize all the major transformations we think are going on in our familiar storage market. It's quite a list.
A while back, I wrote a blog post on this ("The Seven Shifts In Storage"), but -- in the interest of brevity -- here are the highlights:
1) The consumption model is changing to easy-to-consume and easy-to-control storage services -- whether those are delivered internally by the IT organization or externally by an IT service provider.
2) Domain-specific storage operations are being augmented by services exposed by RESTful APIs, as described above.
3) Distance-overcoming technologies are being introduced at the storage layer, allowing newer active-active models for resource balancing and failover.
4) Convergence is causing physical storage to become tightly integrated, managed and supported with other infrastructure components such as servers and networks.
5) Scale-up architectures are giving way to scale-out approaches as data volumes grow and simplicity becomes paramount.
6) The underlying technology base is moving away from proprietary ASICs and FPGAs towards the industry-standard componentry found in servers. The resulting software stacks are virtualizable and can potentially run on standard servers, while also enabling new workloads to migrate into the array itself.
7) The underlying media types are rapidly changing: from tape to disk, and from disk to flash.
While any one of these topics is worthy of a lengthy dissertation -- for the purposes of this discussion -- let's just take *one* of these key transformations (flash) and do a bit of a deep dive.
Flash Is Transforming Storage
Storage costs usually have to be evaluated against three criteria: cost per unit capacity, cost per unit performance and cost per degree of protection. Flash storage revolutionizes the second metric (performance).
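A quick bit of illustrative arithmetic (with made-up prices and specs, not actual quotes) shows why:

```python
# Illustrative arithmetic only -- these prices and specs are invented.
disk  = {"usd": 300, "gb": 2000, "iops": 150}     # a nominal 7.2K RPM drive
flash = {"usd": 900, "gb": 400,  "iops": 50_000}  # a nominal enterprise SSD

for name, d in (("disk", disk), ("flash", flash)):
    print(f"{name}: ${d['usd'] / d['gb']:.2f}/GB, ${d['usd'] / d['iops']:.4f}/IOPS")

# disk:  $0.15/GB, $2.0000/IOPS
# flash: $2.25/GB, $0.0180/IOPS  <- the performance economics flip completely
```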
You've likely seen the before-and-after performance charts, so I won't bore you here.
The bar for what's achievable has obviously been raised dramatically, and the cost of delivering a given measure of storage performance has fallen accordingly. Yes, there are always exceptions, but we're focusing on broad trends here vs. specific corner cases.
There's also clear recognition that there's no proverbial "best place" to put the stuff in the storage hierarchy -- in the array, in the storage network, in the server, etc.
For many familiar use cases, there's a well-understood benefit for mixing in a bit of flash with more traditional disk drives via a hybrid storage array.
Thanks to our good friend LoR (locality of reference, or data skew), a small amount of flash combined with intelligent software can often result in eye-popping performance benefits.
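A back-of-the-envelope sketch, with assumed (not measured) latencies, shows the effect:

```python
# Back-of-the-envelope numbers, assumed rather than measured.
FLASH_MS, DISK_MS = 0.2, 8.0

def effective_latency(hit_ratio):
    """Average I/O latency for a hybrid array at a given cache hit ratio."""
    return hit_ratio * FLASH_MS + (1 - hit_ratio) * DISK_MS

for h in (0.0, 0.80, 0.95):
    print(f"hit ratio {h:.0%}: {effective_latency(h):.2f} ms")

# hit ratio 0%:  8.00 ms
# hit ratio 80%: 1.76 ms  <- a little flash goes a long way when data is skewed
# hit ratio 95%: 0.59 ms
```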
But if your need for speed is extreme, there's nothing faster in the storage world than a server-based flash pool accessed over the internal PCIe bus, vs. traditional storage network I/O. The only thing faster would be terabytes of volatile server DRAM, which -- technically speaking -- is hard to consider storage, as it isn't persistent.
Whether that server flash is delivered via an in-server PCIe card -- or perhaps pooled as an external shared caching device via a low-latency interconnect -- we all believe that's the near-ultimate in storage speed.
Between those two extremes, we think there's also room for a fresh-sheet-of-paper all-flash design that doesn't have to worry itself about supporting the legacy rotating rust devices -- i.e. disks.
Enter "Project X" -- an all-flash design we've been previewing in certain venues, available next year.
I did a rather extensive writeup on this a while back -- and everyone who's seen one wants one, naturally -- but there are some important ideas at work here: scale-out design, serious CPU to support in-line dedupe for primary storage, ridiculously simplified administration model, and so on.
As with all new shiny things, there's the inevitable question of "where does it fit?", so here's where we think these all-flash storage arrays will fit best at the outset.
First, data sets that don't align well with PCIe-based caching approaches: data set too big, need data protection, etc.
Second, a distinct bias to random I/O patterns -- read *and* write.
Third -- and most importantly -- the business need for predictable low latency access to *all* the data, which largely precludes hybrid tiering approaches.
Looking at all of these different flash approaches, you might think we're choosing favorites. We're most definitely not. Each approach has distinct pros and cons in the context of specific use cases, and there's no convenient one-size-fits-all answer that we're currently aware of.
We also believe it's important to invest in software integration so customers can use any or all of these different approaches together.
Data Protection Is Being Transformed
All sorts of bad things can happen to data, and -- when they inevitably do -- there's a certain interest in getting it back online as soon as possible. But, as with most things, speed costs, doesn't it?
Much like flash is transforming the performance equation for primary storage, a group of interesting technologies are doing a good job of transforming the performance equation for data protection.
Let's take a look ...
By now, it should be clear that there's a secular trend away from tape-as-backup in favor of deduplicated-disk as backup.
While I do think there's a justifiable claim around superior economics for many situations, the real motivation (as I see it) is improved -- and predictable! -- recovery times.
Disks are random-access devices, tapes are not. Case closed -- at least for this use case. Tape as applied to long-term, read-almost-never archives -- well, I'm more open to discussion.
Maybe you've heard the vendor phrase "backup is dead". While I see that as sort of a dog-whistle for vendors clamoring for attention, the underlying truth is that there's growing adoption of advanced continuous replication models (whether local or remote) that complement traditional backup/snap approaches.
One way of thinking about the difference is taking a movie vs. taking a single picture.
The acronyms used to describe the technology don't help with approachability -- CDP and CRR, for example -- but the idea is simple enough: continual logging of incremental state changes to data (at a very low level) that can be flexibly recovered or replayed to any point in time.
As disk costs continue to fall (both raw media costs and with deduplication and compression), the economics of this approach get more attractive every year -- as evidenced by the widespread popularity of technologies such as EMC's RecoverPoint.
More importantly, this data protection approach provides better recovery granularity (e.g. arbitrarily choose your point of restoration) that isn't a prisoner of the "scheduled backup window" or "scheduled snap" approach.
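For the curious, here's a toy sketch of the journal-and-replay idea -- purely illustrative, and emphatically not how RecoverPoint is actually implemented:

```python
# A toy continuous-protection journal -- purely illustrative.
class Journal:
    def __init__(self):
        self._log = []  # (timestamp, block, data), appended in time order

    def record(self, ts, block, data):
        self._log.append((ts, block, data))

    def restore_to(self, point_in_time):
        """Replay every logged write up to an *arbitrary* point in time."""
        volume = {}
        for ts, block, data in self._log:
            if ts > point_in_time:
                break
            volume[block] = data
        return volume

j = Journal()
j.record(ts=1, block=0, data="v1")
j.record(ts=2, block=0, data="v2 -- oops, corruption!")
print(j.restore_to(point_in_time=1))  # {0: 'v1'} -- recover to just before the damage
```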
And, finally, if you've spent any time with distance-dissolving technologies such as VPLEX, you'll realize that one of its most powerful use cases is the near-zero-downtime stretched cluster use case. While not technically data protection, I've included it here as it's an important component in restoring immediate information access: servers, apps, network, etc.
Data Protection As A Service
Not only are the data protection technologies changing, but also the operational and consumption models. The ITaaS transition is affecting all the traditional IT disciplines, and data protection is certainly no exception.
The same familiar principles apply, only in different form: creating a broad range of service catalog offerings, making them easy to consume across the organization, full transparency back to the consumer, variable consumption, metering, etc.
In this sort of model, application owners are responsible for determining and provisioning their own protection requirements, without necessarily requiring an extended workflow through the centralized data protection team.
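Here's a hypothetical sketch of what such a self-service catalog might look like -- the tiers and numbers are invented for illustration:

```python
# A hypothetical protection service catalog -- tiers and numbers invented.
CATALOG = {
    "bronze": {"rpo_hours": 24,  "rto_hours": 8.0, "offsite": False},
    "silver": {"rpo_hours": 4,   "rto_hours": 2.0, "offsite": True},
    "gold":   {"rpo_hours": 0.1, "rto_hours": 0.5, "offsite": True},  # near-continuous
}

def self_provision(app, tier):
    """An application owner picks a tier -- no ticket to the backup team."""
    sla = CATALOG[tier]
    print(f"{app}: RPO <= {sla['rpo_hours']}h, RTO <= {sla['rto_hours']}h, "
          f"offsite={sla['offsite']}")
    return sla

self_provision("expense-reporting", "bronze")
self_provision("order-entry", "gold")
```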
As with any internal IT service provider function, the role of the data protection team quickly becomes focused on driving wider and appropriate consumption, improving the suite of offerings in the service catalog, looking for ways to reduce costs and improve proficiency, and so on.
Since data protection generally improves as distance increases, there's a growing role for external service providers to play: everything from remote DR to simply a safe place to stash recovery images. Simply put, there's far less rationale to attempt to do everything yourself.
Security Is Being Transformed
Everything I personally believed about IT security has been largely obsoleted in the last few years. Hopefully you'll agree with that statement -- or, if not, the security professionals in your organization will.
Valuable information is the new prize. We now live in a world where well-organized and well-funded groups target specific organizations using a wide variety of technologies and social engineering.
Frequently referred to as APTs (advanced persistent threats), they're now responsible for a substantial change in overall security philosophy and strategy. Combine the shift in threats with a corresponding shift in IT approach (e.g. cloud, IT as a service), and many of us see a unique opportunity to reconstruct IT service delivery in such a way that the IT services are transparently secured and protected as they are consumed by the organization.
In this model, as I consume an IaaS or PaaS or SaaS or mobile service, security controls are an integral part of the service definition.
Security isn't an afterthought, it's baked into the consumable offering.
While this line of thinking might sound somewhat obvious, it seems that security is frequently considered somewhat after the fact vs. integrated at an architectural and service delivery level.
As a simple example here at EMC, when I sign up to use my iPad, it's automatically provisioned with the security services deemed necessary by the business. There's nothing extra for me to do, or be overly concerned about.
But the value-add around advanced security is also being transformed.
If you've ever followed my big data analytics discussion, you might remember that I've asserted that just about any critical business process is a candidate for transformation using the predictive power of big data analytics.
Since cyber-security itself is quickly becoming a critical business process, it too is starting to be transformed through the power of predictive analytics. The new approach might appear somewhat unique to security professionals, but from alternative perspectives it's just another example of a much broader pattern we're seeing most everywhere else.
In these advanced security models, the goal is in some ways to build a classic big data predictive model: oodles of diverse information sources (both internal and external), a platform to consolidate and analyze, a team of advanced analytical professionals applying data science techniques to cyber-security challenges, and then using the results to programmatically drive improvements in policies, processes and responses to specific threats.
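At its simplest, the pattern looks something like this toy sketch -- invented numbers, and a single feed where real platforms would fuse many diverse sources:

```python
from statistics import mean, stdev

# Toy version of the pattern: baseline normal behavior, flag statistical
# outliers. Numbers are invented for illustration.
logins_per_day = [12, 9, 11, 10, 13, 8, 11, 10, 12, 9]  # historical baseline
today = 47                                              # observed activity

mu, sigma = mean(logins_per_day), stdev(logins_per_day)
z = (today - mu) / sigma
if z > 3:
    print(f"alert: activity {today} is {z:.1f} sigma above baseline -- investigate")
```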
Enterprise Applications Are Being Transformed
Most enterprise applications are nothing more than instantiations of business logic. There's a better way to run a business process, and the enterprise app embodies that thinking.
If the business process being discussed doesn't lead to a particular competitive advantage in the business model, the choice is usually packaged software, consumed in a traditional or SaaS fashion.
But the line of thinking becomes quite different when the business process of choice is intended to contribute to some sort of differentiated competitive advantage. Enter the world of application platforms, app factories and the rest of it.
Indeed, ask most CIOs about which part of their operation gets the lion's share of their attention, and it is inevitably the application side of the organization. In many ways, that's where much of the business value of IT gets created.
But there's been a noticeable shift in the app world for the last few years, and we expect it to continue.
Historically, the primary rationale for enterprise applications has been improved efficiency: here's what we're spending on process X today, here's what we could be saving if we invested in automating process X. To be clear, there's nothing wrong with that approach, and it still continues to drive a fair share of enterprise application spending across the IT universe.
But the new wave of enterprise application development seems to have an entirely new motivation: value creation. It's less about doing familiar things better, it's about doing entirely new things.
As a group, the newer enterprise applications inevitably support what I've dubbed a "digital business model": a complete re-envisioning of the business proposition using entirely digital constructs.
These newer applications are almost always natively mobile, and not a mere adaptation of a familiar desktop or web presentation. They are inherently social and collaborative -- not by linking to familiar external social services, but by embedding notions of social and community and workflow into the application construct itself.
The more advanced examples support real-time decision making by users, hence a strong preference to deliver analytics in some capacity as part of the decisioning process being supported. And, within that group, there's a sub-group that is starting to harness the power of big data behind those analytical capabilities.
One recent visible example was the mobile app created as part of the Human Face of Big Data project, sponsored by EMC.
It was (a) natively mobile, (b) had social, collaboration and community constructs integrated into the experience, (c) offered up simple comparative analytics as part of the application experience, and (d) was powered by a substantial big data feed.
And -- as a result -- it over-achieved its goals in a way that traditional approaches could not possibly have matched.
These next-gen apps are built in new ways as well. First, there is an absolute focus on understanding the user experience: quantitative as well as qualitative.
How many people downloaded the app? What do we know about them? How many were still using it after the first few days? What parts of the application did users spend time in, and why? How many times did they favorite it, or recommend it to their co-workers?
These sorts of in-application analytics are increasingly becoming part of any iterative development process.
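Answering those questions requires instrumentation; here's a bare-bones sketch of the kind of event tracking implied (the event names and fields are hypothetical):

```python
import json
import time

# A bare-bones event logger -- event names and fields are hypothetical.
def track(event, **props):
    record = {"event": event, "ts": time.time(), **props}
    print(json.dumps(record))  # in practice, ship to an analytics pipeline

track("app_opened", user="u123", session=1)
track("screen_viewed", user="u123", screen="reports", seconds_spent=42)
track("recommended_to_coworker", user="u123", channel="email")
```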
Of course, new applications require new tools, new frameworks and new (agile) methodologies.
And one can't simply assume that enterprise IT will be providing the internal cloud platform for these newer applications: these newer applications must be created to be cloud agnostic, yet manageable by the IT organization.
Big Data Is Transforming Business
If you've become fatigued by the big data meme, I can't offer you much hope: it seems that the conversation has only just begun.
Business leaders around the globe are waking up to the amazing potential of the data that is lying all around them -- mostly free for the taking -- and are busily hammering together capabilities to gather, analyze and exploit big data as a new competitive capability.
From an IT-centric perspective, it's the big one: an IT-enabled capability that can really move the needle from a business perspective.
Fortunately, we as technology vendors don't have to do much to encourage this line of thinking: the business and leadership journals are as enamored as we at EMC are, and are writing at length on the topic. And this will only continue going forward.
One of the amazing aspects of big data analytics is its seemingly endless applicability. Take any handful of important business processes, apply big data analytics, and the result is inevitably a far superior business process.
This transformative effect appears to be largely independent of industry, government sector, size or geography.
Most recently, that list includes forever changing the way we think about both elections and campaigns.
Capture a business process, and benchmark its effectiveness. Now, gather multiple data sources from inside and outside the organization. Correlate them to produce increasingly more effective predictive models. Use that insight to drive new forms of the business process.
Lather, rinse, repeat.
It's a continual improvement treadmill no one wants to get off of.
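For the analytically inclined, here's a deliberately tiny sketch of that loop -- one invented data source, one process metric, and the simplest possible model:

```python
# A deliberately tiny version of the loop, with invented data: benchmark a
# process, correlate it against an outside data source, predict, repeat.
ad_spend = [10, 20, 30, 40, 50]   # an external data source
orders   = [12, 25, 31, 44, 52]   # the benchmarked process output

n = len(ad_spend)
mx, my = sum(ad_spend) / n, sum(orders) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(ad_spend, orders))
         / sum((x - mx) ** 2 for x in ad_spend))
intercept = my - slope * mx

# Use the simple model to drive the next iteration of the process ...
print(f"predicted orders at spend 60: {slope * 60 + intercept:.1f}")
# ... then re-benchmark, add more data sources, and repeat.
```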
But so many organizations aren't analytically proficient today, yet have a strong desire to achieve that state.
Here at EMC, we use this three-phase model to generally describe how we see our customers becoming more analytically proficient.
We're also using this approach in our own business, because we too are one of those many organizations who see this as something important to get good at.
The first phase sounds easy, but is actually quite difficult in practice -- it's creating an environment that encourages broad experimentation with existing data sources. In some sense, it's the pre-work required before bringing in the data science team.
The goal in this first phase is to create a "shopping mall" for enterprise data sources, complemented by an easy-to-consume platform resource replete with tools, collaboration and community.
The central challenge here seems to be less about technology, and more about the new governance and funding questions that arise when information and resources become very easy to consume :)
The second phase is relatively straightforward: identify a handful of candidate business processes, and bring in a data science team to see if there's valuable oil to be found. These experts can be internal hires, or -- more frequently -- external professionals who've done it before.
Part of the rationale for preceding this phase with the previous one is simple: the first thing these data science professionals will want is easy access to all the corporate data sources.
Inevitably, substantial business value is found, which drives additional investments in bigger environments, more data, more tools and more data science professionals. But it also drives a third phase: a wave of new application development that monetizes the new insights powered by predictive analytics.
Here, the architectural challenges can often be extreme: multiple, legacy data sources that weren't designed to stream their information capture in near-real-time; predictive analytical models that need the utmost in performance (think in-memory scale-out architectures), driving a strong interest in new tools and new architectural approaches.
New forms of magic require new magicians, and big data analytics are no exception.
The industry's clear focus is now on the new data science professional -- a critical skill set widely acknowledged to be in excruciatingly short supply for the foreseeable future. We're helping to attack this industry problem on two fronts.
The first front is to help with the creation of more data science professionals. And to support this, we're investing in certifications, partnering with academia on new coursework, and related tasks.
We've studied how proficient data science teams do their work, and have used that insight to build Greenplum Chorus as part of Greenplum's Unified Analytics Platform, or UAP.
Optimized workflows meet social collaboration, and the result has emerged as the productivity platform of choice for many proficient data science teams.
In addition, we've packaged up our own in-house data science capabilities, and have made them available, as a service, through the popular Greenplum Analytics Lab offering. The best use case is when the business has identified a few candidate business processes, and wants to make a limited investment to see if a data science-led approach offers superior results.
Based on experiences to date, it usually does exactly that.
To complement the big data picture, we're also working on newer storage architectures that are purpose-built for big data environments (Isilon as an example) and have also invested in critical application development skills, as evidenced by our recent acquisition of Pivotal Labs.
Over the horizon, there's initial work being done around highly-distributed geographically-dispersed big data architectures: use cases where it's infeasible to naively assume that all data can be sent to a central location, processed, and then disseminated outwards.
Stay tuned .. it's an interesting discussion.
IT Organizations Are Being Transformed
Up to this point, most of the discussion has been technology-focused. Yes, it's important to chart the technological changes -- but it's also important to chart the structural changes occurring in the IT organizations that use these tools.
If you're a regular reader of this blog, you know I've covered this one at considerable length.
For the casual reader, here's the essence: the fundamental model of most IT organizations will look more like an IT service provider over time, and less like a traditional IT set of silos: new roles, new skills, new processes and new measurement systems.
This isn't some abstract theory: our own EMC IT organization has served as an early adopter in this regard. We've since been able to apply the lessons we've learned to a wide range of enterprise IT organizations with considerable variability in size, industry and geography.
Even if our internal experience won't be identical to yours, I've found there's always something to be learned from someone else who's made multiple mistakes :)
The fundamental organizational construct within any ITaaS function is service delivery teams: infrastructure, application, et al. Each is run (for the most part) as a "business": competing for internal customers, including other IT functions.
The resulting measurement and incentive system is what you'd find inside any competitive IT service provider: market share, customer sat, margin, etc.
Running an enterprise IT organization like a competitive IT service provider isn't exactly a *new* idea -- it's been around for a while -- but it's a relatively new thought to many IT leadership teams these days.
You've probably seen me share this "net net" slide that our CIO shared with the leadership team near the end of 2011.
The blue line is a measure of agility: the declining time it takes (on average) to provision a set of services for a typical new business requirement. I can vouch for this in my own experience: anytime I need some IT services to do something, they're typically not the bottleneck anymore -- my team is!
The green line shows the proportion of IT spend going towards new, value-creating initiatives instead of just keeping the lights on. By mid-2011, 42% of EMC's IT spend was in the value-creation category; by mid-2012, that had climbed to around 54%.
Keep in mind, the total amount we spend on IT is decided by our finance team as part of our overall business model; fortunately, our IT team is not in charge of rationing demand, just delivering the IT services we business people want to consume at a competitive cost.
If I, as a business user, over-consume on IT, that's becoming a discussion I have with my friendly financial controller, and not the IT group :)
The investment in IT transformation here inside EMC has also produced some amazing next-gen internal capabilities as an outcome.
For example, this summer we went live with an Oracle E-Business Suite to SAP migration. If you wanted to name one core application that our business depends on, this would be it.
Not only did the IT team save the company a pile of money by running it on our private cloud, the new environment is far more agile and performant than the previous one, opening up all sorts of new possibilities that couldn't be considered when running on traditional physical infrastructure.
Like most large IT organizations, we build a non-trivial number of enterprise applications across our business.
The EMC IT team has extended ITaaS concepts to create a next-gen application factory that's producing amazing benefits in terms of productivity.
New business ideas can be prototyped quickly, feedback applied using agile methodologies, and fast-iterated to production vs. the traditional requirements-gathering waterfall approach.
We, like many companies, are moving quickly to a mobile-first strategy.
Our workforce is comprised largely of mobile knowledge workers; our partners are similarly mobile, as are our customers.
If we (as EMC) are going to meet their needs, it's pretty clear that we have to drive mobile application proficiency across the entire organization, and not just within the four walls of IT.
The response was simple and elegant: a mobile applications CoE (center of excellence) to act as a resource for any business unit desiring a mobile-first application experience for their audience. All the pieces are there, ready to consume: tools, enterprise app store, methodologies, secure enterprise mobile container, and more.
And -- already -- I am told that there are more "mobile first" applications in the pipeline than traditional ones.
Finally, like most organizations, we're completely seduced by the transformative power of big data and predictive analytics in our own business.
We're investing in becoming more proficient, and our IT team has responded with an "analytics-as-a-service" platform that makes various information sources easy to discover, easy to consume and easy to experiment with -- thus enabling broader proficiency across the organization.
To be clear, we already had a full complement of traditional data warehouses and reporting tools.
This particular platform is different: it's a place to discover and experiment with data vs. operational reporting on the business.
Across all of these transformational topics, we've taken what we've learned during the last few years, and have worked to package that learning so it's more consumable by other enterprise IT organizations embarked on journeys similar to ours.
In addition to the predictable technology discussions, we have acquired a good body of process knowledge: not only the processes needed in any ITaaS model (e.g. showback/chargeback), but perhaps more importantly the organizational change processes that become so central to any IT transformation discussion.
Perhaps the most visible component of our investment has been the extremely popular industry certifications we've introduced along the way: taking best practices, and making them directly usable by our customers and partners in a hands-on setting.
IT = Opportunity
Here at EMC, you might now appreciate why we are so extremely bullish on the opportunities for our customers and partners going forward.
IT becomes strategic in this context, and not just a convenient way to save money.
Yes, new skills and new roles are needed: not only within IT, but within the business as well.
And there's plenty of new technology -- here today and coming soon -- that makes all of this very achievable.
I suppose that's exactly what makes all this IT stuff so much fun ...