I was on vacation last week, but I couldn't help noticing all the furor over the Amazon outage.
"OK", I thought, "an infrastructure provider had a bad day -- it happens." Just the same as any power utility, or an airline, or a phone service, or a railroad, or any other provider of shared infrastucture services.
But the Amazon outage carried deeper significance: for many, it has become a symbol of All Things Cloud, especially the public, commodity kind.
And, like many public events, our collective reactions probably say more about us than we'd like to admit.
In A Nutshell
A significant hunk of AWS was down for an extended period. Hundreds of Amazon's customers -- large and small -- were impacted. Communications regarding the outage -- its nature and duration -- were mostly limited to tersely worded paragraphs on Amazon's web site.
Some users of the service were prepared for the inevitable: they had either accepted the inherent risks of the service, or had made alternative plans as needed.
Other users hadn't really thought through the impact of the service being unavailable for an extended period. To put it simply, they were in a world of hurt, and didn't have much of a plan other than wait it out and hope that their applications could be restored eventually.
What I found thoroughly enlightening was the extremely broad range of reactions, from "duh!" at one extreme to "the world is ending!" at the other.
For me, it was yet another object lesson in how we -- as human beings -- perceive and react to risks.
And as more of our day-to-day lives come to depend on IT, perhaps the entire subject of how we perceive and react to IT-associated risks bears closer watching.
Perceptions Of Risk
The study of how we all perceive and react to risk can be quite fascinating. Not just limited to psychology, the topic can span many disciplines: economics, sociology, politics -- even health care.
We human beings often act in seemingly irrational ways in the face of risk.
A frequent example is travel. Many people see air travel as "riskier" than driving, even though the data overwhelmingly shows the opposite. You don't have to go far to find similar examples in gambling, energy policy -- even medical treatments.
A quick tour through Wikipedia shows that there are three schools of thought that attempt to rationally explain our collective irrationality.
One framework is based on psychology. A second looks at sociological and cultural factors. And yet a third (SARF) focuses on societal amplification of perceived risks -- based on the observation that we (as social animals) tend to look to others to determine how we feel about something.
Here Is Where Things Get Important
Much of what is being done in IT today (cloud, big data, mobility, etc.) involves doing things in new and different ways. Doing things differently moves the risk profile from the known into the relatively unknown.
I would argue that our natural human tendency to either overstate or understate real risks can greatly affect the rate of IT progress.
Go too fast (understating risks) and you'll inevitably end up having a bad day, reducing the rate of overall progress. Go too slow (overstating risks) and you'll likely miss out on the new opportunities and advantages of the new ways of doing things.
So if you see yourself as an "agent of change" using IT-based solutions to do things better, faster, cheaper, etc. -- you'd do well to arm yourself with a basic understanding of how human beings perceive risks.
And the answer isn't always "more data" ...
If you skip down the Wikipedia entry to the section on the psychometric paradigm, you'll find a key insight.
All things being equal, the greater people perceived a benefit, the greater the tolerance for a risk. If a person derived pleasure from using a product, people tended to judge its benefits as high and its risks as low. If the activity was disliked, the judgments were opposite. Research in psychometrics has proven that risk perception is highly dependent on intuition, experiential thinking, and emotions.
If that wasn't enough for you, skip down a bit to this discussion around SARF -- the social amplification risk framework:
The Social Amplification of Risk Framework (SARF) combines research in psychology, sociology, anthropology, and communications theory. SARF outlines how communications of risk events pass from the sender through intermediate stations to a receiver and in the process serve to amplify or attenuate perceptions of risk. All links in the communication chain, individuals, groups, media, etc., contain filters through which information is sorted and understood.
The framework attempts to explain the process by which risks are amplified, receiving public attention, or attenuated, receiving less public attention. The framework may be used to compare responses from different groups in a single event, or analyze the same risk issue in multiple events. In a single risk event, some groups may amplify their perception of risks while other groups may attenuate, or decrease, their perceptions of risk.
This should be a flashback moment for anyone who's presented a project proposal to a large group.
On one side, you have the folks who see the upside and are minimizing the risks involved. On the other side, you have the folks who don't see the upside and are very concerned about real and imagined risks.
And then the fun really begins ...
Users See Benefits, IT People See Risks
These observations go a long way toward explaining the inevitable divide between users of IT and providers of IT. The users see the direct benefits, so they tend to minimize the risks. The providers don't see the direct benefits, so they tend to maximize the risks.
And, very often, IT leaders are left in the position of closing that gap.
Years ago, I met a crusty IT manager and we got to talking about this asymmetry in perception. He walked over to his bookshelf, and pulled out one of many binders. On each page were ugly headlines and press clippings of various organizations having a "bad IT day" in a very public way. The articles went back to the late 1970s. There were several hundred to choose from.
Any time he felt that the user community was getting a bit ahead of itself, he took out his binder, made some photocopies, and sent them around. I thought that was rather clever.
Back To "Cloud"
It's hard for some IT people to understand just how seductive some of these external cloud services can appear to business people trying to get stuff done.
Low entry costs. Pay for what you use. Get going almost immediately. Scale up or down as conditions dictate. Access to advanced application functionality. No finance meetings, no IT meetings, etc. -- just get on with it.
Get exactly what you want, when you want it. How attractive is that?
So it's really no surprise that the natural tendency for users will be to minimize any risks associated with moving to an external service.
Now, turn the lens around to an IT perspective. There's poor visibility into how the service provider actually delivers the service. Add to that a very limited ability to control service delivery the way they can with their internal resources. Making matters worse, there's no dedicated account team to call when the brown stuff hits the fan.
Is it any wonder that most IT people look askance at many of these external services?
Finding That Middle Ground
The economic and strategic case for building IT out of a rationalized blend of internal and external services is proving to be too compelling to ignore. Call it hybrid cloud, call it workload rightsizing, call it intelligent outsourcing -- the name matters less than the concept.
Focus internal IT resources on stuff that they're uniquely qualified to do. Create a framework to progressively use increasing proportions of external services. It's the same sort of transition that other corporate functions have already gone through.
Conceptually, I think two things will serve to bridge the gap between these divergent viewpoints.
From the user side, a reasonable governance framework can temper buoyant enthusiasm with pragmatic concerns. I've written about this a few times, and -- regardless of whether or not you do it with EMC -- the approach appears to be working as advertised. Fair warning: there's heavy lifting involved.
From the IT side, there's the emergence of specialized service providers who deliver the transparency, control, and accountability that IT organizations need. EMC is very focused on investing in this newer breed of service provider, and there are already more compelling offerings in the market than many IT leaders realize. Indeed, if you've been following the success of VCE and Vblocks, many of them are landing in precisely this type of service provider.
But this space is moving very fast indeed. I'd encourage IT leaders to invest a few cycles to stay abreast of what's currently out there from service providers, and to continue to provide them direct feedback on IT's needs.
The Road Ahead
IT is going through a seismic shift, one where we will end up thinking about it more as a set of services we consume, rather than assemblies of various technologies and processes. Pick your favorite label; I use "cloud".
Most enterprises will inevitably want to be "cloud enabled" to various degrees. Doing so will require fundamentally changing the traditional role of IT.
Any such momentous undertaking inevitably involves risks, and -- more importantly -- how people perceive those risks: either enthusiastically understated or pessimistically overstated.
I believe that successful IT leaders who can initiate and manage this change will inevitably find themselves in a role of bridging these two camps.
And, although facts will be useful, a deeper understanding of how we collectively tend to perceive risks may ultimately be more important.