One of the more painful topics in the storage business that never gets much of a public airing is data migration: the inevitable chore of moving all that information from hardware that's outlived its usefulness to a new storage array, and doing so in a way that minimizes effort and disruption.
Storage hardware ages like any other part of the IT infrastructure. New gear is faster, cheaper, and better; old gear becomes progressively more expensive and difficult to manage.
Occasionally the motivation to move is a specific feature, or perhaps improved performance and capacity. More often, the rationale is straight economics: it's cheaper to run the new gear than to keep the old stuff around.
If you're a smaller IT shop, data migrations are periodic annoyances that only manifest themselves when a new array is coming in and an old one is being decommissioned or repurposed. But if you're a larger shop, you've likely got an extended fleet of storage devices, and there are always new ones coming in and old ones going out.
Simple numbers tell the story.
Imagine a typical larger enterprise with 3-5 petabytes of data under management, often much more. Storage arrays, generally speaking, are kept in service for 3-5 years.
That means that, on average, you'll be moving roughly a petabyte of data every year. There are roughly 200 working days per year, so that's 5 terabytes of data movement per day, every day -- just to keep the inventory current!
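The arithmetic above can be sketched in a few lines. This is purely illustrative: the 4 PB fleet and 4-year refresh cycle are assumptions chosen from the middle of the 3-5 ranges mentioned above.

```python
# Back-of-envelope migration math (illustrative assumptions:
# a 4 PB fleet and a 4-year refresh cycle, midpoints of the
# 3-5 PB and 3-5 year ranges).
fleet_tb = 4 * 1000          # 4 PB under management, expressed in TB
lifespan_years = 4           # arrays are refreshed every 4 years
working_days = 200           # rough working days per year

tb_per_year = fleet_tb / lifespan_years   # data that must move each year
tb_per_day = tb_per_year / working_days   # steady-state daily migration load

print(f"{tb_per_year:.0f} TB/year, {tb_per_day:.1f} TB/day")
```

Plug in your own fleet size and refresh cycle and the steady-state daily load falls out directly.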
The future isn't going to help us here: data volume under management grows by something in the range of 30%-60% per year at most larger IT shops. The arrays are getting ever-more capacious. Tolerance for downtime isn't growing. And that's before we start talking about "big data" or anything like that.
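To see how quickly that compounds, here's a short sketch projecting the daily migration load forward. The inputs are hypothetical: a 45% annual growth rate (the midpoint of the 30%-60% range above) applied to the 5 TB/day figure from the earlier back-of-envelope math.

```python
# Sketch of how compounding data growth inflates the daily migration
# load. Hypothetical inputs: 45% annual growth (midpoint of the
# 30%-60% range), starting from today's 5 TB/day estimate.
growth_rate = 0.45
tb_per_day = 5.0
projection = []
for year in range(1, 4):
    tb_per_day *= 1 + growth_rate
    projection.append(round(tb_per_day, 1))

print(projection)  # daily TB to migrate in years 1, 2, and 3
```

At that rate the daily load roughly triples within three years, which is why a process that's merely annoying today becomes untenable without better tooling.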
If you're immersed in the world of running enterprise storage gear, data migration is probably something you'd like the industry to get much better at.