August 27, 2010
Vol. 32 Issue 18
Page 1 in print issue
Inside The Storage Refresh Cycle
Investigation & Planning Are Key To A Successful Strategy
Employees are frantically running through the halls, and customers are burning up the phone lines. A casual observer of this chaos might assume that a catastrophe just struck the company, but instead, a storage server failed, in turn severing access to critical data. What might seem like a trivial hardware nuisance to outsiders is big, bad news to organizations that rely heavily on their data, prompting data center and IT managers to increasingly focus on storage refresh cycles.
• Your company's policy should dictate when storage refreshes will ideally occur, but the length between refreshes can differ among organizations.
• Telltale signs such as crashes and slowdowns can indicate a refresh requirement, but keep in mind that some problems might just require tweaking existing systems.
• Although a reactive approach can be necessary under a tight budget, experts recommend a proactive strategy when considering storage refreshes.
According to a recent report published in The Wall Street Transcript, virtualization technologies and the cloud are generating an increased need for storage, but the lackluster economy has forced many IT organizations to delay upgrades. However, if the economy continues to improve, an industry-wide refresh cycle, particularly for storage, is going to occur. In the meantime, managers must know how to recognize the signs that storage upgrades or replacements are required, and doing so means donning an investigator's cap and seeking the telltale signs.
The Cycle Strategy
According to liffie McKay, operations manager at DMD Systems Recovery (www.dmdsystems.com), the correct time to upgrade or refresh storage equipment should depend on your companyâ€™s policy. For example, some companies have a three-year refresh cycle, others have a five-year cycle, and still others simply run the equipment until it dies.
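A fixed-length policy like the ones McKay describes reduces to simple date arithmetic. The sketch below is illustrative only (the function name and the sample purchase date are assumptions, not from the article); it shows how a three- or five-year cycle translates into a refresh due date.

```python
from datetime import date

def next_refresh(purchase: date, cycle_years: int) -> date:
    """Return the date a fixed refresh policy would next come due.

    Assumes a simple calendar policy: refresh exactly cycle_years
    after purchase. (A real asset tracker would also handle edge
    cases such as a Feb. 29 purchase date.)
    """
    return purchase.replace(year=purchase.year + cycle_years)

# A hypothetical storage array bought mid-2010 under a three-year policy:
print(next_refresh(date(2010, 6, 1), 3))  # 2013-06-01
```

A "run it until it dies" policy, the third option McKay mentions, has no such due date, which is exactly why it trades predictable budgeting for unpredictable downtime.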
"While there are advantages to each option, companies must choose the method that limits their downtime and maximizes their budgets," McKay says. "Companies also need to take into consideration virtualization and software requirements when analyzing the equipment they are currently using."
He adds that the length between storage refreshes can affect not only the ability to reliably access data but also the ability to recoup money from older equipment: 3-year-old equipment might still hold value, but older equipment might actually cost the organization money to remove. Other factors also come into play. Joel Hagberg, vice president of enterprise marketing for Toshiba America (www.toshiba.com), notes that storage upgrades depend on the application, the storage system architecture, storage device duty cycles, and component reliability. For example, a high-performance SAN/NAS system might be composed of 10,000rpm or 15,000rpm hard drives that can handle heavy workloads over a number of years.
"The drives in these systems typically exceed 1.2 million hours MTBF [mean time between failures] and can tolerate significant use over a three- to five-year period. But in many of these systems, the refresh to new higher-capacity, higher-performance drives occurs approximately every 18 months," Hagberg explains. "This would imply that a refresh cycle on a three-year program would be able to take advantage of two generations of hard disk performance and technology advances."
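To put the 1.2-million-hour MTBF figure in perspective, it can be converted to an annualized failure rate. The sketch below uses the standard exponential-lifetime assumption (an assumption on my part, not something Hagberg states) for a drive powered on around the clock.

```python
import math

HOURS_PER_YEAR = 24 * 365  # 8,760 power-on hours for a drive running 24/7

def annualized_failure_rate(mtbf_hours: float) -> float:
    """Probability a single drive fails within one year of continuous use.

    Assumes drive lifetimes are exponentially distributed, the usual
    simplification when reasoning from a vendor MTBF figure.
    """
    return 1.0 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

# The 1.2-million-hour MTBF cited for enterprise SAN/NAS drives:
print(f"{annualized_failure_rate(1_200_000):.2%}")  # roughly 0.73%
```

A failure rate under 1% per drive per year is why such drives can credibly serve three to five years; the 18-month refresh Hagberg describes is driven by capacity and performance gains, not by wear-out.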
Yet even when timeframes are set for regular storage equipment refreshes, problems can crop up that can force IT staff to replace or upgrade devices before the regular refresh interval arrives. Hagberg says that if systems begin to noticeably slow down or if you hear significant changes in the audible seeking noise of storage devices, the equipment might be failing. Other potential signs of storage hardware failure include system lockups or freezes, unexpected system shutdowns, slow startup times, and persistent system or application crashes.
"Other signs may point to a necessary upgrade but in fact may require something else," says Elaine Pleshek, senior manager at Crossroads Systems (www.crossroads.com). "In the area of data backup, issues with scheduled backups exceeding their allocated time windows may lead an IT manager to believe that a technology bump is needed when in fact the current infrastructure may be perfectly adequate and simply need to be reevaluated and adjusted. . . . It may be that newer, faster disks and tape drives are required, or it could mean that the storage network needs to be upgraded because the switch is now the bottleneck, or it could simply mean that configuration changes need to be made to better balance the system."
A down economy tends to have a pervasive effect on the storage refresh cycle. Although IT managers might know it's best to take a proactive approach to upgrading or refreshing storage devices, a perpetual lack of funding might force them into a reactive strategy in which equipment is replaced only when absolutely necessary. The good news is that as the economy slowly improves, the recommended proactive approach can again assume a more prominent role in the overall storage plan of IT organizations.
"Proactive planning is a proven strategy for saving money and reducing disruptions to business," Pleshek says. "Proactive planning should include not only solid visibility into the obsolescence roadmaps from the storage vendors but also a full understanding of the data growth and asset utilization within the IT manager's own infrastructure. In order to have a clear understanding of the efficiency of a storage environment, it is critical to have the monitoring and assessment tools in place."
Part of the proactive process involves looking at both current technology and innovations that are on the horizon. For example, Pleshek says that in terms of tape storage, most technologies are capable of reading media written by up to two earlier generations. With this in mind, the release of LTO-5 tape technology should prompt administrators who currently use LTO-2 and LTO-3 infrastructures to start planning upgrades before their data can no longer be read.
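The "two earlier generations" rule Pleshek cites is easy to encode, and doing so shows why LTO-2 users in particular are on the clock. This is a minimal sketch of that compatibility rule only (function name is mine); real drive firmware and write compatibility follow additional vendor rules.

```python
def lto_can_read(drive_gen: int, media_gen: int) -> bool:
    """Rule of thumb from the article: a tape drive reads media written
    by its own generation and by up to two earlier generations."""
    return 0 <= drive_gen - media_gen <= 2

# An LTO-5 drive against existing tape libraries:
assert lto_can_read(5, 3)       # LTO-3 media is still readable...
assert not lto_can_read(5, 2)   # ...but LTO-2 tapes need migration first
```

Under this rule, an LTO-5 rollout makes LTO-2 media unreadable on the new drives and leaves LTO-3 media one generation from the same fate, which is precisely why Pleshek advises planning the migration before the data becomes stranded.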
"The goal of looking forward and anticipating growth will provide a realistic framework for businesses to accelerate as well as allocate resources where they're needed," says Lee Johns, marketing director for unified storage at HP StorageWorks. "By planning ahead with storage upgrades, businesses can consolidate technology assets into pools of resources that can be leveraged on the fly, managed universally, and optimized for any workload."
by Christian Perry
Identifying Virtual Gaps
As organizations make the move from physical to virtual infrastructures, gaps in storage infrastructures can materialize. Lee Johns, marketing director for unified storage at HP StorageWorks, lists the following signs that can help identify these gaps.
Availability. Assessing disaster recovery plans is an excellent way to evaluate storage infrastructure. Many older storage systems may include basic RAID or redundant controllers. A question to ask is whether a disaster recovery plan can be improved with features such as snapshots, local or remote replication, and multisite failover.
Manageability. The question here is how easily changes can be made to the storage infrastructure. In both physical and virtual environments, it is crucial that changes are made without disruption and in a way that does not increase management overhead.
Performance. If application performance is affected by new physical and virtual machines, there may be a need for an architecture that can scale both capacity and performance in a linear, rather than stair-step, model. This simplifies scalability and enables automatic load balancing.
Utilization. Rigid storage infrastructures result in underutilization. This becomes particularly apparent when deploying new virtual machines and evaluating the capacity allocated for new projects.