September 10, 2010
Vol. 32 Issue 19
Page 35 in print issue
Identify Data Center Weaknesses
Move Toward Fixing Points Of Failure
• Develop a comprehensive monitoring plan that includes staff time and vendor-supplied tools to help identify potential weaknesses.
• Choose tools that work well together and cover numerous issues such as power consumption and flow telemetry.
• Schedule regular checkups for all the automated monitoring tools in place to make sure they haven’t become points of weakness themselves.
Every data center has potential areas of weakness, ranging from less-than-ideal security strategies to infrastructure issues, network glitches, and physical building problems. Unfortunately, there’s no single product that can ferret out all of the weaknesses, but there are tactics for finding points of failure, which is the first step in creating a more secure, reliable, and efficient data center. Here are some strategies for building strength.
Start With Monitoring
Data center managers concerned about finding weaknesses aren’t alone. In a recent survey of data center users, Emerson Network Power noted that there’s a growing concern regarding adequate monitoring and data center management capabilities. Also notable was increased concern over data center availability and heat density.
“With these three top concerns in mind, it is important for data center managers to track power and cooling consumption and compare it to capacities,” says Ashish Moondra, senior product manager for power products at Avocent (www.avocent.com), an Emerson division. “This will help data center managers anticipate areas of weakness and act quickly to fix them.”
Although it’s still important to walk around a data center’s physical space and check connections and gauge temperatures, most weaknesses will likely be found through monitoring software, note many experts. These tools are the only way to gain visibility into the network to determine potential problem areas, says Joe Yeager, a product manager at Lancope (www.lancope.com), a network performance and security monitoring firm.
“End-to-end visibility gives data center managers the situational awareness necessary to formulate a proactive plan of attack and reveals the effectiveness of actions taken to bolster data center health,” he notes.
Visibility should be pursued at every level, he adds, even in the virtual layer: “Visibility into the virtual environment to the same level of the physical environment should be a target for every data center.”
Find The Right Tools
Due to the breadth of infrastructure and operations, there’s no silver bullet piece of software that can accomplish a comprehensive vulnerability assessment, according to Ozzie Diaz, CEO of AirPatrol (www.airpatrolcorp.com), developer of wireless security solutions. However, periodic penetration testing and automated compliance auditing can be helpful, and for highly dynamic threats such as viruses, monitoring tools can prevent them from spreading.
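Automated compliance auditing of the kind Diaz describes boils down to comparing observed host settings against a required baseline and flagging deviations. The sketch below illustrates the idea in miniature; the setting names and baseline values are hypothetical, not drawn from any specific standard or tool.

```python
# Minimal sketch of an automated compliance audit: compare a host's observed
# settings against a required baseline. Setting names are illustrative only.

BASELINE = {"ssh_root_login": "disabled", "telnet": "disabled",
            "password_max_age_days": 90}

def audit(observed):
    """Return {setting: (expected, actual)} for every deviation from BASELINE."""
    return {key: (want, observed.get(key))
            for key, want in BASELINE.items()
            if observed.get(key) != want}

host = {"ssh_root_login": "disabled", "telnet": "enabled",
        "password_max_age_days": 180}
print(audit(host))
# {'telnet': ('disabled', 'enabled'), 'password_max_age_days': (90, 180)}
```

A real audit tool would pull the observed settings from configuration management or an agent on each host, but the compare-against-baseline step is the same.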
Also vital is watching power utilization, adds Moondra. He notes, “Efficiency and total cost of power in the data center is becoming increasingly important for data center managers to control. Unfortunately, the reality is that most data center managers don’t have visibility into actual energy consumption.”
With monitoring tools, managers can gain insight into consumption, figure out costs, and make strong availability and efficiency decisions that can optimize the usage of all physical infrastructure components, Moondra believes.
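The consumption-versus-capacity comparison Moondra recommends can be sketched as a simple threshold check. The rack names, capacities, and wattage readings below are hypothetical illustration data, not output from any vendor's tool.

```python
# Sketch: flag racks whose measured power draw approaches rated capacity,
# per the advice to compare consumption against capacities. All figures
# are hypothetical.

RACK_CAPACITY_WATTS = {"rack-a1": 5000, "rack-a2": 5000, "rack-b1": 8000}

def flag_overloaded_racks(readings, threshold=0.80):
    """Return racks whose draw meets or exceeds the given fraction of capacity."""
    flagged = {}
    for rack, watts in readings.items():
        utilization = watts / RACK_CAPACITY_WATTS[rack]
        if utilization >= threshold:
            flagged[rack] = round(utilization, 2)
    return flagged

readings = {"rack-a1": 4200, "rack-a2": 2500, "rack-b1": 6800}
print(flag_overloaded_racks(readings))
# {'rack-a1': 0.84, 'rack-b1': 0.85}
```

In practice the readings would come from metered PDUs or branch-circuit monitoring, polled on a schedule, with the threshold set below 100% to leave failover headroom.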
One general caveat is that monitoring tools typically watch only ingress and egress points and miss activity within the data center itself, says Yeager. “Packet sniffer technologies simply don’t scale within the data center, while syslog and SNMP monitoring solutions don’t give the level of granularity required,” he says.
Visibility into system-to-system activity is further complicated by virtual environments, which often mask VM (virtual machine)-to-VM communications, he adds, and seeing those communications is critical for securing the network, as well as managing network and application performance.
He recommends that IT departments use some type of tool that utilizes flow telemetry—available in most routers and switches—which can monitor, troubleshoot, and secure the data center across both physical and virtual environments.
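Once flow records are exported from routers and switches, the core analysis is aggregation: summing bytes per conversation to surface top talkers, including the VM-to-VM traffic that perimeter-only monitoring misses. The record format below is a simplified stand-in for decoded flow telemetry, not any vendor's actual export schema.

```python
# Sketch: aggregating flow records (e.g., as exported via NetFlow/sFlow)
# into top conversations. The (source, destination, bytes) tuples are a
# simplified, hypothetical stand-in for real decoded flow records.
from collections import defaultdict

flows = [
    ("vm-web-01", "vm-db-01", 9_500_000),
    ("vm-web-01", "vm-db-01", 4_200_000),
    ("vm-app-02", "vm-db-01", 1_100_000),
    ("edge-fw",   "vm-web-01",  600_000),
]

def top_conversations(flows, n=3):
    """Sum bytes per (src, dst) pair and return the n largest conversations."""
    totals = defaultdict(int)
    for src, dst, nbytes in flows:
        totals[(src, dst)] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

for (src, dst), nbytes in top_conversations(flows):
    print(f"{src} -> {dst}: {nbytes / 1e6:.1f} MB")
```

Note that the two VM-to-VM conversations dominate the totals here; that is exactly the east-west traffic a perimeter sniffer would never see.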
Create A Schedule
With knowledge of monitoring in place and tools that can help accomplish the task, it’s time to set a schedule that makes sense for the center. Nearly all monitoring software works constantly and provides notification of potential problems, but there still needs to be a regular check of the tools themselves to make sure they’re tracking issues properly and sending notifications to the right people.
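One way to verify that the monitoring tools themselves are still alive is a heartbeat check: if a tool hasn't reported within its expected interval, it may have silently failed. The tool names, timestamps, and one-hour threshold below are hypothetical.

```python
# Sketch: flag monitoring tools whose last heartbeat is stale, so a dead
# collector doesn't quietly become a point of weakness. All names and
# timestamps are hypothetical.
from datetime import datetime, timedelta

def stale_tools(last_heartbeats, now, max_age=timedelta(hours=1)):
    """Return tools whose last heartbeat is older than max_age."""
    return sorted(tool for tool, seen in last_heartbeats.items()
                  if now - seen > max_age)

now = datetime(2010, 9, 10, 12, 0)
heartbeats = {
    "power-meter-poller":  datetime(2010, 9, 10, 11, 55),
    "flow-collector":      datetime(2010, 9, 10, 9, 30),  # silent 2.5 hours
    "temp-sensor-gateway": datetime(2010, 9, 10, 11, 59),
}
print(stale_tools(heartbeats, now))  # ['flow-collector']
```

A companion check would periodically trigger a synthetic alert to confirm notifications actually reach the right people, not just that the tools are running.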
Moondra recommends that managers using monitoring software pull reports once a month. Reports can be scheduled for automatic generation, but managers need to set aside time to actually study them and see if any trends are forming.
“The reports can provide insight [into] capacity trends so managers can have a clear view of the status of their data centers and are able to make proactive decisions based off of report results,” says Moondra.
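The trend analysis Moondra describes can be as simple as projecting recent month-over-month growth forward to estimate when capacity runs out. The kW readings below are hypothetical illustration data; a real report would draw on the monitoring system's historical database.

```python
# Sketch: project months remaining before a power capacity limit is hit,
# using the average month-over-month growth from recent reports. The
# readings are hypothetical.

def months_until_capacity(monthly_kw, capacity_kw):
    """Estimate months of headroom left, or None if usage is flat/shrinking."""
    deltas = [b - a for a, b in zip(monthly_kw, monthly_kw[1:])]
    growth = sum(deltas) / len(deltas)  # average monthly change in kW
    if growth <= 0:
        return None  # no projected exhaustion
    headroom = capacity_kw - monthly_kw[-1]
    return headroom / growth

readings = [40.0, 42.5, 45.0, 47.5]  # kW, last four monthly reports
print(months_until_capacity(readings, capacity_kw=60.0))  # 5.0
```

Flagging a row of racks five months before it saturates is precisely the kind of proactive decision monthly report reviews are meant to enable.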
Although these types of regular check-ups are necessary, it’s useful to automate as much as possible, Yeager adds. Managers should set tools to automate whatever can reduce the burden of daily administration, he says, so that more focus can be put on tasks that can’t be automated, such as manual tests and training programs.
Address Design Issues
Automated tools can go a long way toward identifying weaknesses and helping managers to strengthen their environments, but sometimes, the weakness lies in overall data center design. If a vulnerability assessment uncovers design challenges, they can typically be addressed by consolidation, standardization, and virtualization, according to Peter ffoulkes, vice president of marketing at Adaptive Computing (www.adaptivecomputing.com).
With consolidation, he recommends eliminating “siloed” resources that may be underutilized yet are unable to be used for other purposes. The goal is to develop a single, flexible pool of resources that can easily be applied to support multiple business services, he notes.
Standardization can be used to reduce the number of different hardware and software architectures that are necessary to deliver the required business services, ffoulkes notes. In terms of virtualization, he says, “The major benefit comes from the logical separation of the software stacks that deliver business services from the underlying infrastructure, thus removing many dependencies and allowing the data center to be considered as a ‘flat computing fabric’ where the application stacks can be deployed on demand.”
The goal in employing these strategies is the same as using monitoring tools and hiring security consultants to do risk assessments: to find weaknesses that might compromise a data center and lead to downtime. By bringing together multiple strategies, tools, and tactics, as well as automating processes and looking at all aspects of a center, IT managers will gain visibility into the whole center, and through those multiple perspectives, they can do some strength training, data center style.
by Elizabeth Millard
TOP TIPS
• For an overall view of the data center, which can establish a baseline, bring in a security consultant or system integrator who can offer vulnerability and threat assessment services and penetration testing.
• Automated monitoring tools that include a rating system to prioritize areas of weakness can be beneficial, because notifications can then be tuned so the most critical issues reach the right people first.
• Automate the provisioning and management of a monitoring system in order to allocate resources to users according to organizational policies, business priorities, and service-level agreements.