November 20, 2009 • Vol.31 Issue 28

The Future Of Servers
What’s In Store For The Data Powerhouses In Your Facility

Key Points

• Analysts expect the trend of growing data and shrinking footprints to continue.

• Zero downtime and greater manageability will continue to be ultimate goals of server manufacturers.

• Non-x86 servers will continue to compete with x86 servers.

Daydreaming about the future of servers might lead to thoughts of machines the size of matchboxes, thought-operated servers, or other space-age technology. Although these advances are not likely in our lifetimes, there are sure to be new developments in the decades to come. Here is a look at what some analysts are predicting.

More & Less

The next decade will likely see a continuation of the long-term trends toward increasing densities and ever-smaller footprints. A typical hard drive 10 years ago held about 40GB; now, you can buy USB drives that size. Greg Schulz, principal analyst with StorageIO Group, expects a continued focus on density, more processing power at lower cost, less power needed per gigahertz per core, and more memory, all in a smaller footprint.

According to Schulz, many factors will drive density, including improved power supplies, intelligent cooling solutions, and blade servers. Also, he says, “Keep an eye on Ethernet going to 100Gb, perhaps even on some servers, and increased use of enhanced, high-performance, low-latency wireless.”

Schulz expects that the amount of data SMEs need to store will continue to grow. “There is no such thing as a data or information recession,” he says.

As for what’s next in server sizes, smaller form factors appear inevitable. Whatever those machines end up being called, Schulz predicts that the term blade server will be obsolete in another five or so years, as small footprints become the norm.

Zero Downtime

One important area to consider is downtime, both planned and unplanned. Over the years, we have referred to three-nines and five-nines of availability. But if current trends continue, there is the distinct possibility of no downtime at all. Unix system vendors are already working on just that; they can already perform concurrent maintenance on some hardware and software.

“Unix can maintain or repair some hardware and software while most of the system is still running,” says Dan Olds, an analyst with Gabriel Consulting Group. “I see this ability only getting better over time.”
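
For a sense of scale, a quick back-of-the-envelope calculation (illustrative figures, not from the article): with about 525,600 minutes in a year, three nines of availability (99.9%) still permits roughly 0.001 x 525,600, or about 526 minutes—nearly 8.8 hours—of downtime per year, while five nines (99.999%) permits only about 5.3 minutes. That gap helps explain why zero downtime is the next target.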

Placing redundant parts inside a system is another way to eliminate downtime: spare processors already exist that can be switched on at will. Eventually, there might be redundancy for nearly every component.

“I see more redundancy to the point where there are really no single points of failure in a system,” says Olds.

Greater Manageability

It often seems that IT is more labor-intensive and complex than it needs to be. Although the complexity will likely never go away, the labor required to manage it might. This is particularly true in the non-x86 camp, and the good news is that those improvements are carrying over to overall data center manageability, too.

“The trend of the non-x86 system vendors to push to make their systems more easily managed will continue, and these management schemes are being extended beyond a single system or set of systems to the entire data center,” says Olds.

And the good news for the x86 world is that this will inevitably have a bleed-over effect, so expect x86 systems to continue to become easier to manage, better integrated, and more able to scale.

More Appliances

Mike Karp, an analyst with IT consultancy Infrastructure Analytics, is concerned by the trend of using dedicated appliances to perform individual server functions. These appliances, now used in data centers for backup, security, and email archiving, could have a negative impact on long-term efficiency, he says.

“Appliances essentially are ad-hoc servers that are easy to install, but their overall effect on IT may actually be to diversify IT management rather than to consolidate it,” says Karp. “In other words, what was bought to simplify a single critical function may in the long run actually prove to be adding to IT complexity.”

x86: To Be Or Not To Be?

Since the mid-’90s, analysts have predicted the death of the mainframe, Unix, and other systems. They even termed these systems “legacy” and forecast the day when there would only be systems based on x86 and Intel processors. However, the mainframe has rebounded, Unix is holding its own, and it looks like x86 dominance won’t signal the end of all competitors.

“The value proposition of non-x86 systems will continue to be performance, high availability, and maintenance,” says Olds.

One of the reasons non-x86 systems remain strong is that their vendors can do things that are much more difficult to accomplish with x86 systems. Non-x86 vendors own all the parts that make up their solutions: the hardware (processors, assist processors, interconnects), the software (OS, firmware, often middleware, and in some cases even applications), and the storage. According to Olds, this gives them the ability to put together highly integrated and optimized systems that can provide a lot of value to customers.

by Drew Robb


Most Promising Technology: Workload-Optimized Systems

Workload-optimized systems, tailored to run particular workloads such as databases and Web serving, are now in the pipeline. The configuration of these systems will vary depending on the particular requirements of the workload.

“Some analytics workloads require lots of memory and fast processors to crunch through millions of simulations, while others might need very large I/O capacity to stream data in and out for real-time analysis,” says Dan Olds, an analyst with Gabriel Consulting Group.

These workload-optimized systems will be easy to install and will focus on performing a single function with great efficiency.

