Scaling IT for the Planet

Creating the Worldwide Computing Utility

January 2002


What if accessing the computing power of the planet was as easy as turning on the tap for a glass of water?

What if you could add computing or storage capacity in minutes - instead of days or weeks? What if you could pay for technology resources the same way you pay for utilities like electricity or water - based on what you use?

Computing as a utility? A team at HP Labs is working to make it happen.

The vision: a world of service-centric computing, where scalable, cost-effective information technology capabilities are delivered, metered, managed and purchased as a service.

A worldwide network

An enormous amount of processing power and data storage will be required to meet future computing demands. Today's data centers contain thousands of servers. But they could be 10 times larger in the future -- as many as 50,000 servers -- and consist of commodity servers and storage connected via a high-speed IP-switched fabric.

At the same time, data centers around the world would be networked so that IT resources are allocated where they're needed most.

That's where HP Labs comes in. Researchers are creating an entirely new model of computing to develop and manage this vast infrastructure.

Planetary computing

They call their model planetary computing. The goal: infrastructure on demand. Infrastructure that's scalable, flexible, economical and always on.

"The key is to provide a shared resource pool that allocates resources to applications on demand," says researcher Rich Friedrich. "If you need more servers, we'll take them from the pool. When you no longer need them, we'll put them back. We think that's an essential element in the future of Internet computing."

Currently, businesses must acquire more processing and storage than they need simply to cover peak times. Amazon.com and other web retailers must ensure they can handle peak demand in November and December, while the IRS has a peak in demand in April. For both entities, the hardware is under-utilized the rest of the time.
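The pooling idea Friedrich describes can be sketched in a few lines. This is a hypothetical illustration, not HP's implementation; the class and method names (ResourcePool, acquire, release) are assumptions made for the example. It shows how the same physical servers could serve a retailer's December peak and a tax service's April peak.

```python
# Toy sketch of a shared server pool (illustrative names, not HP's code):
# services check servers out of a common pool on demand and return them
# when load subsides, so peak capacity is shared rather than duplicated.

class ResourcePool:
    def __init__(self, servers):
        self.free = set(servers)    # idle servers available to any service
        self.allocated = {}         # service name -> set of servers it holds

    def acquire(self, service, count):
        """Move `count` servers from the shared pool to a service."""
        if count > len(self.free):
            raise RuntimeError("pool exhausted")
        taken = {self.free.pop() for _ in range(count)}
        self.allocated.setdefault(service, set()).update(taken)
        return taken

    def release(self, service, servers=None):
        """Return some or all of a service's servers to the shared pool."""
        held = self.allocated.get(service, set())
        giving = set(held) if servers is None else held & set(servers)
        held -= giving
        self.free |= giving

pool = ResourcePool([f"srv{i}" for i in range(8)])
pool.acquire("retailer", 5)      # November/December shopping peak
pool.release("retailer")         # off-season: capacity returns to the pool
pool.acquire("tax-service", 5)   # April peak reuses the same hardware
```

The point of the sketch is the last three lines: neither service had to own five servers year-round.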


Adding or reconfiguring infrastructure capacity is slow and costly. And any regular Web user knows what often happens when a Web site is flooded with unexpected demand. At worst, the site crashes, and at best, there's the "world wide wait."

Infrastructure on demand

Researchers' work to solve these problems was recently incorporated into HP's new Utility Data Center, architecture and software for the first scalable, programmable data center. The goal is to provide automated infrastructure on demand with little or no operator intervention.

"You don't have to call the electric company before you plug in a new refrigerator and say you need another kilowatt of power," says Friedrich. "Adding information technology should be just as easy as plugging in an appliance."

In a programmable data center, the infrastructure is physically wired once, but can be rewired programmatically, to meet the changing needs of customers and services. This includes linking data centers so that resources can be optimized for a region, a nation or around the world. When it's "off-hours" in one area, data centers there could be used more fully by providing services for users in other parts of the world, and vice-versa.
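One way to picture "wired once, rewired programmatically" is as a mapping from servers to virtual network segments that the switch fabric enforces. The sketch below is an assumption-laden simplification (the table and the rewire function are invented for illustration): moving a server to another tier is a table update, not a cabling change.

```python
# Hypothetical sketch of programmatic rewiring: the physical fabric is
# cabled once, and reassigning a server is just an update to a mapping
# that the switched fabric enforces. All names here are illustrative.

wiring = {                      # server -> virtual network segment
    "srv1": "web-tier",
    "srv2": "web-tier",
    "srv3": "db-tier",
    "srv4": "spare",
}

def rewire(server, segment):
    """Logically move a server to another segment; no cables change."""
    wiring[server] = segment

# Off-hours capacity in one region can be repurposed for another
# region's peak simply by updating the table.
rewire("srv4", "web-tier")
```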

The smart data center

Another challenge is managing all these resources, especially as data centers approach 50,000 servers.

"There are only so many gurus around who can keep the infrastructure running," Friedrich says. "As the demand on Internet computing grows, the experts become fewer and farther between."

Sooner or later, infrastructure growth will outstrip the ability of people (even gurus) to operate it cost-effectively.

To solve that problem, researchers are developing a data center control system that will broker between application demand and resource capacity. The system will determine what hardware or software assets are available and will then install, configure, deploy, monitor and assure services on a global scale.

This system will be self-monitoring, self-healing and self-adapting. That is, services and resources will monitor their own health in much the way people do, making changes to the system when demand changes or trouble arises -- calling on experts only when absolutely necessary.
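The brokering behavior described above can be caricatured as a reconciliation loop: compare each service's demand against its capacity and grow or shrink it from a shared pool of spares. The thresholds, names, and the 100-requests-per-server figure below are assumptions for the sake of a runnable example, not details of HP's control system.

```python
# Toy control loop in the spirit of the demand/capacity broker: services
# above their capacity grow from the spare pool; over-provisioned
# services shrink back into it. Figures are illustrative assumptions.

def reconcile(services, spares, per_server=100):
    """One pass of the broker.

    services: name -> {"demand": requests/s, "servers": count}
    spares:   count of idle servers in the shared pool
    Returns the updated spare count.
    """
    for svc in services.values():
        needed = -(-svc["demand"] // per_server)   # ceiling division
        while svc["servers"] < needed and spares > 0:
            svc["servers"] += 1                    # self-adapting: grow
            spares -= 1
        while svc["servers"] > max(needed, 1):
            svc["servers"] -= 1                    # heal back to the pool
            spares += 1
    return spares

services = {"web":   {"demand": 450, "servers": 2},
            "batch": {"demand": 90,  "servers": 4}}
spares = reconcile(services, spares=3)
```

Run repeatedly against live measurements, a loop like this grows "web" to meet its load and returns "batch"'s excess servers to the pool, with no operator in the loop.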


Data that follows you

HP Labs is also addressing the need for a planetary storage system, one that provides anywhere, anytime access to data.

The goal is to allow information to follow people and their devices so that they can get the content and services they need wherever they go. Researchers are developing automatically managed storage systems that would ensure secure data storage, access and delivery.

Smart cooling

Finally, researchers are tackling the difficult problem of managing the energy demands of massive data centers. Increased computing power means more heat and, consequently, higher demand for air conditioning. The old model of cooling every section of the data center equally doesn't work.

Researchers are pioneering more sophisticated numerical, measurement and control techniques that determine the temperature and airflow distribution within a room and provision air-conditioning resources accordingly.
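A minimal caricature of temperature-aware provisioning: instead of cooling every zone equally, set each zone's cooling duty in proportion to how far its measured inlet temperature exceeds a target. The target, gain, and base values below are invented for illustration and are not taken from the cited paper.

```python
# Hedged sketch of zone-proportional cooling (illustrative numbers):
# zones running hotter than the target inlet temperature get more
# cooling; zones at or below it get only a baseline duty.

def cooling_setpoints(zone_temps, target=25.0, gain=0.2, base=0.3):
    """Return a fractional cooling duty (0..1) per zone.

    zone_temps: zone name -> measured inlet temperature (deg C)
    """
    duties = {}
    for zone, temp in zone_temps.items():
        error = max(0.0, temp - target)        # only react above target
        duties[zone] = min(1.0, base + gain * error)
    return duties

duties = cooling_setpoints({"hot-aisle-1": 32.0,
                            "hot-aisle-2": 26.0,
                            "storage":     24.0})
```

The real work cited below uses computational fluid dynamics to model the room; this sketch only conveys the idea of provisioning cooling where the heat actually is.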

For details about this work, see the technical paper, "Computational Fluid Dynamics Modeling of High Compute Density Data Centers to Assure System Inlet Air Specifications." The paper was written by Chandrakant D. Patel, Cullen E. Bash and Christian Belady of HP Labs and Lennart Stahl and Danny Sullivan of Emerson Energy Systems, and was included in the proceedings of IPACK 2001, the Pacific Rim/ASME International Electronic Packaging Technical Conference and Exhibition.
© ASME, 2001


by Jamie Beckett

