HP Labs sees the future in a simpler data center



By Margaret Steen

Just as today’s servers are made up of components with specialized purposes, today’s data centers require people with specialized expertise in many areas: networking, storage and applications, to name a few.

HP is looking to change that with its Converged Infrastructure model. It’s an architecture that considers all elements of computing as an ensemble: power distribution and cooling management; networking, computing and storage components; automation tools; a common management platform; and the people needed to run it.

“The convergence part is being able to look at all of the infrastructure components together, rather than in separate islands that are uncoordinated in their management and usage,” says Dwight Barron, HP Fellow and chief technologist of the HP BladeSystem Division.

When servers and the data centers where they reside are brought online for a specific function, “it leads to extra capacity in some places and under-capacity in others,” Barron says. The Converged Infrastructure model treats “the infrastructure as a pool of resources and templates.”

Pooling infrastructure resources requires creating flexible, modular building blocks for hardware. These include the CPU, memory for temporary storage, disks and communications technology for both inside a server and between servers. It also requires a common, virtualized network fabric that connects servers, switches, and storage and makes it easier to change connectivity and bandwidth allocation from the data center to the network edge. And, finally, it requires a common management platform that extends from infrastructure to applications, across servers, storage and networks.
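
As a rough way to picture the “pool of resources and templates” idea, the hypothetical sketch below models a shared capacity pool and a workload template drawn against it. The class and function names are invented for illustration and are not HP Converged Infrastructure software.

```python
# Illustrative sketch only: a toy model of "a pool of resources and templates."
# Names and fields are invented for illustration, not HP interfaces.
from dataclasses import dataclass

@dataclass
class Pool:
    cpu_cores: int       # aggregate compute available across the data center
    memory_gb: int       # aggregate memory available
    storage_tb: int      # aggregate storage available
    bandwidth_gbps: int  # aggregate fabric bandwidth available

@dataclass
class Template:
    """Describes what a workload needs, independent of any one box."""
    cpu_cores: int
    memory_gb: int
    storage_tb: int
    bandwidth_gbps: int

def provision(pool: Pool, template: Template) -> bool:
    """Carve a workload's share out of the shared pool, if capacity allows."""
    fields = ("cpu_cores", "memory_gb", "storage_tb", "bandwidth_gbps")
    if any(getattr(pool, f) < getattr(template, f) for f in fields):
        return False  # not enough headroom anywhere in the pool
    for f in fields:
        setattr(pool, f, getattr(pool, f) - getattr(template, f))
    return True

pool = Pool(cpu_cores=512, memory_gb=4096, storage_tb=200, bandwidth_gbps=400)
web_tier = Template(cpu_cores=16, memory_gb=64, storage_tb=1, bandwidth_gbps=10)
print(provision(pool, web_tier))  # True: capacity comes from the shared pool,
print(pool.memory_gb)             # not from one pre-built server (4032 left)
```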

Turning vision into reality, one building block at a time

This new way of thinking about computing infrastructure encompasses some technologies that are already on the market, some that are coming soon and some that are still visions in researchers’ heads. But HP Labs plays a critical role in all of them.

Senior Research Scientist Jichuan Chang

“When we build an all-in-one server that’s trying to fit a lot of requirements for different users, we’re providing more than what some users need,” says Jichuan Chang, senior research scientist at HP Labs. “The all-in-one approach is not flexible enough. It leads to inefficiency: either over-provisioning or not meeting users’ requirements.”

The solution is disaggregation: breaking the all-in-one server apart and using its building blocks to create a more flexible infrastructure. One block would handle computing, pairing a CPU with a modest amount of local memory; other blocks would provide storage, communications and other functions.
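
A minimal sketch of that decomposition, with invented types (illustrative only, not an HP product interface): a memory-light workload and a storage-heavy workload get different combinations of blocks instead of the same oversized all-in-one box.

```python
# Illustrative sketch of disaggregation: a logical server assembled from
# independent building blocks rather than one fixed all-in-one configuration.
# These types are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ComputeBlock:
    cpu_cores: int
    local_memory_gb: int  # just enough memory to keep the CPU busy

@dataclass
class StorageBlock:
    capacity_tb: int

@dataclass
class CommsBlock:
    bandwidth_gbps: int

@dataclass
class LogicalServer:
    compute: ComputeBlock
    storage: List[StorageBlock] = field(default_factory=list)
    comms: List[CommsBlock] = field(default_factory=list)

# A web cache needs little storage; a database needs a lot. Each is composed
# from blocks sized to its needs instead of a one-size-fits-all server.
web_cache = LogicalServer(ComputeBlock(cpu_cores=8, local_memory_gb=32),
                          comms=[CommsBlock(bandwidth_gbps=25)])
database = LogicalServer(ComputeBlock(cpu_cores=32, local_memory_gb=256),
                         storage=[StorageBlock(capacity_tb=20) for _ in range(4)],
                         comms=[CommsBlock(bandwidth_gbps=50)])
```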

Collaboration with HP BUs drives innovation

Envisioning the Converged Infrastructure model and then bringing it to market requires collaboration between HP’s business units and HP Labs, collaboration Barron calls “an iterative process.”

“We work very closely with the business units,” says Parthasarathy (Partha) Ranganathan, a distinguished technologist at HP Labs. “They help influence our vision, and we help influence their thinking.”

The business units are usually focused on innovations they can deliver to customers within 18 months. HP Labs, on the other hand, looks at technology that may be up to five years away: “solutions that customers can’t envision because they haven’t been invented yet,” Barron says. Input from the business units helps the Labs stay focused on the problems customers need solved.

Past successes

One example of the partnership between the two groups is power management. Work done by HP Labs and the business units showed that much of the power consumed in the data center was going to cooling the equipment, not to actual computing work. “It was completely off the radar screen,” Barron says.

Customers weren’t asking specifically for equipment that used less power for cooling – they just knew they wanted to save on energy costs. Today, thanks to early research done by HP Labs and later work by HP’s business units, HP’s servers have power management features that greatly reduce the expense of cooling them.

Unified management

The co-creation model that HP Labs and HP’s business units use has also produced M-Channels and M-Brokers, software that helps manage servers. As data centers become increasingly complex, housing more types of servers and applications, it becomes more difficult to manage them efficiently.

“The M-Channel and M-Broker are trying to address this siloed, uncoordinated management,” Ranganathan says. M-Brokers are software agents that can decide policy based on a global view of the infrastructure: for example, what parts of a system should power down to save energy. M-Channels provide channels of communication between the multiple M-Brokers.
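
The article doesn’t spell out the software’s interfaces, so the following is only a rough, hypothetical illustration of the pattern it describes: local agents publish utilization reports over a shared channel, and a broker with the global view decides which nodes can power down. All names are invented.

```python
# Rough illustration of the M-Channel / M-Broker idea described above.
# The names and logic are invented for illustration; this is not HP's code.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Report:
    node: str
    cpu_utilization: float  # 0.0 - 1.0, reported by a local agent

class Channel:
    """A shared communication path that agents publish reports onto."""
    def __init__(self) -> None:
        self.reports: Dict[str, Report] = {}

    def publish(self, report: Report) -> None:
        self.reports[report.node] = report

class PowerBroker:
    """Decides power policy from a global view, rather than per-silo."""
    def __init__(self, channel: Channel, idle_threshold: float = 0.05) -> None:
        self.channel = channel
        self.idle_threshold = idle_threshold

    def nodes_to_power_down(self) -> List[str]:
        return [r.node for r in self.channel.reports.values()
                if r.cpu_utilization < self.idle_threshold]

channel = Channel()
for node, util in [("blade-01", 0.02), ("blade-02", 0.60), ("blade-03", 0.01)]:
    channel.publish(Report(node, util))

print(PowerBroker(channel).nodes_to_power_down())  # ['blade-01', 'blade-03']
```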

“We are a company that has the scope and scale to look at unified management, especially about power, across all these silos,” Ranganathan says.

Upcoming innovations

Just as converged infrastructure involves breaking down barriers between different parts of the data center, memory disaggregation means breaking down the barrier between a server and the memory dedicated to it.

“The thinking is to break everything apart and put it together again,” Ranganathan says. “The new building block is a memory blade.”

The idea behind memory disaggregation is simple: each server has a certain amount of short-term memory, but much of it sits unused most of the time. By keeping inside each machine only the amount of memory it typically uses, and moving the rest into a shared pool that servers draw on when needed, “we can have up to a 50 percent reduction in total memory but without compromising performance,” Ranganathan says.
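
A back-of-the-envelope sketch of why pooling helps follows; the workload numbers are invented, and the “up to 50 percent” figure above is Ranganathan’s, from HP’s own studies, not a result of this toy calculation.

```python
# Toy arithmetic only: why pooling rarely-used memory can shrink total capacity.
# The workload numbers are invented; the ~50 percent figure quoted above comes
# from HP's own studies, not from this sketch.
typical_gb = 24   # memory a server uses most of the time
peak_gb = 96      # memory it occasionally needs
servers = 100

# Conventional provisioning: every server carries its own peak.
per_server_peak = peak_gb * servers

# Disaggregated provisioning: each server keeps its typical working set
# locally, and a shared memory blade covers the peaks, which rarely occur
# on many servers at once (assume at most 20% peak together here).
concurrent_peak_fraction = 0.2
shared_blade = (peak_gb - typical_gb) * servers * concurrent_peak_fraction
disaggregated_total = typical_gb * servers + shared_blade

print(per_server_peak)        # 9600 GB
print(disaggregated_total)    # 2400 + 1440 = 3840 GB
print(1 - disaggregated_total / per_server_peak)  # 0.6 reduction in this toy case
```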

HP Labs’ work is also leading to other innovations that will build on the Converged Infrastructure model – and help customers manage the deluge of data that is the result of digitizing everything from photos to documents, Chang says. Its work on the memristor (short for memory resistor) could lead to a form of memory that merges the current short-term memory with disk storage.

HP Labs is also working on photonics: using optics instead of copper to transfer information. “Moving photons instead of electrons saves a lot of power,” Chang says.

Looking to the future, focused on the present

The key to translating HP Labs’ groundbreaking research into real-world data center innovations is letting the Labs focus on the future while still keeping customers’ needs in mind. Research at the Labs is “aligned with a business that we’re in or are likely to be in,” Barron says, “so we can find a home for it from a product perspective.”

The product teams, meanwhile, have been challenged by management to innovate, not just to create the next generation of products already on the market. And both these perspectives lead to a third challenge: “How do we do that innovation and integrate it into a very high-volume supply chain?” Barron says.

All this sets the stage for HP to meet the next challenges in computing: “a thousand-fold increase in performance compared to where we are right now,” Ranganathan says, “with new challenges on the way.” Those challenges include how to continue the “nice, seamless management experience” of the Converged Infrastructure model and keep reducing power use.