By Casimer DeCusatis, Ph.D.
Energy consumption is quite literally a hot topic for data center designers these days. A recent year-long study on this topic, published by the New York Times, provides an interesting perspective on the problem. According to the article, while most companies are reluctant to share details of their data center designs, a single data center supporting Amazon, Google, Facebook, or many other household names can easily consume as much power as a small city. Data centers worldwide consume around 30 billion watts of electricity, the equivalent of about 30 nuclear power plants. The power required to run servers, networks, and storage is only a fraction of this total; heating and cooling the data center also contribute to overall power consumption, as does redundant power in the form of backup generators and batteries.
And the problem is apparently growing, as data centers scale up at ever-increasing rates and consumers demand applications that are always available, anytime, anywhere. According to data from the 2012 OFC/NFOEC conference, a decade ago there were perhaps 1 or 2 computer facilities in the world that consumed 10 MW of power or more. Today, there are dozens of such data centers, and many warehouse-scale facilities are planned which will easily run into the 60-70 MW range. While networking does not consume a large fraction of this power, at this scale power efficiency in all aspects of the data center design becomes critical. This is why energy efficient networks, especially fiber optic designs, are receiving more attention.
There have also been other reports on high power consumption from some types of cloud data centers. For some users, power consumption is the limiting factor in building out new data centers; clients who require more capacity and have the budget to spend are unable to install new systems because their buildings are maxed out on power consumption.
I agree that we should continuously strive to improve energy efficiency, and I’d like to discuss some options for designing more energy efficient data centers, including greater use of fiber optic links and software-defined networking. But first, in the interest of presenting a balanced view, I’d recommend that after reading the NY Times article, you also read the excellent analysis published by Information Week. Among other things, this article notes that the amount of electricity consumed per unit of computing is actually declining. There’s a nice discussion of PUE, or Power Usage Effectiveness, the ratio of the total power entering a facility to the power consumed by its IT equipment. Modern data centers can achieve a PUE close to unity, a significant improvement over their predecessors from a few decades ago.
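To make the metric concrete, PUE is just a ratio; the numbers below are made up for illustration and don’t describe any particular facility:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A value of 1.0 means every watt drawn from the grid reaches IT gear;
    cooling, lighting, and power-distribution losses push it above 1.0.
    """
    return total_facility_kw / it_load_kw

# Illustrative (hypothetical) numbers:
legacy = pue(total_facility_kw=2000, it_load_kw=1000)   # 2.0 -- older facility
modern = pue(total_facility_kw=1150, it_load_kw=1000)   # 1.15 -- close to unity
```

A facility that halves its overhead power cuts its PUE without touching the IT load, which is why the metric is popular with data center operators.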
These efficiencies are driven, in part, by extensive server virtualization in multi-tenant cloud computing environments, which significantly increases server utilization. For the past decade, the use of virtualization in the data center has been steadily increasing, making its way from high-end mainframe servers into commodity x86-based servers. This reduces the number of physical servers, switches, and storage devices, and lowers overall power consumption. We might expect that network virtualization would yield similar efficiencies, a topic which I’m sure will be discussed in the context of software-defined networking at OFC/NFOEC next year. There is also significant research being done to further improve network energy efficiency by dynamically adjusting data rates in response to demand.
Of course, server consolidation must be handled carefully. As noted in the NY Times article, when combining multiple workloads onto a single server, performance degradation and other inefficiencies can result; in this case, the energy required to complete a task may actually increase. Modern workload consolidation solutions, such as those involving IBM System z enterprise-class systems (http://public.dhe.ibm.com/common/ssi/ecm/en/zsl03170usen/ZSL03170USEN.PDF), are designed to take this into account and provide significant cost savings by placing tens or hundreds of virtual machines into a single server footprint. These systems require massive amounts of I/O for attachment to switches, storage, and other data center resources (hundreds of physical ports and thousands of virtual links). It’s worth noting that over 90% of the I/O on these consolidated platforms is implemented using fiber optic links, supporting a variety of protocols and data rates. Unlike the servers described in the NY Times, these systems won’t melt down inside a heavily loaded data center.
Runaway energy costs can be controlled or avoided by careful planning for the growth of your data center. Recent enhancements to copper and backplane links (such as the IEEE P802.3az standard) have attempted to create more energy efficient interconnects by idling the links during periods of low activity. Some researchers have proposed that running the data center at even slightly higher ambient temperatures can translate into significant savings on cooling. Also, companies like IBM have been offering “green data center” design services for many years. By instrumenting the data center with small wireless sensors that read temperature, relative humidity, and other factors near floor level and closer to the ceiling, a computer model can identify “hot spots” and propose design changes that improve efficiency. Sometimes this can be as simple as repositioning equipment to avoid having fans exhaust hot air into a computer’s inlet, or replacing bulky copper cables with more flexible, lighter weight optical fibers that allow greater airflow through plenum ducts.
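The savings from idling links, as in 802.3az, can be sketched with a simple duty-cycle model; the power figures below are illustrative assumptions, not measurements from any standard or product:

```python
def average_link_power(active_w: float, idle_w: float, utilization: float) -> float:
    """Mean power of a link that is active for `utilization` fraction of
    the time and sits in a low-power idle state the rest of the time."""
    return active_w * utilization + idle_w * (1.0 - utilization)

# A hypothetical link drawing 5 W active and 0.5 W in low-power idle,
# at 20% utilization, averages about 1.4 W versus 5 W always-on.
saving = average_link_power(active_w=5.0, idle_w=0.5, utilization=0.2)
```

Since many data center links spend most of their time lightly loaded, even a modest idle state can cut interconnect power substantially.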
While smaller optical cables can help, there are many other ways that optics can save power in the data center. Moore’s Law doesn’t apply to power and cooling, but there are efficiencies to be had in the optical network. For example, at last year’s OFC/NFOEC conference, there were several talks about energy efficient networking. According to one presentation, the average power consumption of a 10 gigabit per second copper PHY is around 300 mW/Gbps; we can reduce this to perhaps 120 mW/Gbps using an active copper link with power control. Compare this with a fiber optic transceiver (using a short wavelength VCSEL source over multimode fiber), which comes in at only 25 mW/Gbps. Further, the optical links offer other advantages, such as supporting longer distances.
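Taking the cited per-gigabit figures at milliwatt scale (an assumption on my part; a 300 W-per-gigabit PHY would be implausible), a quick sketch shows how the choice of link technology compounds across a rack’s worth of ports:

```python
# Illustrative per-gigabit power for 10 Gb/s link technologies,
# in mW per Gb/s (assumed values based on the figures cited above).
LINK_POWER_MW_PER_GBPS = {
    "copper PHY":    300,
    "active copper": 120,
    "VCSEL optical":  25,
}

def rack_power_watts(technology: str, links: int, gbps_per_link: float = 10.0) -> float:
    """Total interconnect power, in watts, for `links` ports of one technology."""
    return LINK_POWER_MW_PER_GBPS[technology] * gbps_per_link * links / 1000.0

# For 500 links at 10 Gb/s each:
# copper PHY -> 1500 W, VCSEL optics -> 125 W, a 12x difference.
```

At data center scale, that gap is the difference between a few racks of cabling being a rounding error or a line item on the power budget.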
It’s been recognized at OFC/NFOEC that a limiting factor in the design of future exascale supercomputers is the energy consumption per calculation. This research also shows that the energy involved in data transport is significantly larger than the energy involved in computations. As an example, the energy consumed in a single floating point operation is around 0.1 picojoule per bit. The energy involved in transporting that data across printed circuit board distances (3-10 inches) is about 200 times higher, and the energy used in transporting data across distances in a typical data center is about 2000 times higher. Realizing that energy consumption will limit the size of future supercomputers, significant research has been devoted to addressing this problem.
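A back-of-envelope calculation with the multipliers above makes the point concrete; the function and the 64-bit word size are my own illustration, not taken from the cited research:

```python
# Energy baseline and transport multipliers from the figures cited above.
FLOP_PJ_PER_BIT = 0.1          # ~0.1 pJ per bit for a floating point operation
BOARD_MULTIPLIER = 200         # across printed circuit board distances (3-10 in)
DATACENTER_MULTIPLIER = 2000   # across a typical data center

def transport_energy_pj(bits: float, multiplier: int) -> float:
    """Energy in picojoules to move `bits` at the given cost multiplier."""
    return bits * FLOP_PJ_PER_BIT * multiplier

# Moving one 64-bit word across the data center costs about
# 64 * 0.1 * 2000 = ~12800 pJ, versus ~6.4 pJ to compute on it.
```

This is why exascale designs push so hard on locality and on low-energy optical transport: the flops are nearly free compared with moving their operands.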
One example which shows the efficiencies to be gained from proper design is the IBM Leadership Data Center in Raleigh, N.C. This 60,000 square foot smarter data center facility was recently recognized with LEED (Leadership in Energy and Environmental Design) certification for having reduced its power cost by $1.8M per year (over a 50% reduction, illustrating that data center operators can be motivated to reduce power as a cost saving measure). Two independent, redundant power sources in this data center feed a 3 MW electrical plant, which makes use of the latest efficiencies (a two-cell, 1300 ton cooling tower with variable speed fans, and a 1300 ton centrifugal water cooling chiller with variable speed drive) and extensive use of fiber optics for data networking. Further, the data networking design principles which lead to energy efficient optical networks are described in IBM’s architecture for an Open Datacenter Interoperable Network (ODIN).
As our economic growth becomes increasingly dependent on digital services and electronic commerce, we can expect the computing demands on data centers to continue growing, as predicted by the NY Times article. Responsible engineers will strive to balance these demands with protection of the environment, and virtualized optical networks have a role to play in this approach. As optically interconnected hybrid cloud data centers proliferate, they can also provide other efficiencies. For example, the Information Week article notes that using electronic forms conserves paper; there may be other environmental benefits from a digital economy as well. I’m optimistic that data centers can ultimately help the environment while continuing to deliver the benefits we’ve all come to expect. You can learn more about energy efficient optics at OFC/NFOEC 2013, or drop me a line if you’d like to discuss on Twitter @Dr_Casimer.
Disclaimer: Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by IBM.
Casimer DeCusatis, Ph.D.
IBM System Networking, CTO Strategic Alliances
Member, IBM Academy of Technology
Posted: 28 September 2012