
This file is licensed to sergio torres morales (retos@correoinfinitum.com).

License Date: 9-26-2012


© 2005, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (www.ashrae.org). For personal use only.
Additional reproduction, distribution, or transmission in either print or digital form is not permitted without ASHRAE’s prior written permission.

Datacom Equipment Power Trends and Cooling Applications


This publication was prepared in cooperation with TC 9.9, Mission Critical Facilities,
Technology Spaces, and Electronic Equipment.

Any updates/errata to this publication will be posted on the ASHRAE Web site at www.ashrae.org/publicationupdates.


Datacom Equipment Power Trends and Cooling Applications

American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.


ISBN 10: 1-931862-65-6
ISBN 13: 978-1-931862-65-3

Library of Congress Control Number: 2005920599

©2005 American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc.
1791 Tullie Circle, N.E.
Atlanta, GA 30329
www.ashrae.org

All rights reserved.

Printed in the United States of America on 10% post-consumer waste.

ASHRAE has compiled this publication with care, but ASHRAE has not investigated, and
ASHRAE expressly disclaims any duty to investigate, any product, service, process, proce-
dure, design, or the like that may be described herein. The appearance of any technical data
or editorial material in this publication does not constitute endorsement, warranty, or guar-
anty by ASHRAE of any product, service, process, procedure, design, or the like. ASHRAE
does not warrant that the information in the publication is free of errors, and ASHRAE does
not necessarily agree with any statement or opinion in this publication. The entire risk of the
use of any information in this publication is assumed by the user.

No part of this book may be reproduced without permission in writing from ASHRAE,
except by a reviewer who may quote brief passages or reproduce illustrations in a review with
appropriate credit; nor may any part of this book be reproduced, stored in a retrieval system,
or transmitted in any way or by any means—electronic, photocopying, recording, or
other—without permission in writing from ASHRAE. Requests for permission should be
submitted at www.ashrae.org/permissions.

ASHRAE STAFF

SPECIAL PUBLICATIONS
Mildred Geshwiler, Editor
Erin Howard, Associate Editor
Christina Helms, Associate Editor
Michshell Phillips, Secretary

PUBLISHING SERVICES
David Soltis, Manager
Tracy Becker, Graphic Applications Specialist
Jayne Jackson, Production Assistant

PUBLISHER
W. Stephen Comstock


Contents
Acknowledgments vii
Foreword ix
Chapter 1—Introduction 1
Chapter 2—Background 3
Chapter 3—Load Trends and Their Application 15
Chapter 4—Air Cooling of Computer Equipment 29
Chapter 5—Liquid Cooling of Computer Equipment 41
References/Bibliography 51
Introduction to Appendices 55
Appendix A—Collection of Terms from Facilities and IT Industries 57
Appendix B—Additional Trend Chart Information/Data 75
Appendix C—Electronics, Semiconductors, Microprocessors, ITRS 83
Appendix D—Micro-Macro Overview of Datacom Equipment Packaging 87
Index 101


Acknowledgments
Representatives from the following companies participated in producing this
publication:

Alcatel
ANCIS
ATI Technologies
Cisco Systems
Cray Inc.
Data Aire, Inc.
Dell Computers
Department of Defense NSA
DLB Associates Consulting Engineers
EMC Corporation
EYP MCF
Fannie Mae
Freescale Semiconductor
Fujitsu Laboratories of America
Hellmer & Medved
Hewlett Packard
IBM
Intel Corporation
Lawrence Berkeley National Labs
Liebert Corporation
Mallory & Evans, Inc.
Motorola, Inc.
Nortel
Sun Microsystems
Syska & Hennessy Group, Inc.
Taylor Engineering
The Uptime Institute


ASHRAE TC 9.9 wishes to particularly thank the following people:

• Christian Belady, David Copeland, Shlomo Novotny, Gamal Refai-Ahmed, and Robin Steinbrecher for their active participation, including numerous conference calls, writing, and review.
• Dr. Roger Schmidt of IBM for his invaluable participation and sacrifice in
the creation and continual enhancement of this book.
• Mr. Neil Chauhan of DLB Associates Consulting Engineers for the evolu-
tion and production of the multiple drafts that this book went through and the
supporting graphics that set this book apart.
• Mr. Don Beaty of DLB Associates Consulting Engineers for his initial vision
for the book and the relentless determination and effort required to turn that
vision into a reality. His leadership and commitment continue to define the
ongoing success of the TC 9.9 committee.

In addition, ASHRAE TC 9.9 wishes to thank the following people:

Dan Baer, Ken Baker, Dina Birrell, Mike Bishop, Alan Claassen, Howard Cooper,
Tom Currie, Tom Davidson, Brian Durham, Bill French, Dennis Hellmer, Magnus
Herrlin, Mark Hydeman, Charlie Johnson, Christopher Kurkjian, H.S. Liang Landsberg, Andy Morrison, David Moss, Greg Paustch, Dick Pressley, Terry Rodgers, Jeff
Rutt, Melik Sahraoui, Grant Smith, Vali Sorell, Fred Stack, Ben Steinberg, Robin
Steinbrecher, Jeff Trower, William Tschudi, and David Wang.


Foreword
Datacom (data processing and telecommunications) equipment technology is
advancing at a rapid pace, resulting in relatively short product cycles and an
increased frequency of datacom equipment upgrades. Since datacom facilities that
house this equipment, along with their associated HVAC infrastructure, are
composed of components that are typically built to have longer life cycles, any
modern datacom facility design needs the ability to seamlessly accommodate the
multiple datacom equipment deployments it will experience during its lifetime.
Based on the latest information from all the leading datacom equipment manufacturers, Datacom Equipment Power Trends and Cooling Applications, authored by ASHRAE
TC9.9 (Mission Critical Facilities, Technology Spaces, and Electronic Equipment),
provides new and expanded datacom equipment power trend charts to allow the datacom facility designer to more accurately predict the datacom equipment loads the facility can expect to accommodate in the future, as well as ways of applying the trend information to datacom facility designs today.
Also included in this book is an overview of various air and liquid cooling
system options that may be considered to handle the future loads and an invaluable
appendix containing a collection of terms and definitions used by the datacom equip-
ment manufacturers, the facilities operation industry, and the cooling design and
construction industry.


1
Introduction
PURPOSE / OBJECTIVE

The purpose of this book is to discuss datacom (data center and telecommunication) power trends at the equipment level as well as to describe how to use those trends in making critical decisions on infrastructure (e.g., cooling system) requirements and the overall facility.
It is important to consider the fundamental definition of “trend,” which for this
book will be defined as the general direction in which something tends to move. The
trends referenced or presented in this book should not be taken literally but rather
considered as a general indication of both the direction and magnitude of the subject
matter. The intended audience for this document is:

• Planners and managers planning a datacom facility
• Datacom facility design teams planning and designing a datacom facility
• Datacom facility architects and engineers who require insight on datacom equipment energy density and installation planning trends

There is an information gap that needs to be bridged between the information technology (IT) industry (where IT is used in this book, it is interchangeable with the term information services or IS) and the facility design / construction / operation industry. Today’s datacom facilities require a holistic approach, balancing the trade-offs between datacom equipment and facility cooling infrastructure.
It is important for both industries to have a general understanding of areas that are
NOT DIRECTLY their responsibility but DO DIRECTLY impact their budgets, oper-
ation, or performance. This same general understanding is important for equipment
manufacturers, design architects/engineers, contractors, and service technicians.


OVERVIEW OF CHAPTERS
The following is an overview of the chapters of this document:
Chapter 1—Introduction. The introduction states the purpose / objective of the document and provides a brief overview of the upcoming chapters.
Chapter 2—Background. The five key aspects of planning a datacom facility are discussed. In addition, a simple example is provided to show how one might use this process in the planning stage. Finally, the use of power density metrics is discussed.
Chapter 3—Load Trends and Their Application. This chapter contains updated
and extended datacom equipment power trend charts including the historical trends
for power dissipation of various classes of equipment. An overview is provided of
the trend evolution of the various groupings of datacom equipment from the previous
trend chart to the trend chart published in this book. There is also a discussion of
applying the load trend charts when planning the capacity of a new datacom facility
and an introduction on how to provision for that capacity.
Chapter 4—Air Cooling of Computer Equipment. Various configurations of
air cooling of computer equipment are presented. These configurations include cool-
ing equipment outside the room, cooling equipment inside the room but outside the
rack, and cooling equipment physically mounted on the rack.
Chapter 5—Liquid Cooling of Computer Equipment. This chapter provides
an introduction into the reasons behind the re-emergence of liquid cooling as a
consideration and potential solution to higher density loads along with details on the
types of liquid used for enhanced heat transfer.
Appendices. The appendices are a collection of information included to supple-
ment the chapters of this book. Further, the appendices provide information that is
useful for those involved with datacom cooling but is not readily available or
centrally collected. For example, the appendices include cooling-related terms used
in the building design / construction industry and IT industry, which accomplishes
the goal of a centralized, single source as well as emphasizing integration and collab-
oration of the industries.


2
Background
DATACOM FACILITY PLANNING
Architects and engineers will generally provide the environmental infrastruc-
ture according to existing conventions, building codes, and local conditions.
However, they are not trained to be information technology futurists, and given the
volatility of technology, an IT staff would have far more credible insight into IT
requirements for their particular organization, at least for tactical planning cycles.
Nonetheless, the IT staff can provide some insight as to what could happen in
the future, thus providing some guidance in the strategic planning of a datacom facil-
ity in terms of the amount of space required, as well as the environmental impacts
governed by systems of the future.
As the current trends indicate increasing power density loads, there is a concern
over the impact that the increase will have on how to characterize or plan for these
loads, as well as the selection of the cooling system best suited to meet the load. The
most challenging question to answer is “Who really plans the datacom facility?”

• Is it the architect/engineer?
• Is it planned by the IT department based on forecasts of future datacom applications growth?
• Is it planned by the facilities department once they are given the amount and
type of equipment from the IT department?
• Is it the owner/developer of the facility based on financial metrics?
• Is it a joint decision amongst all of the parties listed above?

Unfortunately, for many companies the planning process for the growth of data-
com facilities or the building of new datacom facilities is not a well-documented
process. The purpose of this book is to focus on the power trends of datacom equip-
ment and also briefly outline a process for arriving at the floor space required and,
hopefully, take some of the confusion out of the process.


Each datacom facility is unique and each company utilizes different applica-
tions, thereby resulting in a different set of hardware; thus the personalities of data-
com facilities vary quite dramatically. The space occupied by the hardware of one
specific datacom facility is shown below:

DATACOM FACILITY AREA BREAKDOWN EXAMPLE
(Note: Numbers Can Vary Dramatically)

IT Space                           Non-IT Space
Storage Servers        19.0%       Aisles                     20.0%
Compute Servers        11.0%       Empty (Future Growth)      16.0%
Telecommunications      5.0%       Cooling Equipment          12.0%
Command Area            4.0%       Specialty Rooms             3.5%
Printers                2.0%       Power Distribution          3.0%
Patch Panels            1.0%       Room Supplies               2.0%
                                   Columns                     1.0%
                                   Doorways / Access Ramps     0.5%
Subtotal IT Space      42.0%       Subtotal Non-IT Space      58.0%

Grand Total           100.0%

The key in presenting this breakdown is that there are many components that
make up the space required for a datacom facility. Many times the focus is on the
servers, but a holistic view must be maintained in developing the space required and
must include all the elements.
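As a quick illustration, the subtotals in the example breakdown above can be tallied programmatically; the percentages are taken from the text, and the dictionary layout is simply one convenient way to hold them:

```python
# Example area breakdown from the text, grouped by IT vs. non-IT space.
it_space = {
    "Storage Servers": 19.0,
    "Compute Servers": 11.0,
    "Telecommunications": 5.0,
    "Command Area": 4.0,
    "Printers": 2.0,
    "Patch Panels": 1.0,
}
non_it_space = {
    "Aisles": 20.0,
    "Empty (Future Growth)": 16.0,
    "Cooling Equipment": 12.0,
    "Specialty Rooms": 3.5,
    "Power Distribution": 3.0,
    "Room Supplies": 2.0,
    "Columns": 1.0,
    "Doorways / Access Ramps": 0.5,
}

it_subtotal = sum(it_space.values())          # 42.0%
non_it_subtotal = sum(non_it_space.values())  # 58.0%
print(f"IT space: {it_subtotal:.1f}%, non-IT space: {non_it_subtotal:.1f}%, "
      f"total: {it_subtotal + non_it_subtotal:.1f}%")
```

Tallying the groups this way makes the point of the table concrete: the actual hardware occupies well under half of the floor area.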
The hardware that makes up the datacom facility should not be the initial focus
for planning a datacom facility. Although the hardware physically occupies the
space on the datacom facility floor, the software does all the work. Therefore, the
planning should begin with understanding what are the applications that need to be
accomplished for running the business, both now and in the future. Application
capacity drives hardware acquisition, which, in turn, drives floor space and energy
requirements.
The plan for space in the current datacom facility or in planning a new datacom
facility should consider the following five aspects:
1. Existing applications floor space
2. Performance growth of technology based on footprint
3. Processing capability compared to storage capability


4. Change in applications over time
5. Asset turnover
Each one will now be described briefly. Again, this book is focused on equip-
ment power trends and their impact on the environment housing this equipment and
the resulting infrastructure needed to support this equipment. However, the interre-
lationships of the other elements that go into the plan for datacom facility floor space
need to be understood. The importance of the equipment power trend chart will
become evident as we proceed through the steps in this planning process.

1. Existing Applications Floor Space


Even though a new datacom facility may be planned and one might say that
there is absolutely no relation between the new one being planned and the existing
one still in use, it is still instructive to generate the space allocated by the various
pieces of equipment in the current datacom facility. This can be surprisingly educa-
tional to those planning the next stage of either the new datacom facility or growth
of the existing one.
A simple way to graphically show this is with a pie chart, as shown in Figure 2-1,
for the spatial allocation of the example presented above. One can quickly get a sense
of the proportion of the various elements and their space requirements. Many times
people are surprised by how little space is taken up by the actual hardware (storage serv-
ers, compute servers, and telecom equipment) and how much space appears to be
“white-space” (i.e., facility area required to support the actual hardware).

Figure 2-1 Datacom facility area allocation example.


2. Performance Growth of Technology Based on Footprint


The question asked by the IT manager is “how much performance can I expect
out of the same space over time?” This relates to all elements of the datacom envi-
ronment but is primarily directed to servers and storage. The trends in performance
for the same footprint follow an approximate 25%-35% compound growth rate.
Over long periods of time this may appear as a smooth rate of increase, but at
any one time the older datacom equipment that is replaced with newer equipment
may take a 100% or more jump in performance for the same space occupied by the
older equipment. Datacom facilities that are planned to be in use over 20 years can
use long-term trends in performance to gauge fairly accurately the performance
improvements and how they intersect the roadmap of the company’s plan for the
performance improvements required.
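The compound-growth arithmetic behind these figures can be sketched as follows; the 25%-35% rates are from the text, and the doubling-time formula is the standard one for compound growth:

```python
import math

def performance_multiplier(years: float, cgr: float) -> float:
    """Performance gain in the same footprint after `years` at compound rate `cgr`."""
    return (1.0 + cgr) ** years

# At a 25%-35% compound growth rate, performance in the same footprint
# roughly doubles every 2.3-3.1 years.
for cgr in (0.25, 0.35):
    doubling = math.log(2.0) / math.log(1.0 + cgr)
    print(f"CGR {cgr:.0%}: 10-year gain {performance_multiplier(10, cgr):.1f}x, "
          f"doubling time {doubling:.1f} years")
```

This also shows why individual upgrades look like 100%-or-more jumps: replacing four-year-old equipment at these rates yields roughly a 2.4x to 3.3x step even though the long-term curve looks smooth.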

3. Processing Capability Compared to Storage Capability


This will depend on the applications being run, but the industry uses a standard
rule of thumb that the servers run at 70% of capacity and the storage runs at about
50% capacity, depending upon the storage management strategy. It is important to
note that these factors are workload dependent and will also depend on the specific
applications.

4. Change in Applications over Time


This is probably a difficult one to assess since new applications are being devel-
oped on a continuous basis and it is difficult to know what the new applications are
going to be 10 years in the future. There may be no apparent change in applications
development plans upon initial inspection, but most organizations have a minimum
of 15%-20% growth rate.

5. Asset Turnover
Each IT organization has its own roadmap and a rate of hardware renewal.
Slower turnover means that more floor space will be required to support the growth
in applications that might be required. Faster turnover would allow more computing
power to exist in the current space taken up by older, lower performing equipment.
Of course the newer equipment in general generates more heat and requires
more power for the same footprint, and this is the issue that is being addressed in this
book. This is because the increase in the rate of transactions per watt of energy used
(i.e., greater processing efficiency) is not offsetting the increase in technology
compaction (i.e., more processing capacity for a given packaging) and the result is
more processing power per equipment footprint.

SIMPLE EXAMPLE OF DATACOM EQUIPMENT GROWTH IMPACT ON A FACILITY
This section provides a simple example of the impact of growth in an existing 5,000 ft2 datacom equipment room in a datacom facility. In addition to the datacom equipment itself, the room also houses power distribution units (PDUs) and chilled water CRAC units and has some ancillary space (cross aisles, spare parts storage, etc.). For the purposes of this example, we shall consider two baseline scenarios:

• Scenario 1 – equipment end user runs applications that require a balanced datacom equipment deployment.
• Scenario 2 – equipment end user runs applications that require a compute server intensive deployment.

These two baseline scenarios are summarized in Table 2.1 along with their cooling
load impact:

Note: This breakdown is not intended to encompass every datacom facility since
each facility is unique.

Total current cooling load based on Table 2.1 would be around 125,000 watts
(35 tons) for Scenario 1, which equates to an average of around 25 watts per square
foot (considered over the 5,000 ft2 gross floor area of the datacom equipment room).
For Scenario 2, it would be approximately 175,000 watts (50 tons), which equates
to around 35 watts per square foot.
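The conversions behind these figures can be sketched as follows; 1 ton of refrigeration = 12,000 Btu/h ≈ 3,517 W is a standard conversion, and the loads and floor area are those of the example:

```python
def cooling_metrics(load_watts: float, floor_area_ft2: float) -> tuple[float, float]:
    """Convert a total cooling load to tons of refrigeration and average W/ft2.

    1 ton of refrigeration = 12,000 Btu/h, approximately 3,517 W.
    """
    tons = load_watts / 3517.0
    watts_per_ft2 = load_watts / floor_area_ft2
    return tons, watts_per_ft2

# Scenario 1: 125,000 W over 5,000 ft2 gross floor area -> ~35 tons, 25 W/ft2
print(cooling_metrics(125_000, 5_000))
# Scenario 2: 175,000 W over 5,000 ft2 gross floor area -> ~50 tons, 35 W/ft2
print(cooling_metrics(175_000, 5_000))
```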
Now consider the following potential projections:

• Current workload uses 30%-60% of the hardware capacity.
• Workload (applications) is increasing at a rate of 40%-50% compound growth rate (CGR).
• Workload will exceed current hardware capacity in about one to two years.
• More hardware is needed to sustain the applications installed.
• Company decides to replace 50% of compute servers and 50% of storage servers that are now four years old with newer, more powerful versions that will be capable of meeting the future workload. The new datacom equipment cooling load values are determined from the trend charts in chapter 3.
• Associated power and cooling upgrades are also required to handle the more powerful servers. The space for the additional floor-mounted cooling equipment will be at the expense of some of the ancillary space. It is also assumed that the cooling load watts per square foot value for the cooling/power equipment will increase through extended use to satisfy the increased load.
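The capacity-exhaustion arithmetic in these projections can be sketched with a standard compound-growth formula; the utilization and growth figures below are drawn from the ranges above, and the one-to-two-year horizon corresponds to the upper half of the utilization range:

```python
import math

def years_to_capacity(utilization: float, growth_rate: float) -> float:
    """Years until a workload at `utilization` (fraction of hardware capacity),
    growing at compound rate `growth_rate`, reaches 100% of capacity.

    Solves utilization * (1 + growth_rate) ** years = 1.
    """
    return math.log(1.0 / utilization) / math.log(1.0 + growth_rate)

# With 40%-50% CGR, a workload at 50%-60% of capacity hits the ceiling
# within roughly one to two years:
for util in (0.5, 0.6):
    for growth in (0.4, 0.5):
        print(f"utilization {util:.0%}, growth {growth:.0%}: "
              f"{years_to_capacity(util, growth):.1f} years")
```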

The resulting cooling load breakdown is illustrated in Table 2.2.


The new cooling load, based on the revised table, is now around 200,000 watts
(55 tons) for Scenario 1, which equates to an average of 40 watts per square foot. For
Scenario 2, the cooling load is approximately 300,000 watts (85 tons) or 60 watts per
square foot on average.


Figure 2-2 Summary comparison of current and new environmental metrics of 5,000 ft2 example facility scenarios.

Figure 2-2 provides a graphical summary for the two scenarios, which shows that, although the overall datacom facility sees a relatively small increase in average power density (15 watts per square foot for Scenario 1 and 25 watts per square foot for Scenario 2), the maximum power density for a localized area with the new servers is considerably higher in both scenarios (200 watts per square foot) compared to the older server equipment.
This increased maximum density for the new servers results in the need for care-
ful consideration with regard to the cooling and power distribution to these areas.
Here we have emphasized that planning the floor space required for a datacom
facility involves many aspects, and a holistic view needs to be taken. We have
attempted to address the factors that are relevant in planning the amount of floor
space required for the datacom facility. Once these allocations are made for the vari-
ous pieces of equipment, then the other aspects of the infrastructure need to be
assessed, including power distribution capabilities and cooling capabilities.
As will be seen in the next chapter, the power density of some equipment types
is significant and increasing rapidly. These factors may cause the design team to
potentially examine other cooling options, such as expansion of the facility area to decrease the heat density (which has to be weighed against the cost of the expansion), or possibly utilizing a more effective cooling system, such as water-cooled server equipment.
With recent data showing the power per rack exceeding 20 kW, these trends
need to be closely examined, whereas in the past this was not a concern or an issue.
In today’s environment the equipment power trends have to be one of the top prior-
ities for any datacom facility planning process.

OVERVIEW OF POWER DENSITY DEFINITIONS


The current design and planning of datacom facilities typically uses metrics
based on either historical data or the industry experience of the design professionals.
Until very recently, the most common metric to use was an average watts per square
foot of available datacom equipment power over the technical (or raised floor) area
of the datacom facility.
The watts per square foot metric evolved from circumstances where the occupancy of a given datacom facility was not known, which was the case when many developer-driven speculative facilities were built at the height of the datacom industry boom. As a result, the natural high-level or preliminary approach is to use a broad and averaged metric such as watts per square foot to define the load.
There has been much controversy over the inaccuracies and varying definition
of the watts per square foot metric (Mitchell-Jackson 2001). Accurately determining
heat density in terms of watts per square foot requires a clear understanding of the
actual values and origins of the watts and area being considered.
The watts being considered can include simply the nameplate data or rated load
of each piece of IT equipment. A variation could be to use a de-rating factor to
account for the difference between the rated load and the measured load. Another
variation could be to base the load on the UPS input (assuming all equipment is on
UPS) since this accounts for efficiency losses of UPS units, PDUs (power distribu-
tion units), etc. Yet another variation could be to also include the load for support
equipment such as HVAC systems, although this is a value that is driven more by util-
ity companies that are concerned with total power to the building.
The area can vary simply by considering net versus gross areas of the datacom
equipment room, but there are many other variations as well. In cases where the foot-
print of the building is used as a guideline, the types of mechanical and power deliv-
ery systems have a profound impact on the actual building footprint. Specific
components such as chillers or generators can be located either inside or outside the
building depending on the emphases and preferences of the stakeholders and the
constraints of the site and/or local ordinances.
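As a rough illustration of how much these definitional choices matter, the following sketch computes watts per square foot under three of the watt definitions described above over the same floor area. All numbers (floor area, nameplate total, de-rating factor, UPS chain efficiency) are hypothetical, chosen only to show the spread:

```python
# Hypothetical sketch: the same facility yields very different
# watts-per-square-foot figures depending on which "watts" definition is used.

RAISED_FLOOR_AREA_FT2 = 10_000            # technical (raised floor) area

nameplate_load_w = 1_200_000              # sum of IT equipment nameplate ratings
derating_factor = 0.65                    # assumed ratio of measured to rated load
ups_chain_efficiency = 0.92               # assumed UPS/PDU chain efficiency

measured_load_w = nameplate_load_w * derating_factor
ups_input_w = measured_load_w / ups_chain_efficiency  # includes UPS/PDU losses

for label, watts in [("nameplate", nameplate_load_w),
                     ("measured (de-rated)", measured_load_w),
                     ("UPS input", ups_input_w)]:
    print(f"{label}: {watts / RAISED_FLOOR_AREA_FT2:.1f} W/ft^2")
```

With these assumed values, the nameplate-based figure comes out more than 50% higher than the measured figure before any support loads such as HVAC are even added, which is one reason the metric has drawn controversy.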
Also, since the power trend charts point to density loads greater than any yet
experienced in the field, little established information is available regarding what
the preliminary watts per square foot should be for those loads.


As a result, some are pushing for the more precise kW per rack metric. The kW
per rack metric is based on approximating the load per rack and then estimating the
population of racks within the facility to obtain an overall load. Although there is
some logic to kW per rack, since it more accurately defines a specific heat load over
a given footprint (although this footprint has been known to vary in size as well),
obstacles remain in establishing the actual value(s).
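The arithmetic behind the kW per rack metric is straightforward; the obstacles lie in the input estimates. A minimal sketch, where the per-rack loads and rack counts are assumed values, not data from the book:

```python
# Hypothetical sketch of the kW-per-rack approach: estimate a load per rack
# for each equipment class, estimate the rack population, and sum.

load_per_rack_kw = {"compute": 12.0, "storage": 6.0, "network": 4.0}  # assumed
rack_count = {"compute": 40, "storage": 20, "network": 10}            # assumed

total_kw = sum(load_per_rack_kw[k] * rack_count[k] for k in rack_count)
print(total_kw)  # 640.0
```

Every input here is an estimate made before the rack configurations are finalized, which is exactly the sequencing problem described next.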
The first challenge to overcome is the inherent sequence of events. Often at proj-
ect inception (especially if it is a new site, new room, or major renovation) the data-
com computer equipment has not been finalized and certainly the rack configuration
remains an unknown. Therefore, the rack configuration (i.e., the equipment type and
quantity within a given rack) is estimated in order to establish a load.
Second, equipment nameplate data are often the only information provided by
manufacturers from which to establish the cooling load, and using this method
essentially equates datacom equipment power load with the heat dissipation of that
particular piece of datacom equipment. However, this is not as accurate as first
perceived, since datacom equipment manufacturers’ nameplate data are published
with a focus on regulatory safety, not heat dissipation. To overcome this
discrepancy, a standard thermal report format was introduced in ASHRAE’s
Thermal Guidelines for Data Processing Environments (ASHRAE TC 9.9 2004); in
conformance with the guidelines set forth in that publication, datacom equipment
manufacturers are just beginning to publish meaningful heat release data for their
equipment, allowing for a more accurate load assessment.
Both the watts per square foot and the kW per rack metrics are used to calculate
a load at a point in time, but only when the values are used in conjunction with the
datacom equipment power trend charts can we begin to understand and predict how
that load could change for future datacom equipment deployments across the life
cycle of the facility.

IT AND FACILITY INDUSTRY COLLABORATION


There is a critical need to increase the collaboration between the semiconductor/
IT industry and the facility building design / construction industry.
Historically, the semiconductor industry and the IT industry have collaborated
on matters including the power and cooling integral to the equipment packaging.
Similarly, the facility industry (including building design and construction) has
historically seen collaboration among the design community, the construction
community, and building product manufacturers, including on power and cooling.
However, the IT and facilities departments within a given organization are often
separate (sometimes even reporting to different divisions of the company). IT and
facilities have limited communication or collaboration channels within many
organizations, and within the overall industry as well. The result of these limited
channels is the risk of one department negatively impacting the other by making
independent decisions.


As an example of the noncollaborative process, consider the following project
approach to the introduction of a higher density load through the deployment of
blade server equipment. The blade server equipment is the result of technology
compaction, which allows for a greater processing density over the same equipment
volume. The greater processing density also results in a greater power density and
a greater heat density.

• Step 1 – The IT department determines the need to procure and deploy blade
servers, which represent a technology they have never used before. They
interact with the datacom equipment manufacturers and select a manufacturer
and product.
• Step 2 – The IT department obtains preliminary pricing from the manufac-
turer and submits for funding. Little or no consideration is given at this time
for additional deployment costs to augment the support or infrastructure ser-
vices (i.e., cooling). Management approves the pricing for the IT equipment
after going through the cost benefit metrics as a part of their approval process.
• Step 3 – The datacom equipment is procured and the facilities department is
notified that new equipment is coming and the datacom equipment room must
be modified to accommodate the new deployment.
• Step 4 – The facilities department discovers the datacom equipment loads are
far beyond anything they have ever cooled before. Because their experience to
date is of projected loads not being realized, their first reaction is skepticism,
and the published loads are declared grossly overstated.
• Step 5 – The facilities department asks their counterparts in other firms and
discovers that these incredible loads could indeed be real.
• Step 6 – The facilities department hires a mechanical consulting engineer and
assigns them the task of “figuring out how to cool this.” No budget for this
scope was assigned previously, and management is blindsided by an additional
cost that was not considered in their earlier metrics. Compounding the
difficulty of accomplishing the actual cooling is the fact that only minimal
financial resources are available to accomplish it.

A critical focus for ASHRAE Technical Committee TC 9.9 (Mission Critical
Facilities, Technology Spaces and Electronic Equipment) is to not only provide
engineering information to support the overall industry but to have that information
reach BOTH the facilities and IT industries.

IT INDUSTRY BACKGROUND
The IT industry continues to respond to client demand with a focus on more
speed, more data storage, more bandwidth, higher density, smaller footprint and
volume, greater portability, more openness, and lower cost.
The typical life cycle for a facility’s infrastructure (e.g., air handlers, pumps,
and chillers) can be 10 to 25 years, while the datacom equipment it serves is an
order of magnitude less. Further, the building itself (e.g., steel and concrete,
bricks and mortar) can have a life cycle well beyond 25 years.
A critical challenge is to initially plan and design both new construction and
renovation projects so that the investment in the building and its infrastructure is
fully realized and they do not become prematurely obsolete.
Datacom equipment power trends over the past 10 years have been on a path of
rapid increase. There has also been a trend toward equipment compaction, which
compounds the increase in load density (watts per square foot or watts per rack):
while power consumption is increasing, the focus on technology compaction is
causing the power per equipment footprint to increase at an even more rapid rate.


3
Load Trends and Their Application
INTRODUCTION

When appropriately applied, the “Datacom Equipment Power Trend Chart” can
be a powerful tool in considering what the future loads might be in a facility or space.
Future load is a critical component to planning, designing, constructing, and oper-
ating facilities to avoid ineffective expenditures, premature obsolescence, stranded
cost or assets, etc.
As stated in Chapter 1, it is important to consider the fundamental definition of
“trend,” which is the general direction in which something tends to move. The trends
referenced or presented in this book should not be taken literally but rather consid-
ered as a general indication of both the direction and magnitude of the subject matter.
Further, predicting future needs/loads is difficult and inherently speculative.
Although not precise, using trends to predict future needs/loads is typically far
more effective than the shortsighted approach of considering only the current
needs/loads.
The next section provides an overview of the original trend chart created by the
Thermal Management Consortium and published by the Uptime Institute (2000).
The new “Datacom Equipment Power Trend Chart” in this book is the result of direct
input from essentially the same representative Thermal Management Consortium
companies (often the very same individuals) that produced the previous trend chart,
combined with more recent information obtained since that chart’s publication in
2000.
Over 20 datacom manufacturers were included in formulating this new trend
chart. Extensive interactions and iterations occurred for more than six months in
order to gain reasonable understanding and consensus among the representatives of
datacom manufacturers. Some of the features of the trend chart that were reevaluated
are as follows:


• The original trend chart was compared to the actual equipment that had been
shipped since the original publication of the chart. This review indicated that
there were servers shipped that exceeded the values predicted in the chart (an
indication that the published trends did not overstate the server loads).
• The individual trend lines or bands from the original trend chart were
reviewed for current relevance. The original trend lines were:
• Communication equipment (frames)
• Servers and disk storage systems
• Workstations (stand-alone)
• Tape storage systems
• The trend line assessment initially determined that the individual trends of
servers and disk storage systems were different and should be separated. Ulti-
mately this evolved further into changing the “Servers and Disk Storage Sys-
tems” to the following three categories:
• Storage servers
• Compute servers – 1U, blades, custom
• Compute servers – 2U and greater
• The intent of the trends is to characterize the actual heat rejection of fully
configured equipment, and so by default they can all be called high density. As
with the servers described above, for the trend originally called
“communication equipment (frames),” not all of the high-density
communication equipment fits within the one trend, and so it needed to be
split. However, unlike the server groupings, the communication equipment
trends cannot easily be identified by type of equipment. As a result, the two
communication equipment trends that are included are generically called:
• Communication equipment – extreme density
• Communication equipment – high density
• The International Technology Roadmap for Semiconductors (ITRS) publishes
trends at the semiconductor level, and those values, trends, and projections
were considered during the “Datacom Equipment Power Trend Chart” assess-
ment.

The evolution of the power trend chart contained many steps. This chapter
graphically takes the reader through those steps and arrives at a new “Datacom
Equipment Power Trend Chart.” It will also provide some description of the issues
behind the application of the new trend chart.
Appendix B provides additional formats for the trend information, such as
spreadsheet form and metric units. Versions are also provided in which each trend
is shown as a line rather than a band and in which a linear y-axis scale is substituted
for the logarithmic scale.


DEFINITION OF WATTS PER EQUIPMENT SQUARE FOOT METRIC


The watts considered for these new trend charts are the same as for the original
trend chart: the actual measured watts from a fully configured rack of the specific
equipment type indicated by the trend.
Obtaining the actual measured load for the various pieces of equipment and
associated configurations has never been easy. The ASHRAE book Thermal Guide-
lines for Data Processing Environments (ASHRAE TC 9.9 2004) provides the
industry with a more accurate load template with its sample “Thermal Report.” This
report requires the manufacturers to provide typical heat release values for mini-
mum, typical, and fully configured equipment.
The “Thermal Report” facilitates obtaining the equipment load information
needed for trend charts. Equipment product nameplate values imply higher levels of
power consumption and heat dissipation than will actually occur in the first year of
product shipment. This is because many manufacturers install larger power supplies
in the equipment than are initially required to achieve power supply standardization
across multiple product lines or to anticipate future product enhancements or feature
upgrades.
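The kind of data the thermal report makes available can be sketched as follows; the field names and numbers here are illustrative inventions, not the actual ASHRAE report format:

```python
# Illustrative sketch (not the actual ASHRAE report format): heat release for
# minimum, typical, and fully configured systems vs. the nameplate rating.

from dataclasses import dataclass

@dataclass
class ThermalReportEntry:
    nameplate_w: int   # label rating, driven by regulatory safety
    minimum_w: int     # heat release, minimum configuration
    typical_w: int     # heat release, typical configuration
    full_w: int        # heat release, fully configured

server = ThermalReportEntry(nameplate_w=2000, minimum_w=900,
                            typical_w=1300, full_w=1600)

# Sizing cooling from the nameplate instead of the fully configured heat
# release would overstate the load by this factor:
print(server.nameplate_w / server.full_w)  # 1.25
```

The gap between the nameplate and even the fully configured heat release is exactly the oversized-power-supply effect described above.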
The trend chart and tabulated data deal with power densities as opposed to total
power input. Therefore, we need to understand the definition of the “equipment
square foot” term since that value makes up the denominator of the power density
value.
For the majority of equipment types, the equipment square foot definition
represents the width × depth dimension of the packaging. However, for equipment
mounted within a two-post rack framework, the width considered in the calculation
of the equipment square footage is the sum of the total width of the packaging AND
the width of the two posts on either side of the packaging in the installed
configuration.
Typical post widths are around 1.25 in. and therefore can add around 2.5 in. to
the width of the datacom equipment packaging that the two-post racks house. Figure
3-1 details the length and width dimensions used for the different types of equipment.
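A sketch of that footprint calculation, with illustrative (not standardized) dimensions:

```python
# Equipment square footage as described above: width x depth for most
# equipment; for two-post-rack-mounted equipment, the two post widths
# (about 1.25 in. each) are added to the packaging width.

def equipment_area_ft2(width_in, depth_in, two_post=False, post_width_in=1.25):
    """Return the equipment footprint in square feet."""
    if two_post:
        width_in += 2 * post_width_in       # one post on either side
    return (width_in * depth_in) / 144.0    # 144 in^2 per ft^2

# A 24 in. x 36 in. cabinet:
print(round(equipment_area_ft2(24, 36), 2))                    # 6.0
# A 17.5 in. wide, 12 in. deep shelf in a two-post rack:
print(round(equipment_area_ft2(17.5, 12, two_post=True), 2))   # 1.67
```

This area is the denominator of the watts-per-equipment-square-foot values plotted on the trend charts.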

THE THERMAL MANAGEMENT CONSORTIUM AND THE UPTIME INSTITUTE TREND CHART
In 2000, the Thermal Management Consortium (composed of major equipment
manufacturers) published through the Uptime Institute a trend chart showing both
the power density of datacom equipment since 1992 and then projecting that trend
through the year 2010 (Figure 3-2) (The Uptime Institute 2000).
This chart has been widely referenced in publications and presentations.
However, in some cases it has been misapplied. The chart is based on maximum
“measured load” as defined in the Uptime document. Some general comments on
the chart are as follows:


Figure 3-1 Graphical representation of width × depth measurements used for
equipment ft² definitions.

Figure 3-2 Uptime Institute power trend chart.


• The data shown in the product trend chart provide a general overview of the
actual power consumed and the actual heat dissipated by data processing and
telecommunications equipment. These trends reflect data collected from hard-
ware manufacturers for many products.
• The data represent the most probable level of power consumption assuming a
fully configured system in the year the product was first shipped. It was also
intended that the trend lines capture those equipment categories that dissipate
the most power, but in general most of the equipment in a specific class
should fall within the bands shown.
• Finally, the intent of the trends is that they are to be used as a forecasting and
planning tool in providing guidance for the future implementation of the dif-
ferent types of hardware.

Not all products will fall within the trend lines on the chart at every point in
time. It is not the intent to show or compare individual products with the trend lines.
However, it is the intent that most equipment will fall within the parameters given
and therefore this book provides valuable planning guidance for the design and oper-
ation of future data processing and telecommunication spaces.

TREND CHART EVOLUTION


The starting point for this chapter is the previous trend chart (Figure 3-3), which
shows the four trend lines for the groupings initially defined by the Thermal
Management Consortium and published by the Uptime Institute in 2000 (The
Uptime Institute 2000). This chart has been modified from the original chart in that
the “watts/m2” (right-hand “Y” axis) has been eliminated for clarity and to aid in
easily understanding the development of the new trend chart.
The trend lines for the previous trend chart were drawn as bands with a specific
thickness to cover a range of values at any given point. This feature will be carried
over for the updated trend lines for the same reasons. The four trend lines shown in
the previous trend chart will each be considered in turn and the changes described
below.
Similarly, as was the case with the previous trend chart, the new chart will also
be based on maximum “measured load,” which is based on maximum configured
racks or equipment cabinets. Therefore, the loads represent the equipment with the
highest load in its class. If there are fewer processors, memory, or I/O installed within
the system compared to a fully configured system, then the maximum values indi-
cated by the trend lines would be decreased.

TAPE STORAGE / STAND-ALONE WORKSTATIONS


The two groupings with the lowest values in the previous trend chart do not
change; their values are simply extended out to the new projection date of 2014
based on the latest information available. Figure 3-4 shows these trend lines with
the new extended projections.


Figure 3-3 Previous power trend chart.

Figure 3-4 New power trend chart: workstations and tape storage projections.


SERVERS AND DISK STORAGE SYSTEMS


This single grouping in the previous version of the trend chart underwent the
most change. The compute servers were increasing at a faster rate than shown in the
original trend chart (Figure 3-3), and the storage systems were not increasing as fast
as shown in the original trend chart. The two trends were separated as shown in
Figure 3-5.
Upon further study, it was determined that the compute servers also had a
noticeable shift in trend when comparing compute servers that were 1U, blade serv-
ers, or custom servers versus more conventional compute servers that were 2U or
greater. The trend shift, as shown on Figure 3-6, began around 2001, predominantly
driven by the introduction of the blade server technology.
Finally, these three trend lines that were spawned from the previous trend line
for all servers were extended to 2014 and this can be seen in Figure 3-7.

COMMUNICATION EQUIPMENT
As mentioned earlier, the previous trend chart accounted for the measured load
at maximum configuration. For the communication equipment trend line, this repre-
sented the largest density values for all groups. However, current studies reveal that
the communication equipment actually has two distinct groupings.
The extreme density communication equipment does indeed closely follow the
trend line that was shown in the previous chart, but the trend is representative of the
most powerful communications equipment available. More recently, the
communication equipment technology has resulted in a distinct grouping (labeled
in the new trend chart as “Communication—High Density”), introduced around
2000, that has significantly lower trend values; this new grouping is indicated in
Figure 3-8.

Figure 3-5 New power trend chart: compute and storage servers split.

Figure 3-6 New power trend chart: compute servers’ second split.

Figure 3-7 New power trend chart: compute and storage server projection.
As before, the groupings are all projected to 2014, and for the communication
equipment trend lines, Figure 3-9 shows that particular extended projection.

ASHRAE UPDATED AND EXPANDED POWER TREND CHART


Figure 3-10 combines all of the seven new trend lines together for the new and
expanded power trend chart. As before, the new trend chart expresses the loads in
terms of heat load in watts per square foot of equipment footprint, and, based on the
trend data and projections, the trend lines were plotted in the seven logical equip-
ment groups listed below:

• Communication – extreme density
• Communication – high density
• Compute servers – 1U, blade, and custom
• Compute servers – 2U and greater
• Storage servers
• Stand-alone workstations
• Tape storage systems

Figure 3-8 New power trend chart: communication high-density equipment
trend introduced.


Figure 3-9 New power trend chart: communication extreme and high-density
equipment projection.

Figure 3-10 New ASHRAE updated and expanded power trend chart.


Additional information related to the new trend chart can be found in Appendix
B, including SI versions of the trend chart, tabular data extracted from the trend
chart, and nonlogarithmic y-axis versions.

PRODUCT CYCLE VS. BUILDING LIFE CYCLE


Within the IT industry, it is common for computer equipment to have very short
product cycle times, typically ranging from one to three years. A product cycle is
defined as the time period between hardware upgrades, and many driving factors
influence this timeline. These factors can include advancements in hardware
technology, such as gains in processing speed and reliability that result in greater
compaction, or the cycle can be driven by changes in application requirements,
such as software enhancements.
The short product cycle is usually in conflict with the life cycle of the environ-
ment in which the computer equipment is housed. The datacom facility along with
all of the associated infrastructure (mechanical cooling and electrical power distri-
bution equipment) has a much longer “product cycle” of around 10 to 25 years.
The gap between the life cycle of the datacom facility and the product cycle of
the computer equipment that it houses may result in the building having to endure
multiple datacom computer equipment product cycle changes during its existence.
The design of a datacom facility must take into account the impact of the future prod-
uct cycles and build in the necessary provisions that may be required for them.

PREDICTING FUTURE LOADS


The initial or Day 1 power and cooling load that a datacom facility may encoun-
ter could be vastly different from the final or ultimate power and cooling load that
the facility will be required to support. The challenges lie in how to predict what the
ultimate load is going to be and how to provision for it.
Current Day 1 determination of the load is typically based on common metrics
such as watts per square foot or watts per cabinet, often determined from
quantitative resources such as historical data or by other calculation methods.
However, projecting how the loads will increase over the typical datacom facility
life cycle is not as simple, and the trend charts in the previous sections of this
chapter can aid in that projection.
In order to assist with the prediction of the future or ultimate load, the previous
sections in this chapter provided updated and expanded trend charts indicating the
projected loads up to 2014 for various groups of datacom computer equipment.
As mentioned before, many of the individuals and companies consulted on the
previous trend chart published in 2000 (Figure 3-2) have provided input into the
new trend chart. Also, the TC 9.9 committee includes among its members extensive
representation from almost every major computer equipment manufacturer,
including IBM, HP, Intel, Dell, Sun, and Cisco, as well as facility design engineers.


This combination of resources has a unique insight into how the power
consumption of computer equipment will evolve, and the new trend chart is the
culmination of quantifying that insight into a tangible tool that can be used by the
industry. It was that insight that allowed for the evolution of the previous trend chart
to expand the groupings of the various types of equipment to result in seven trend
lines instead of the previously considered four.
By having an understanding of the group of equipment (or multiple groups of
equipment) that is to be housed within a given facility, the chart can be used to
quickly ascertain the increase in power density to that grouping over a given product
cycle timeframe.
For example, consider the extreme density communication equipment group:
the trend chart indicates a density of around 5,000 watts per equipment square foot
at the present time (2004). In three years’ time, the projected value increases to
7,000 watts per equipment square foot (40% higher); in five years’ time, the
predicted value is around 8,000 (60% higher); and in ten years’ time, the value is
predicted to be 10,500 (110% higher).
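The percentage figures quoted in this example follow directly from the chart values; a quick sketch of the arithmetic:

```python
# Growth percentages for the extreme density communication equipment trend,
# using the chart values quoted in the text.

base_2004 = 5_000                               # W per equipment ft^2 in 2004
projected = {3: 7_000, 5: 8_000, 10: 10_500}    # years out -> W/ft^2

for years, watts in projected.items():
    pct = (watts - base_2004) / base_2004 * 100
    print(f"+{years} yr: {watts} W/ft^2 ({pct:.0f}% higher)")
```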
For a datacom facility predominantly made up of extreme density communications
equipment, this means that a holistic design would need to incorporate provisions
to accommodate the load increasing by more than a factor of 2 over ten years, as
well as phased upgrades representing load increases of around 50% every three to
five years.

PROVISIONING FOR FUTURE LOADS


The term provisioning as used in this section refers to planning and allocating
resources (financial, spatial, etc.) to accommodate changes that may be required in
the future. Provisioning could result in spatial considerations, such as providing
additional floor area to accommodate the delivery and installation of additional
equipment, or provisioning could have a more direct impact on the current design,
such as oversizing distribution infrastructure (e.g., pipe sizes, ductwork, etc.) to be
able to handle future capacity.
Provisioning for future loads depends somewhat on the datacom owner/tenant
and how much emphasis they place (i.e., how much they are willing to invest) on
the ability to seamlessly and expeditiously upgrade their computer equipment in
the future. That emphasis can determine whether a product cycle upgrade is as
simple as plug and play or as complex and potentially disruptive as a full-blown
construction project.
For those projects where a heavy emphasis is placed on product cycle upgrades
being deployed with little or no disruption, it may be prudent to design the power and
cooling distribution infrastructure to cater to the anticipated future or ultimate load
during the initial or Day 1 construction. This would include any infrastructure that
would be physically routed adjacent to, above, or below computer equipment.
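As an illustration of why sizing the Day 1 distribution infrastructure for the ultimate load can be prudent, the sketch below compares the chilled-water pipe size implied by a Day 1 load and a hypothetical ultimate load. The loads, delta-T, and velocity limit are assumed example values, not figures from this book:

```python
import math

# Chilled-water flow and minimum pipe size for a Day 1 load versus a
# projected ultimate load. Delta-T and the velocity limit are assumed
# design values for illustration only.

RHO = 1000.0      # water density, kg/m^3
CP = 4.19         # specific heat of water, kJ/(kg*K)
DELTA_T = 6.0     # assumed chilled-water temperature rise, K
V_MAX = 2.5       # assumed design velocity limit, m/s

def pipe_diameter_mm(load_kw):
    """Minimum inner diameter to carry `load_kw` at the velocity limit."""
    mass_flow = load_kw / (CP * DELTA_T)          # kg/s
    vol_flow = mass_flow / RHO                    # m^3/s
    area = vol_flow / V_MAX                       # m^2
    return math.sqrt(4 * area / math.pi) * 1000   # mm

print(f"Day 1 (500 kW):     {pipe_diameter_mm(500):.0f} mm")
print(f"Ultimate (1200 kW): {pipe_diameter_mm(1200):.0f} mm")
```

Because diameter scales with the square root of flow, designing the routed piping for the ultimate load on Day 1 avoids replacing mains that run adjacent to, above, or below operating computer equipment.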

The Day 1 capacity of the datacom facility does not necessarily need to include
all of the power and cooling equipment required for the ultimate capacity, although
good design practice should make provisions for future augmentation. Those provi-
sions should include:
1. System growth components (e.g., isolation valves, additional taps, power taps).
2. Allocating the necessary spatial provisions to accommodate the future
equipment, with some consideration of how that equipment would be brought into
the facility.


4
Air Cooling of
Computer Equipment
INTRODUCTION
As load densities increase, it becomes increasingly difficult to cool equipment,
especially when using air. The trends and predesign phase load calculation methods
described in the earlier chapters provide insight to establish the design criteria for
the load the facility will most likely be required to support today and in the future.
This information, combined with space geometry and other attributes, determines
the economics of air cooling versus liquid cooling versus a combination of the two.
The initial and final load densities directly affect both the economic comparison of
air- and liquid-cooled solutions and the determination of the optimum choice
between them. The following sections describe the basic types of air cooling
systems.
The cooling systems presented are limited to those within the datacom room; it
is not the intent to present options for central plant equipment (e.g., chillers, dry
coolers, etc.). The descriptions are not intended to be comprehensive but to provide
a sense of some of the choices.
Knowledge of these choices allows us to understand the provisions required for
a particular cooling system. These provisions are sometimes overlooked at the early
stages when considering high density load deployment but can have a significant
impact on the allocation of resources (financial, spatial, etc.).
The cooling systems presented in this chapter and in chapter 5 are categorized
into air-cooled and liquid-cooled systems. For the purposes of this book, the defi-
nitions of these categories are:

• Air Cooling – Conditioned air is supplied to the inlets of the rack/cabinet for
convection cooling of the heat rejected by the components of the electronic
equipment within the rack. It is understood that within the rack, the transport
of heat from the actual source component (e.g., CPU) within the rack itself
can be either liquid or air based, but the heat rejection media from the rack to
the terminal cooling device outside of the rack is air.

• Liquid Cooling – Conditioned liquid (e.g., water, etc., and usually above dew
point) is channeled to the actual heat-producing electronic equipment compo-
nents and used to transport heat from that component where it is rejected via a
heat exchanger (air to liquid or liquid to liquid) or extended to the cooling ter-
minal device outside of the rack.

AIR COOLING OVERVIEW


Air cooling is the most common source of cooling for electronic equipment
within datacom rooms. Chilled air is delivered to the air intakes of the electronic
equipment through underfloor, overhead, or local air distribution systems. While
each of the methods outlined in this section has benefits, it is left to the user to eval-
uate the limitations of each as well as the level of redundancy they offer for continued
operation.
Current industry guidelines recommend that electronic equipment be deployed
in a hot-aisle/cold-aisle configuration (as illustrated in Figure 4-1) (ASHRAE TC
9.9 2004). On each side of the cold aisle, electronic equipment is placed with the
intakes (fronts) facing the cold aisle. The chilled air is drawn into the intake side of
the electronic equipment and is exhausted from the rear of the equipment into the hot
aisle.
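The airflow a rack must draw through its intakes follows directly from its heat load via the sensible heat equation q = m·cp·ΔT. A minimal sketch, with an assumed rack power and air temperature rise (these are illustrative values, not recommendations from this book):

```python
# Airflow needed to carry a rack's heat load at a given air temperature
# rise, from q = mass_flow * cp * deltaT. Rack power and the front-to-rear
# temperature rise are assumed example values.

RHO_AIR = 1.2          # air density, kg/m^3 (near sea level)
CP_AIR = 1.005         # specific heat of air, kJ/(kg*K)
M3S_TO_CFM = 2118.88   # m^3/s to cubic feet per minute

def rack_airflow_cfm(power_kw, delta_t_k):
    """Volumetric airflow required to remove `power_kw` at a rise of `delta_t_k`."""
    mass_flow = power_kw / (CP_AIR * delta_t_k)   # kg/s
    return mass_flow / RHO_AIR * M3S_TO_CFM

# A 10 kW rack with an assumed 15 K rise front to rear:
print(f"{rack_airflow_cfm(10, 15):.0f} CFM")
```

The linear scaling of airflow with power is what makes the density trends of the earlier chapters so demanding on the air distribution system.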
It is important to note that even though the hot-aisle/cold-aisle configuration is
suitable for most deployments, some situations may not benefit from this approach,
particularly those involving specific equipment that was not designed to operate in
such an environment. In addition, for methods using underfloor distribution
systems, cable cutouts or openings may cause undesirable leakage into the hot aisle.

Figure 4-1 Hot-aisle/cold-aisle cooling principle.


UNDERFLOOR DISTRIBUTION
In an underfloor distribution system, chilled air is distributed via a raised floor
plenum and is introduced into the room through perforated floor tiles (Figure 4-2)
and other openings in the raised floor (e.g., cable cutouts).
The underfloor distribution system provides flexibility in the configuration of
the computer equipment above the raised floor. In theory, if the floor fluid dynamics
are set up properly, chilled air can be delivered to any location within the room
simply by replacing a solid floor tile with a perforated tile.
In practice, pressure variations in the raised floor plenum can create a non-
uniform distribution of airflow through the perforated floor tiles, in turn causing
facility hot spots. The various factors that influence the airflow distribution (i.e.,
raised floor height, open area of floor grilles, etc.) are well documented in a paper
by Patankar and Karki (2004).
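The sensitivity of tile airflow to local plenum pressure can be sketched with a simple orifice-flow approximation. The tile open area and discharge coefficient below are assumed values; a real design needs measurement or CFD analysis of the kind documented by Patankar and Karki:

```python
import math

# Orifice-flow sketch of perforated tile delivery: airflow scales with
# the square root of the local underfloor pressure, so a tile seeing
# half the pressure delivers only ~71% of the air. Tile geometry and
# discharge coefficient are assumed illustrative values.

RHO_AIR = 1.2          # kg/m^3
TILE_AREA = 0.372      # m^2, nominal 2 ft x 2 ft tile
OPEN_FRACTION = 0.25   # assumed 25% open perforated tile
CD = 0.65              # assumed discharge coefficient

def tile_cfm(plenum_pa):
    """Approximate airflow through one perforated tile at `plenum_pa` Pa."""
    velocity = math.sqrt(2 * plenum_pa / RHO_AIR)               # m/s
    return CD * OPEN_FRACTION * TILE_AREA * velocity * 2118.88  # CFM

for pressure in (12.5, 6.25):   # Pa; 12.5 Pa is about 0.05 in. w.g.
    print(f"{pressure:5.2f} Pa -> {tile_cfm(pressure):.0f} CFM")
```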
The perforated tiles are located within the cold aisles, allowing chilled air to be
drawn through the front of the racks (via the electronic equipment) and discharged
at the rear of the racks in the hot aisles.
The warm air in the hot aisles is typically left unchanneled and is returned to the
top inlet of the computer room air-conditioning (CRAC) unit via airflow through the
room. Constricted airflow paths (e.g., due to low ceiling heights increasing the
impact of overhead infrastructure) can reduce the effectiveness of the cooling
system.
The source of the chilled air is typically from CRAC units that are located within
the datacom room (Figure 4-2); this is currently the most common data center cool-
ing method. Figures 4-3 and 4-4 show a variation of the raised floor environment
where the chilled air is provided from air-conditioning units that are located outside
of the room.

Figure 4-2 Raised floor implementation most commonly found in data centers
today using CRAC units.

Figure 4-3 Raised floor implementation using building air from a central plant.

Figure 4-4 Raised floor implementation using two-story configuration with


CRAC units on the lower floor.


OVERHEAD DISTRIBUTION
In an overhead distribution system, chilled air is distributed via ductwork and
is introduced into the room through diffusers. The air is directed into the cold aisles
from above, vertically downward (Figure 4-5). The source of the
chilled air is cooling equipment that can be located either within or outside the data-
com room.
In general, overhead distribution systems operate at a higher static pressure than
underfloor systems and therefore inherently offer an increased ability to balance
airflows and provide uniform air distribution.
The warm air in the hot aisles is typically left unducted and returns to the
cooling units through the room; the potential for short-circuiting of the supply air,
given the airflow patterns present in shallow-ceiling applications, remains a concern.
Figure 4-5 illustrates one method of overhead distribution, a technique that is
commonly found in telecom central office environments. In this example, the over-
head cold supply air is ducted to the cold aisles with the source of the cold air coming
from a centralized cooling plant located outside of the raised floor area. Alternative
schemes could supply the air using localized upflow CRAC units.
Although the arrangement in Figure 4-5 does not require a raised floor for cooling,
a raised floor may still be used for power and/or data/fiber distribution to avoid
congestion between the cabling and the ductwork in the ceiling space.

MANAGING SUPPLY AND RETURN AIRFLOWS


Increasing equipment heat loads mandate that underfloor and overhead air
distribution systems be fully engineered, including an analysis of the impact of
airflow obstructions on both the supply and return airstreams. Within the
engineering community, considerable effort has been expended to develop
techniques that can manage the airflows independently of the facility and the
associated spatial interactions.

Figure 4-5 Overhead cooling distribution commonly found in central office


environments.


Figure 4-6 Raised floor implementation using a dropped ceiling as a hot air
return plenum.

Figure 4-7 Raised floor implementation using baffles to limit hot-aisle/cold-aisle


“mixing.”

Certain techniques aim to physically separate the hot and cold air in the datacom
facility to minimize mixing, as shown in Figures 4-6 and 4-7. Figure 4-6 uses a
dropped ceiling as the hot exhaust air plenum that mirrors the raised floor and is used
to channel the air back to the CRAC units.
Figure 4-7 shows baffles that are placed over the cold aisles. This method
attempts to ensure that the cold supply air is forced through the inlets of the
datacom equipment, and it also prevents hot exhaust air from being drawn back
into the cold aisle at the uppermost equipment in the rack.
Of the two techniques, the dropped ceiling approach is more common. The
concern for both of these techniques is that as airflow requirements increase in the


datacom facility, some of the servers may become “starved” or “choked.” There are
additional concerns with the baffle configuration method and its associated varia-
tions being difficult to implement from a practical standpoint due to fire and safety
code implications.
Figures 4-8 and 4-9 show a variation of the raised floor environment that may
actually have either distribution plenums or ducts on the inlet and/or outlet of the
servers. There are already products on the market that utilize a configuration similar
to this by enclosing and extending the rack depth and having built-in fans to assist
in the movement of air through the enclosed racks.

Figure 4-8 Raised floor implementation using inlet and outlet plenums/ducts
integral to the rack.

Figure 4-9 Raised floor implementation using outlet plenums/ducts integral to


the rack.


These techniques have demonstrated some promise, but there are concerns
about racks with multiple servers, especially from different vendors. As with the
previous techniques, the concern is that as airflow requirements increase in the
datacom facility, some of the servers may become “starved” or “choked.” It is
expected that computer manufacturers will have to assess the impact of these
techniques on their servers and qualify specific configurations for use in this type
of application. In addition, equipment access and fire and safety implications must
be assessed.

LOCAL DISTRIBUTION
Local distribution systems aim to introduce chilled air as close to the cold aisle
as possible. The source of the chilled air is localized cooling equipment that is
mounted on, above, or adjacent to the electronic equipment racks.
Typically, local distribution systems are not intended to be installed as stand-
alone equipment cooling systems but rather as supplemental cooling systems for just
the high density load racks. Because of the proximity of the local cooling unit, the
problems associated with poor airflow distribution and mixing (both supply/chilled
airstreams and return/warm airstreams) are eliminated.
Local distribution systems require that liquid (either water or refrigerant) be
piped to the cooling equipment located near the racks, and this may be of concern
to certain end users. Cooling equipment redundancy measures should also be care-
fully evaluated.
Techniques that use air cooling at or near the rack have also started to emerge
(Stahl and Belady 2001). The fundamental premise is that the closer the evaporator
or chilled heat exchanger is to the heat source, the more effective the cooling of the
datacom facility and the greater the capacity that may be achieved. While this is yet
to be determined, there are some interesting possibilities. Figures 4-10 through 4-13
offer such possibilities.
Figure 4-10 shows a schematic with the evaporator or chilled heat exchanger
mounted to the ceiling, and Figure 4-11 shows it mounted on top of the rack (it could
also be located to the side of the rack). Figures 4-12 and 4-13 show the evaporator
or heat exchanger on the exhaust and inlet side of the rack, respectively.
The preferred technique is to have the exchanger on the exhaust side to limit
condensation exposure, which is the issue in Figure 4-13. In addition, the hot aisle
in Figure 4-13 is not nearly as cool as that in Figure 4-12.
Note that these techniques offer some options for localizing the cooling, but
some flexibility may be lost in moving or swapping equipment. Also, note that for
all of the localized techniques, the use of the CRAC units and raised floor cooling
may still be required to provide general or ambient cooling of the overall room.
These are left to the user to determine.
Yet another variation of this technique is to have a heat exchanger built into the
base of a cabinet. Products utilizing this technique are being introduced in the
market. For some configurations using this technique, the airflow is completely
internal to the enclosure.


Figure 4-10 Local cooling distribution using overhead cooling units mounted to
the ceiling.

Figure 4-11 Local cooling distribution using overhead cooling units mounted to
the rack.


Figure 4-12 Local cooling via integral rack cooling units on the exhaust side of the
rack.

Figure 4-13 Local cooling via integral rack cooling units on the inlet side of the
rack.

AIR COOLING EQUIPMENT


The previous discussions have revolved around distribution of the chilled air
and have not focused on the actual cooling equipment. The chilled air may be gener-
ated by a wide variety of systems, including exterior rooftop units and central station
air-handling systems, but the most popular technique has been to use CRAC units.
CRAC units are available with several types of cooling, including chilled water,
direct expansion air cooled, direct expansion water cooled, and direct expansion
glycol cooled.


The direct expansion units typically have multiple refrigerant compressors with
separate refrigeration circuits, air filters, humidifiers, reheat, and integrated control
systems with remote monitoring panels and interfaces. These units may also be
equipped with dry coolers and propylene glycol precooling coils to permit water-
side economizer operation where weather conditions make this strategy economical.
CRAC units utilizing chilled water for cooling do not contain refrigeration
equipment and generally require less servicing, can be more efficient, provide
smaller room temperature variations, and more readily support heat recovery strat-
egies than direct-expansion equipment.
Air-handling and refrigeration equipment may be located either inside or
outside datacom equipment rooms.

RELIABILITY
More often than not, the reliability associated with air systems has involved
utilizing a redundancy strategy such as N+1, N+2, etc., resulting in additional CRAC
units being located in the electronic equipment room. However, reliability or
availability is more than providing redundant CRAC units, components, etc. It is
about delivering a total solution, including the verification of the performance of the
system in meeting the loads.
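The unit counts implied by such a redundancy strategy can be sketched as follows. The room load and per-unit sensible capacity are assumed example values:

```python
import math

# CRAC unit counts under an N+spares redundancy strategy: N is the
# number of units needed to meet the sensible load, and the strategy
# adds spare units on top. Load and unit capacity are example values.

def crac_units(load_kw, unit_kw, spares=1):
    """Units to install for an N+`spares` strategy."""
    n = math.ceil(load_kw / unit_kw)
    return n + spares

# Example: 600 kW sensible load, 80 kW sensible capacity per unit.
print("N+1:", crac_units(600, 80, spares=1))   # ceil(7.5) = 8, plus 1 spare = 9
print("N+2:", crac_units(600, 80, spares=2))
```

As the text notes, this count is only the starting point; whether the redundant capacity actually reaches the load under a failure scenario is a separate airflow question.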
ASHRAE’s Thermal Guidelines for Data Processing Environments (ASHRAE TC
9.9 2004) provides direction on measurement and monitoring points that can be used
to obtain the data required to verify whether the performance of the system is as
designed; these measurements can also be carried out during the testing phases to
determine the impact of “what if” scenarios such as a CRAC unit failure.
However, during the design phase, accurately predicting the performance of the
system is not as simple and can require significant computational fluid dynamics
(CFD) modeling to discover the weak points and determine how the air system will
perform under various failure scenarios.
Sections in the next chapter expand upon the reliability issue as it relates to
chilled water and other liquid aspects of reliability.


5
Liquid Cooling of
Computer Equipment
INTRODUCTION
As discussed in the previous chapter, the cooling systems presented are cate-
gorized into air-cooled and liquid-cooled systems. As a recap, for the purposes of
this book, the definitions of these categories are:

• Air Cooling – Conditioned air is supplied to the inlets of the rack/cabinet for
convection cooling of the heat rejected by the components of the electronic
equipment within the rack. It is understood that within the rack, the transport
of heat from the actual source component (e.g., CPU) within the rack itself
can be either liquid or air based, but the heat rejection media from the rack to
the terminal cooling device outside of the rack is air.
• Liquid Cooling – Conditioned liquid (e.g., water, etc., usually above dew
point) is channeled to the actual heat-producing electronic equipment compo-
nents and used to transport heat from that component where it is rejected via a
heat exchanger (air to liquid or liquid to liquid) or extended to the cooling ter-
minal device outside the rack.

The scope of this chapter is limited to the heat rejection associated with rack/
cabinet cooling and does not include the intricacies of component- or board-level
cooling. There are various liquid cooling methods (e.g., heat pipes, thermosyphons,
etc.) used to transport heat from the source component (e.g., CPU) to a location
elsewhere, either within the packaging of the electronic equipment or at another
location within the rack/cabinet itself.
For the purposes of this chapter, the liquid used to transport heat from the
electronic equipment component to another location within the packaging or the
rack is defined as the “transport liquid.” The liquid cooling methods considered all
require a means of rejecting heat from the transport liquid to the larger building
cooling system, and the methods for rejecting that heat fall under the three basic
strategies discussed in this chapter:


• Heat rejection by AIR cooling the heat transport liquid from the electronic
equipment
• Heat rejection by LIQUID cooling the heat transport liquid from the elec-
tronic equipment
• Heat rejection by extending the heat transport liquid from the electronic
equipment to a location remote from the rack/cabinet

LIQUID COOLING OVERVIEW


As heat load densities continue to rise, so does the challenge of cooling with air,
due to the limits of heat sink and air-moving device performance and rack-level
acoustic limitations. Liquids, primarily because of their higher density and heat
capacity, are much more effective than air at removing heat, making liquid cooling
a viable choice as the concentration of load continues to rise (Beaty 2004).
Within liquid cooling systems, piping connects the liquid cooling media
directly to the electronic equipment components from a cooling section of the equip-
ment rack or from a remote source. The two major types of liquid cooling media
available for datacom facilities are water (or a glycol mixture) and refrigerant. The
attributes described below provide some insight into the significant thermal perfor-
mance advantages that are possible by using water rather than air as the source of
cooling:

• The heat-carrying capacity of water, per unit volume, is approximately 3,500
times greater than that of air.
• The heat transfer capability of water is 2 to 3 orders of magnitude greater than
that of air.
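The first figure can be checked from tabulated fluid properties; a quick sketch using approximate room-condition values:

```python
# Comparing the volumetric heat capacity (density x specific heat) of
# water and air at roughly room conditions. Property values are
# approximate, which is why the result is quoted as ~3,500x.

RHO_WATER, CP_WATER = 1000.0, 4190.0   # kg/m^3, J/(kg*K)
RHO_AIR, CP_AIR = 1.2, 1005.0          # kg/m^3, J/(kg*K)

ratio = (RHO_WATER * CP_WATER) / (RHO_AIR * CP_AIR)
print(f"Water carries ~{ratio:.0f}x more heat per unit volume than air")
```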

Although liquid cooling systems were once prevalent as a means of cooling
mainframe computer systems, they fell out of favor because later semiconductor
technologies did not initially require them. Those technologies are now approaching
limits that may again require some form of liquid cooling.

LIQUID-COOLED COMPUTER EQUIPMENT


Most computers are cooled with forced air. However, with increased micro-
processor power densities and rack heat loads, some equipment may require liquid
cooling to maintain the equipment within the environmental specifications required
by the manufacturer.
The liquids considered for cooling electronic equipment are water, FluorinertTM,
or refrigerant. Manufacturers would normally supply the cooling system as part of
the computer equipment, and the liquid loop would be internal to the equipment.
However, the transfer of heat from the liquid-cooled computer system to the
environment housing the racks takes place through a liquid-to-water (or water/
glycol) heat exchanger.


Figures 5-2 and 5-3 depict the two possible liquid-cooled systems. Figure 5-1
shows a liquid loop internal to the rack where the exchange of heat with the room
occurs with a liquid to air heat exchanger. In this case the rack appears as an air-
cooled rack to the client and is classified as an air-cooled system. It is included here
to show the evolution to liquid-cooled systems. Figure 5-2 depicts a similar liquid
loop internal to the rack used to cool the electronics within the rack, but in this case
the heat exchange is with a liquid to chilled water heat exchanger. Typically the
liquid circulating within the rack is maintained above dew point to eliminate any
condensation concerns. Figure 5-3 depicts a design very similar to Figure 5-2 but
where some of the primary liquid loop components are housed outside the rack to
permit more space within the rack for electronic components.

Liquid Coolants for Computer Equipment


The liquid loops for cooling the electronics shown in Figures 5-1, 5-2, and 5-3
are typically of three types:

• FluorinertsTM (fluorocarbon liquids)
• Water (or a water/glycol mix, referred to simply as “water” throughout this
section for clarity)
• Refrigerants (pumped and vapor compression)

Observe that the name for each type of cooling method actually refers to the
primary coolant that is used to cool the computer equipment. Each option requires

Figure 5-1 Internal liquid cooling loop restricted within rack extent.


Figure 5-2 Internal liquid cooling loop with rack extents and liquid cooling loop
external to racks.

Figure 5-3 Internal liquid cooling loop extended to liquid-cooled external


modular cooling unit.


a path (pipes or hoses) for the coolant to flow and work (pump or compressor) to
force the coolant through the system. Each option includes some combination of
valves, sensors, heat exchanger, and control logic within the cooling circuit. Some
of the factors that must be considered when choosing the cooling methodology are:

• Choice of logic and/or memory packaging technology
• Operating temperature of circuits
• Packaging and mechanical design objectives
• Serviceability parameters
• Compatibility with facility configuration (physical attributes and plant infra-
structure)
• Component heat flux (W/cm2)

Once the priorities of the system design have been established, the “best” cool-
ing option is selected. Some of the relative merits/trade-offs for the three primary
methodologies follow.

FLUORINERTTM
FluorinertsTM exhibit properties that make them an attractive heat transfer
medium for data processing applications. Foremost is the ability to contact the
electronics directly (eliminating some of the intermediary heat exchange steps), as
well as the ability to transfer high heat loads (via an evaporative cooling
methodology). This technology has containment concerns, metallurgical
compatibility exposures, and tight operating tolerances. FluorinertTM liquids are not
to be confused with chlorofluorocarbons (CFCs), which are subject to
environmental concerns.

WATER
Water is generally circulated throughout the electronic system between 15°C
and 25°C. The new ASHRAE recommendations (ASHRAE TC 9.9 2004) state that
the maximum dew point for a class 1 environment is 18°C. With this requirement the
logical design point would be to provide water to the electronics above 18°C to elim-
inate any condensation concerns.
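That design point can be checked against the actual room condition with a standard dew-point approximation. In the sketch below, the room condition, coolant temperature, and the Magnus constants are assumed illustrative values; the 18°C class 1 maximum dew point is the figure cited in the text:

```python
import math

# Estimating room dew point with the Magnus approximation, then
# checking that a proposed coolant supply temperature stays above it.
# Room condition and coolant temperature are assumed example values.

A, B = 17.62, 243.12  # Magnus coefficients for water over liquid

def dew_point_c(dry_bulb_c, rh_percent):
    """Approximate dew point (deg C) from dry-bulb temperature and RH."""
    gamma = math.log(rh_percent / 100.0) + A * dry_bulb_c / (B + dry_bulb_c)
    return B * gamma / (A - gamma)

room_dp = dew_point_c(22.0, 50.0)   # example room condition
coolant_supply = 20.0               # example coolant temperature, deg C
status = "above" if coolant_supply > room_dp else "below"
print(f"Room dew point ~{room_dp:.1f} C; coolant at {coolant_supply} C is {status} it")
```

Designing to the 18°C class 1 maximum rather than the measured room dew point simply bounds the worst case the environment is allowed to reach.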
The heat absorbed by this water is rejected either through a water-to-air heat
exchanger (Figure 5-1) or through a water-to-water heat exchanger (Figures 5-2 and
5-3), where the central plant supplies chilled water to remove the heat. For high
density heat loads, liquid heat transfer is the optimum design point for both product
design and client requirements. There are several reasons for choosing a water
cooling strategy:

• Less conversion losses (fewer steps between the heat load and the ultimate
heat sink). The heat transfer path would be from the electronic circuit to com-
ponent interface, to water, to central plant chilled water.

• Greater heat transfer capacity of water compared to air (water has a volumetric heat capacity several orders of magnitude higher than that of air)
• Minimal acoustical concerns
• Lower operating costs
  • Cost of installation: rejecting heat to water is similar in cost to rejecting heat to air
  • Cost of operation (electrical cost): water cooling is less costly than air cooling
• More compact
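The heat capacity advantage in the bullets above can be made concrete with a short calculation, using standard property values near room temperature (the 20 kW load and 10 K temperature rise are assumed example figures):

```python
# Coolant flow needed to remove the same heat load with air versus water.

q_w = 20_000.0   # heat load, W (assumed 20 kW example)
delta_t = 10.0   # allowed coolant temperature rise, K

# Volumetric heat capacity rho * cp, J/(m^3 * K)
air_vcp = 1.2 * 1006.0      # ~1.2 kg/m^3 x ~1006 J/(kg*K)
water_vcp = 998.0 * 4180.0  # ~998 kg/m^3 x ~4180 J/(kg*K)

flow_air = q_w / (air_vcp * delta_t)      # m^3/s of air
flow_water = q_w / (water_vcp * delta_t)  # m^3/s of water

print(f"air:   {flow_air:.2f} m^3/s")         # ~1.66 m^3/s (~3500 cfm)
print(f"water: {flow_water * 1000:.2f} L/s")  # ~0.48 L/s
print(f"ratio: {flow_air / flow_water:.0f}")  # ~3500x more air volume
```

The flow ratio equals the ratio of volumetric heat capacities, which is why water distribution can be so much more compact than the equivalent air path.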

REFRIGERANT
Refrigerants can be used in either a pumped loop technique or a vapor compression cycle. The advantages of using refrigerants are similar to those of Fluorinerts™ in that they can contact the electronics directly without causing short circuits. This technology has containment concerns, metallurgical compatibility exposures, and tight operating tolerances.
In most cases the liquid lines must use copper piping instead of hose to limit the loss of refrigerant over time. In the pumped loop methodology, the refrigerant is at a low pressure such that, when passing through an evaporator, the liquid evaporates into a two-phase flow and then passes on to the condenser, where the cycle begins again. If temperatures lower than ambient are desired, a vapor compression cycle may be employed. Similar concerns exist with this system as with the pumped loop; again, to limit refrigerant leakage, no hoses are employed.
Clients view a system employing refrigerant as using a “dry” liquid: any leak that does occur neither damages the electronics nor causes them to fail in operation. Some clients consider this a must for their data centers and prefer it over other liquid cooling technologies such as water.

DATACOM FACILITY CHILLED WATER SYSTEM


Chilled water may be provided by either a small chiller matched in capacity to
the computer equipment or a branch of the chilled water system serving the air-
handling units. Design and installation of chilled water or refrigerant piping and
selection of the operating temperatures should minimize the potential for leaks and
condensation, especially in the computer room, while satisfying the requirements of
the systems served.
Chilled water systems for liquid-cooled computer equipment must be designed
to:
1. provide water at a temperature within the manufacturer’s tolerances and
2. be capable of operating year-round, 24 hours per day.
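Requirement 1 ties the loop flow rate to the manufacturer's heat load and allowable temperature rise through the standard sensible-heat relation q = ṁ·cp·ΔT. A minimal sizing sketch (the 100 kW load and 6 K rise are assumed example values):

```python
# Chilled water flow required to absorb a given equipment heat load.

def required_flow_lps(load_kw, delta_t_k, rho=998.0, cp=4180.0):
    """Water flow in L/s for a load of load_kw with a delta_t_k rise
    (rho in kg/m^3, cp in J/(kg*K); standard values for water)."""
    m_dot = load_kw * 1000.0 / (cp * delta_t_k)  # mass flow, kg/s
    return m_dot / rho * 1000.0                  # volume flow, L/s

# A 100 kW liquid-cooled installation with a 6 K temperature rise:
print(f"{required_flow_lps(100.0, 6.0):.2f} L/s")  # ~4.0 L/s (~63 gpm)
```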


Chilled water distribution systems should be designed to the same standards of
quality, reliability, and flexibility as other computer room support systems. Where
growth is likely, the chilled water system should be designed for expansion or addi-
tion of new equipment without extensive shutdown.
Figure 5-4 illustrates a looped chilled water system with sectional valves and
multiple valved branch connections. The branches could serve air handlers or water-
cooled computer equipment. The valves permit modifications or repairs without
complete shutdown.

Figure 5-4 Typical example of chilled water loop and valve architecture.


Chilled water piping must be pressure tested, fully insulated, and protected with
an effective vapor retarder. The test pressure should be applied in increments to all
sections of pipe in the computer area. Drip pans piped to an effective drain should
be placed below any valves or other components in the computer room that cannot
be satisfactorily insulated. A good-quality strainer should be installed in the inlet to
local cooling equipment to prevent control valve and heat exchanger passages from
clogging.
If cross-connections with other systems are made, possible effects on the
computer room system of the introduction of dirt, scale, or other impurities must be
addressed.
System reliability is so vital that the potential cost of system failure may justify
redundant systems, capacity, and/or components. The designer should identify
potential points of failure that could cause the system to interrupt critical data
processing applications and should provide redundant or backup systems.
It may be desirable to cross-connect chilled water or refrigeration equipment for
backup, as suggested for air-handling equipment. Redundant refrigeration may be
required, the extent of the redundancy depending on the importance of the computer
installation. In many cases, standby power for the computer room air-conditioning
system is justified.

RELIABILITY
As discussed in the previous section, a strategy for configuring the piping
system and components must be planned to achieve the desired level of reliability
or availability. This applies not only to chilled water systems but to any liquid
cooling system.
Cooling systems are as critical as electrical systems and therefore must be
planned to continue performing during a power outage. Especially in high-density
situations, equipment temperatures can exceed their operational limits very quickly
during the time that the generators are started, the power is transferred, and the
cooling system is restarted.
Achieving continuous operation during an outage can require certain cooling
equipment to be supplied from an uninterruptible power supply (UPS). Another
measure is standby liquid storage; in the case of chilled water, thermal storage
tanks can provide sufficient cooling until the cooling system is restored to full
operation.
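The thermal storage measure can be sized with the same sensible-heat relation used for loop flow. A hedged sketch (the load, ride-through time, and usable temperature difference are assumed values, not from the text):

```python
# Size a chilled water thermal-storage tank to ride through a cooling outage.

def storage_volume_m3(load_kw, ride_through_min, usable_delta_t_k,
                      rho=998.0, cp=4180.0):
    """Tank volume (m^3) whose stored cooling covers load_kw for
    ride_through_min minutes at the given usable temperature difference."""
    energy_j = load_kw * 1000.0 * ride_through_min * 60.0
    return energy_j / (rho * cp * usable_delta_t_k)

# 500 kW load, 15 minutes to restart the chillers, 8 K usable difference:
print(f"{storage_volume_m3(500.0, 15.0, 8.0):.1f} m^3")  # ~13.5 m^3
```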
Where cooling towers or other configurations that require makeup water are
used, sufficient water storage on the premises should be considered. This provision
protects against a loss of water service to the site. Typical storage strategies for
makeup water are similar to those for generator fuel (e.g., 24, 48, or 72 hours of
reserve or more) and can result in the need for very large multiple storage tanks,
depending on the scale of the installation, so the impact on the site is significant
and may be problematic if not planned for.
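The scale of the makeup water storage can be estimated from the heat load, since evaporation carries away roughly the rejected heat divided by water's latent heat of vaporization. A sketch (the 1.2 blowdown/drift allowance and the 1 MW example are assumptions):

```python
# Rough makeup-water storage sizing for an evaporative cooling tower.

H_FG = 2.45e6  # J/kg, latent heat of vaporization of water near 30 C

def makeup_storage_m3(load_kw, reserve_hours, blowdown_factor=1.2):
    """Stored water volume (m^3) covering evaporation plus an assumed
    blowdown/drift allowance for reserve_hours of operation."""
    evap_kg_per_s = load_kw * 1000.0 / H_FG
    total_kg = evap_kg_per_s * blowdown_factor * reserve_hours * 3600.0
    return total_kg / 998.0  # kg of water -> m^3

# 1 MW of rejected heat with a 48-hour reserve:
print(f"{makeup_storage_m3(1000.0, 48.0):.0f} m^3")  # ~85 m^3
```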
There is also often a concern over the presence of liquid near electronic equip-
ment. Liquid cooling has been used effectively for many years, for example, in the
mainframe environment. Just as with any other design condition or parameter, it
requires effective planning, but it can be accomplished and the desired level of
reliability achieved.


References/Bibliography
REFERENCES
1. Mitchell-Jackson, J.D. 2001. Energy needs in an internet economy: A closer
look at data centers. Thesis, University of California, Berkeley, July 10.
2. ASHRAE TC 9.9. 2004. Thermal Guidelines for Data Processing Environ-
ments. Atlanta: American Society of Heating, Refrigerating and Air-Condi-
tioning Engineers, Inc.
3. The Uptime Institute. 2000. Heat Density Trends in Data Processing, Computer
Systems, and Telecommunications Equipment. White Paper.
4. Patankar, S.V., and K.C. Karki. 2004. Distribution of cooling airflow in a raised-
floor data center. ASHRAE Transactions 110 (2): 629-634.
5. Stahl, L., and C. Belady. 2001. Designing an alternative to conventional room
cooling. International Telecommunications and Energy Conference
(INTELEC), October 2001.
6. Beaty, D.L. 2004. Liquid cooling—Friend or foe? ASHRAE Transactions 110
(2): 643-652.

BIBLIOGRAPHY
1. Azar, K. 2002. Advanced cooling concepts and their challenges. Therminic—
2002. Advanced Thermal Solutions, Inc., Norwood, MA. <www.qats.com>.
2. Belady, C. 2001. Cooling and power considerations for semiconductors into the
next century (invited paper). Proceedings of the International Symposium on
Low Power Electronics and Design, August 2001.
3. Chu, R.C. 2003. The challenges of electronic cooling: Past, current and future.
Proceedings of IMECE: International Mechanical Engineering Exposition
and Congress, November 15-21, 2003, Washington D.C.
4. Garner, S.D. 1996. Heat pipes for electronics cooling applications. Electronics
Cooling Magazine.


5. Guggari, S., D. Agonafer, C. Belady, and L. Stahl. 2003. A hybrid methodology
for the optimization of data center room layout. InterPACK ’03, July 6-11,
2003.
6. ITRS. 2003. International Technology Roadmap for Semiconductors, 2003 edi-
tion.
7. Kang, S., R. Schmidt, K.M. Kelkar, A. Radmehr, and S.V. Patankar. 2001. A
methodology for the design of perforated tiles in raised floor data centers
using computational flow analysis. IEEE Transactions on Components and
Packaging Technologies, 2001.
8. Karki, K.C., A. Radmehr, and S.V. Patankar. 2003. Use of computational fluid
dynamics for calculating flow rates through perforated tiles in raised-floor
data centers. Int. J. of HVAC&R Research, April.
9. Mitchell, R.L. 2003. Moving toward meltdown. Computer World Magazine.
10. Nakao, M., H. Hayama, and M. Nishioka. 1991. Which cooling air supply sys-
tem is better for a high heat density room: Underfloor or overhead? Thirteenth
International Telecommunications Energy Conference (INTELEC '91),
November 1991.
11. Noh, H.-K., K.S. Song, and S.K. Chun. 1998. The cooling characteristics on
the air supply and return flow systems in the telecommunication cabinet
room. Twentieth International Telecommunications Energy Conference
(INTELEC '98), October 1998.
12. Patel, C., C. Bash, C. Belady, L. Stahl, and D. Sullivan. 2001. Computational
fluid dynamics modeling of high density data centers to assure systems inlet
air specifications. Proceedings of InterPACK 2001 Conference, ASME, July
2001.
13. Patel, C., R. Sharma, C. Bash, and A. Beitelmal. 2002. Thermal considerations
in cooling large scale compute density data centers. ITherm Conference, June
2002.
14. Quivey, B., and A.M. Bailey. 1999. Cooling of large computer rooms—Design
and construction of ASCI 10 TeraOps. InterPack 1999.
15. Schmidt, R. 2004. Thermal profile of a high density data center—Methodol-
ogy to thermally characterize a data center. ASHRAE Transactions 110 (2):
635-642.
16. Schmidt, R. 2001. Water cooling of electronics. Cooling Electronics in the
Next Decade, Sponsored by Cooling Zone, August 2001.
17. Schmidt, R. 2001. Effect of data center characteristics on data processing
equipment inlet temperatures. Advances in Electronic Packaging 2001 (Pro-
ceedings of IPACK ‘01, The Pacific Rim/ASME International Electronic
Packaging Technical Conference and Exhibition), vol. 2, Paper IPACK2001-
15870, July.


18. Schmidt, R., and E. Cruz. 2002. Raised floor computer data center: Effect on
rack inlet temperatures of chilled air exiting both the hot and cold aisles. ITh-
erm conference, June 2002.
19. Schmidt, R., and E. Cruz. 2003. Raised floor computer data center: Effect on
rack inlet temperatures when rack flow rates are reduced. Interpack Confer-
ence, July 2003, to be published.
20. Schmidt, R., and E. Cruz. 2003. Raised floor computer data center: Effect on
rack inlet temperatures when adjacent racks are removed. Interpack Confer-
ence, July 2003, to be published.
21. Schmidt, R., and E. Cruz. 2002. Raised floor computer data center: Effect on
rack inlet temperatures when high powered racks are situated amongst lower
powered racks. IMECE conference, November 2002.
22. Schmidt, R., and E. Cruz. 2003. Clusters of high powered racks within a raised
floor computer data center: Effect of perforated tile flow distribution on rack
inlet air temperatures. IMECE Conference, November 2003, to be published.
23. Schmidt, R., K.C. Karki, K.M. Kelkar, A. Radmehr, and S.V. Patankar. 2001.
Measurements and predictions of the flow distribution through perforated
tiles in raised-floor data centers. Paper No. IPACK2001-15728, InterPack’01,
July 2001.
24. Schmidt, R., and B. Notohardjono. 2002. High-end server low-temperature
cooling. IBM J. Res. Develop. 46(6), November.
25. Ståhl, L. 2004. Cooling of high density rooms: Today and in the future.
ASHRAE Transactions 110 (1): 574-579.
26. Telcordia. 2001. GR-3028-CORE, Thermal Management in Telecommunica-
tions Central Offices. Telcordia Technologies Generic Requirements.
27. Vukovic, A. 2004. Communication network power efficiency–Assessment,
limitations and directions. Electronics Cooling Magazine, August.
28. Yamamoto, M., and T. Abe. 1994. The new energy-saving way achieved by
changing computer culture (Saving energy by changing the computer room
environment), IEEE Transactions on Power Systems, vol. 9, August.


Introduction to Appendices
One of the primary reasons behind the creation of the ASHRAE TC 9.9 technical
committee that produced this book was to provide better alignment between equip-
ment manufacturers and facility operations personnel, ensuring proper and fault-
tolerant operation within mission-critical environments in response to the steady
increase in the power density of electronic equipment.
The content of the appendices is aimed at an audience that could either be from
industry or be stakeholders (e.g., facility owners, developers, end-users/clients)
with varying levels of technical knowledge about the two primary industries.
These appendices fall into two primary categories:
1. Some are included to supplement the content of the chapters by providing addi-
tional related material, much like a traditional appendix.
2. Some are included to provide a central location for obtaining high-level information
that spans both the facility cooling and IT industries. This information may otherwise
be difficult to obtain without consulting multiple sources dedicated to a particular
industry or a facet of that industry, and even then the source may be too detailed or
require a greater level of knowledge.
Therefore, some of the content of the appendices may be only indirectly related
to the content of the chapters of the book but may be appreciated by the audience for
general background information. An overview of the individual appendices is
provided below.
Appendix A—Collection of Terms. This contains a standardized list of industry-
related terms complete with high-level definitions. It is not necessarily a collection
of the terms used in the book, although a number of them are defined; rather, it is
intended as an easy reference.
Appendix B—Additional Trend Chart Information / Data. This contains the
trend charts in SI units as well as versions of the trend charts without logarithmic
scales for power density, providing a clearer picture of the magnitude of the escalation
of the loads. Also included are a tabular version of the trend values and, finally,
graphs indicating the trends using the kW per rack and W/ft² metrics.
Appendix C—Electronics, Semiconductors, Microprocessors, ITRS. This
provides some background information on the history of the semiconductor industry
and also about the ITRS roadmap information specifically for semiconductors.
Appendix D—A Micro-Macro Overview of Datacom Equipment Packaging.
This provides a high-level graphical overview of the terminology of the packaging
and components associated with the datacom industry, with the approach beginning
at the small component level and building up to an entire facility. It is intended to
provide simple definitions in order to overcome the multiple meanings that the data
processing and telecommunications industries give to the same term.


Appendix A
Collection of Terms from
Facilities and IT Industries
Acoustics: Generally, a measure of the noise level in an environment or from a sound
source. For a point in an environment, the quantity is sound pressure level in decibels
(dB). For a sound source, the quantity is sound power level in either decibels (dB)
or bels (B). Either quantity may be stated in terms of individual frequency bands or
as an overall A-weighted value. Sound output typically is quantified by sound
pressure (dBA) or sound power (dB). The noise in densely populated data and
communications equipment centers may cause annoyance, affect performance,
interfere with communications, or even risk exceeding OSHA noise limits (and thus
potentially causing hearing damage); reference should be made to the appropriate
OSHA regulations and guidelines (OSHA 1996). European occupational noise
limits are more stringent than OSHA’s and are mandated in EC Directive 2003/10/EC
(European Council 2003).
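Because the levels above are logarithmic, multiple sources combine by summing their powers, not their decibel values. A short sketch of the standard decibel addition (the 75 dB example level is assumed):

```python
import math

def combine_db(levels):
    """Total level in dB from multiple incoherent sources at the given levels."""
    return 10.0 * math.log10(sum(10.0 ** (lvl / 10.0) for lvl in levels))

# Ten identical racks at 75 dB each add 10*log10(10) = 10 dB:
print(f"{combine_db([75.0] * 10):.1f} dB")  # 85.0 dB
```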

Air:
• Conditioned Air*: Air treated to control its temperature, relative humidity,
purity, pressure, and movement.
• Supply Air*: Air entering a space from an air-conditioning, heating, or
ventilating apparatus.
• Return Air: Air leaving a space and going to an air-conditioning, heating, or
ventilating apparatus.

Air and Liquid Cooling:
• Cooling: removal of heat.
• Air cooling: direct removal of heat at its source using air.
• Liquid cooling: direct removal of heat at its source using a liquid (usually
water, water / glycol mixture, fluroinert,TM or refrigerant).
• Air-cooled rack or cabinet: system conditioned by removal of heat using air.

* Asterisk denotes definitions from ASHRAE Terminology of Heating, Ventilation, Air Conditioning, & Refrigeration.


• Liquid-cooled rack or cabinet: system conditioned by removal of heat using a liquid.
• Air-cooled equipment: equipment conditioned by removal of heat using air.
• Liquid-cooled equipment: equipment conditioned by removal of heat using
a liquid.
• Air-cooled server: server conditioned by removal of heat using air.
• Liquid-cooled server: server conditioned by removal of heat using a liquid.
• Air-cooled blade: blade conditioned by removal of heat using air.
• Liquid-cooled blade: blade conditioned by removal of heat using a liquid.
• Air-cooled board: circuit board conditioned by removal of heat using air.
• Liquid-cooled board: circuit board conditioned by removal of heat using a
liquid.
• Air-cooled chip: chip conditioned by removal of heat from the chip using air.
• Liquid-cooled chip: chip conditioned by removal of heat using a liquid.

Air-Cooled Data Center: Facility cooled by forced air transmitted by raised floor,
overhead ducting, or some other method.

Air-Cooled System: Conditioned air is supplied to the inlets of the rack/cabinet for
convective cooling of the heat rejected by the components of the electronic equip-
ment within the rack. It is understood that within the rack, the transport of heat from
the actual source component (e.g., CPU) within the rack itself can be either liquid
or air based, but the heat rejection media from the rack to the terminal cooling device
outside of the rack is air.

Air Inlet Temperature: The temperature measured at the inlet at which air is drawn
into a piece of equipment for the purpose of conditioning its components.

Air Outlet Temperature: The temperature measured at the outlet at which air is
discharged from a piece of equipment.

ANSI: American National Standards Institute.

Backplane: A printed circuit board with connectors into which other cards are
plugged. A backplane usually does not have many active components on it, in
contrast to a system board.

Bandwidth: Data traffic through a device, usually measured in bits per second.

Bay:
• A frame containing electronic equipment.
• A space in a rack into which a piece of electronic equipment of a certain size
can be physically mounted and connected to power and other input/output
devices.


BIOS: Basic Input / Output System. The BIOS gives the computer a built-in set of
software instructions to run additional system software during computer bootup.

Bipolar Semiconductor Technology: This technology was popular for digital
applications until the CMOS semiconductor technology was developed. CMOS
draws considerably less power in standby mode and so it replaced many of the bipo-
lar applications around the early 1990s.

Blade Server: A modular electronic circuit board, containing one, two, or more
microprocessors and memory, that is intended for a single, dedicated application
(such as serving Web pages) and that can be easily inserted into a space-saving rack
with many similar servers. One product offering, for example, makes it possible to
install up to 280 blade server modules vertically in a single floor-standing cabinet.
Blade servers, which share a common high-speed bus, are designed to create less
heat and thus save energy costs as well as space.

Blower: An air-moving device (also see Fan).

Btu: The abbreviation for British thermal unit: the amount of heat required to raise
the temperature of one pound of water one degree Fahrenheit; a common measure
of the quantity of heat.

Cabinet: Frame for housing electronic equipment that is enclosed by doors and may
include vents for inlet and exhaust airflows and, in some cases, exhaust fans. Cabi-
nets generally house electronic equipment requiring additional security.

Central Office (CO): A CO is a building within a telephone network that houses
equipment for processing (receiving, transmitting, redirecting, etc.) voice signals
and digital data, connecting a large number of lower-speed lines to a smaller number
of higher-speed lines.

CFD: Computational fluid dynamics. A computational technology that enables the
dynamics of fluid flow and heat transfer to be studied numerically.

CFM: The abbreviation for cubic feet per minute, commonly used to measure the
rate of airflow in systems that move air.
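CFM feeds directly into the common I-P rule of thumb for sensible heat carried by standard air, q (Btu/h) ≈ 1.08 × CFM × ΔT (°F). A minimal sketch (the 2,000 cfm example figure is assumed):

```python
# Sensible heat removed by an airflow, standard-air assumption (I-P units).
# The 1.08 factor is 60 min/h x 0.075 lb/ft^3 x 0.24 Btu/(lb*F).

def sensible_btuh(cfm, delta_t_f):
    """Heat in Btu/h carried by cfm of standard air with a delta_t_f rise."""
    return 1.08 * cfm * delta_t_f

# 2,000 cfm of air warmed 20 F:
print(f"{sensible_btuh(2000.0, 20.0):.0f} Btu/h")  # 43200 Btu/h (~12.7 kW)
```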

Chassis: The physical framework of the computer system that houses all electronic
components, their interconnections, internal cooling hardware, and power supplies.

Chilled Water System: A type of air-conditioning system that has no refrigerant in
the unit itself. The refrigerant is contained in a chiller, which is located remotely. The
chiller cools water, which is piped to the air conditioner to cool the space.


Client: A server system that can operate independently but has some interdepen-
dence with another server system.

Cluster: Two or more interconnected servers that can access a common storage
pool. Clustering prevents the failure of a single file server from denying access to
data and adds computing power to the network for large numbers of users.

CMOS Electronic Technology: This technology draws considerably less power
than bipolar semiconductor technology in standby mode and so it replaced many of
the digital bipolar applications around the early 1990s.

Cold Plate: Cold plates are typically aluminum or copper metal plates mounted to
electronic components. Cold plates can have various liquids circulating within their
channels.

Compute Server: Servers dedicated for computation or processing that are typi-
cally required to have greater processing power (and, hence, dissipate more heat)
than servers dedicated solely for storage (also see Server).

Compute-Intensive: Term that applies to any computer application that demands a
lot of computation, such as meteorology programs and other scientific applications.
A similar but distinct term, computer-intensive, refers to applications that require a
lot of computers, such as grid computing. The two types of applications are not
necessarily mutually exclusive; some applications are both compute- and computer-
intensive.

Computer System Availability: Probability that a computer system will be operable
at a future time (takes into account the effects of failure and repair/maintenance
of the system).

Computer System Reliability: Probability that a computer system will be operable
throughout its mission duration (only takes into account the effects of failure of the
system).

Communication Equipment: Equipment used for information transfer. The infor-
mation can be in the form of digital data, for data communications, or analog signals,
for traditional wireline voice communication.
• Core Network or Equipment: A core network is a central network into
which other networks feed. Traditionally, the core network has been the cir-
cuit-oriented telephone system. More recently, alternative optical networks
bypass the traditional core and implement packet-oriented technologies. Sig-
nificant to core networks is “the edge,” where networks and users exist. The
edge may perform intelligent functions that are not performed inside the core
network.


• Edge Equipment or Devices: In general, edge devices provide access to
faster, more efficient backbone and core networks. The trend is to make the
edge smart and the core “dumb and fast.” Edge devices may translate between
one type of network protocol and another.

Condenser: Heat exchanger in which vapor is liquefied (state change) by the rejec-
tion of heat as a part of the refrigeration cycle.

Conditioned Air*: Air treated to control its temperature, relative humidity, purity,
pressure, and movement.

Cooling Tower: Heat-transfer device, often tower-like, in which atmospheric air
cools warm water, generally by direct contact (heat transfer and evaporation).

Core Network or Equipment: See Communication Equipment.

CPU: Central Processing Unit, also called a processor. In a computer the CPU is the
processor on an IC chip that serves as the heart of the computer, containing a control
unit, the arithmetic and logic unit (ALU), and some form of memory. It interprets and
carries out instructions, performs numeric computations, and controls the external
memory and peripherals connected to it.

CRAC (Computer Room Air Conditioning): A modular packaged environmental
control unit designed specifically to maintain the ambient air temperature and/or
humidity of spaces that typically contain datacom equipment. These products can
typically perform all (or a subset) of the following functions: cool, reheat, humidify,
dehumidify. They may have multiple steps for some of these functions. CRAC units
should be specifically designed for data and communications equipment room appli-
cations and meet the requirements of ANSI/ASHRAE Standard 127-2001, Method of
Testing for Rating Computer and Data Processing Room Unitary Air-Conditioners.

Data Center: A building or portion of a building whose primary function is to house
a computer room and its support areas. Data centers typically contain high-end serv-
ers and storage products with mission-critical functions.

Data Center Availability: Probability that a data center will be operable at a future time (takes into account the effects of failure and repair/maintenance of the data center).

Data Center Reliability: Probability that a data center system will be operable
throughout its mission duration (only takes into account the effects of failure of the
data center).

62⏐ Appendix A—Collection of Terms from Facilities and IT Industries

Daughter Card: Also called daughter board, a printed circuit board that plugs into
another circuit board to provide extended feature(s). A daughter card accesses its
“parent” card’s circuitry directly through the interconnection between the boards.
• A mezzanine card is a kind of daughter card that is installed such that it lies in
the same plane but on a second level above its “parent.”

Dry-Bulb Temperature (DB): Temperature of air indicated by an ordinary thermometer.

DDR Memory: Double Data Rate Memory, an advanced version of SDRAM memory now used in most servers. DDR-SDRAM, sometimes called “SDRAM II,” can transfer data twice as fast as regular SDRAM because it can send and receive signals twice per clock cycle.

Dehumidification: The process of removing moisture from air.

DIMM: Dual In-line Memory Module, a small circuit board that holds memory
chips. A single in-line memory module (SIMM) has a 32-bit path to the memory
chips, whereas a DIMM has a 64-bit path.

Diversity: Two definitions of diversity exist: diverse routing and diversity from maximum.
• Systems that employ an alternate path for distribution are said to have diverse routing. In terms of an HVAC system, it might be used in reference to an alternate chilled water piping system. To be truly diverse (and of maximum benefit), both the normal and alternate paths must each be able to support the entire normal load.
• Diversity can also be defined as a ratio of maximum to actual for metrics such
as power loads. For example, the nominal power loading for a rack may be
based on the maximum configuration of components, all operating at their
maximum intensities. Diversity would take into account variations from the
maximum in terms of rack occupancy, equipment configuration, operational
intensity, etc., to provide a number that could be deemed to be more realistic.
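The second definition (diversity from maximum) is a simple ratio; a quick sketch with a hypothetical helper name and illustrative numbers:

```python
# Ratio of the actual load to the maximum-configuration (nameplate) load.
def diversity_ratio(actual_load_kw: float, max_load_kw: float) -> float:
    return actual_load_kw / max_load_kw

# A rack rated at 10 kW when fully configured but drawing 6 kW in practice:
print(diversity_ratio(6.0, 10.0))  # 0.6
```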

Domain: A group of computers and devices on a network that are administered as a unit with common rules and procedures. Within the Internet, domains are defined by the IP address. All devices sharing a common part of the IP address are said to be in the same domain.

Downflow: Refers to a type of air-conditioning system that discharges air downward, directly beneath a raised floor, commonly found in computer rooms and modern office spaces.

Down Time: A period of time during which a system is not operational, due to a
malfunction or maintenance.

DRAM: Dynamic Random Access Memory is the most commonly used type of
memory in computers. A bank of DRAM memory usually forms the computer's
main memory. It is called dynamic because it needs to be refreshed periodically to
retain the data stored within.

Edge Equipment: See Communication Equipment.

Efficiency: The ratio of the output to the input of any system. Typically used in relation to energy; less wasted energy means higher efficiency.

Equipment: Refers to, but is not limited to, servers, storage products, workstations, personal computers, and transportable computers. May also be referred to as electronic equipment or IT equipment.

Equipment Recommended Operation Range vs. Manufacturer’s Specifications: A manufacturer’s specifications generally reference a range in which a piece of equipment can function. A recommended range refers to the range in which equipment is most efficient and suffers the least wear and tear, extending its useful life.

Equipment Room: Data center or telecom central office room that houses computer
and/or telecom equipment. For rooms housing mostly telecom equipment, see
Telcordia GR-3028-CORE.

ESD: Electrostatic Discharge, the sudden flow of electricity between two objects at different electrical potentials. ESD is a primary cause of integrated circuit damage or failure.

Ethernet: A networking system that enables high-speed data communication over coaxial or twisted-pair cabling.

Evaporative Condenser: Condenser in which the removal of heat from the refrigerant is achieved by the evaporation of water from the exterior of the condensing surface, induced by the forced circulation of air and sensible cooling by the air.

Fan*: Device for moving air by two or more blades or vanes attached to a rotating shaft.
• Airfoil fan: fan with blades shaped to optimize airflow with less turbulence.
• Axial fan: fan that moves air in the general direction of the axis about which it rotates.

• Centrifugal fan: fan in which the air enters the impeller axially and leaves it substantially in a radial direction.
• Propeller fan: fan in which the air enters and leaves the impeller in a direction substantially parallel to its axis.

Fan Sink: A heat sink with a fan directly and permanently attached.

Fault Tolerance: The ability of a system to respond gracefully to an unexpected hardware or software failure while continuing to meet system performance specifications. There are many levels of fault tolerance, the lowest being the ability to continue operation in the event of a power failure. Many fault-tolerant computer systems mirror all operations; that is, every operation is performed on two or more duplicate systems, so if one fails, another can take over.

Firmware: Software that has been encoded onto read-only memory (ROM). Firmware is a combination of software and hardware. ROMs, PROMs, and EPROMs that have data or programs recorded on them are firmware.

Fluorinert™: A family of perfluorinated liquids offering unique properties ideally suited to the demanding requirements of electronics manufacturing, heat transfer, and other specialized applications. (Trademark of 3M™.)

Flux: Amount of some quantity flowing across a given area (often a unit area perpendicular to the flow) per unit time. Note: The quantity may be, for example, mass or volume of a fluid, electromagnetic energy, or number of particles.

Footprint: In information technology, a footprint is the amount of space a particular unit of hardware or software occupies. Marketing brochures frequently state that a new hardware control unit or desktop display has a “smaller footprint,” meaning that it occupies less space in the closet or on your desk. More recently, the term is used to describe microcomponents that take less space inside a computer.

Heat:
• Total Heat (Enthalpy): A thermodynamic quantity equal to the sum of the internal energy of a system plus the product of its pressure and volume:
h = E + pv
where h = enthalpy or total heat content, E = internal energy of the system, p = pressure, and v = volume. For the purposes of this book, h = sensible heat + latent heat.
• Sensible Heat: Heat that causes a change in temperature.
• Latent Heat: Change of enthalpy during a change of state.
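The relation h = E + pv can be evaluated directly once consistent units are chosen. A minimal sketch (hypothetical function name, illustrative values) using joules, pascals, and cubic meters:

```python
# h = E + p*v, per the Total Heat definition above (units: J, Pa, m^3).
def enthalpy(internal_energy_j: float, pressure_pa: float, volume_m3: float) -> float:
    return internal_energy_j + pressure_pa * volume_m3

# 1000 J of internal energy at atmospheric pressure in a 0.01 m^3 volume:
print(enthalpy(1000.0, 101_325.0, 0.01))  # ~2013 J
```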

Heat Exchanger*: Device to transfer heat between two physically separated fluids.

• Counterflow heat exchanger: heat exchanger in which fluids flow in opposite directions approximately parallel to each other.
• Cross-flow heat exchanger: heat exchanger in which fluids flow perpendicular to each other.
• Heat pipe heat exchanger: See heat pipe.
• Parallel-flow heat exchanger: heat exchanger in which fluids flow approximately parallel to each other and in the same direction.
• Plate heat exchanger or plate liquid cooler: thin plates formed so that liquid to be cooled flows through passages between the plates and the cooling fluid flows through alternate passages.

Heat Load per Product Footprint: Calculated by dividing the product’s measured heat load by the actual area covered by the base of the cabinet or equipment.
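A minimal sketch of the calculation, using illustrative numbers (a 12 kW rack on a roughly 0.6 m × 1.07 m base; the function name is hypothetical):

```python
# Measured heat load divided by the area of the cabinet base.
def heat_load_per_footprint(measured_load_w: float, footprint_m2: float) -> float:
    return measured_load_w / footprint_m2

print(heat_load_per_footprint(12_000.0, 0.6 * 1.07))  # ~18,700 W/m^2
```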

Heat Pipe: Also defined as a type of heat exchanger. Tubular closed chamber containing a fluid in which heating one end of the pipe causes the liquid to vaporize; the vapor travels to the other end, where it condenses and dissipates its heat. The condensate flows back toward the hot end by gravity or by means of a capillary wick.

Heat Sink: A component designed to transfer heat from an electronic device to a fluid. Processors, chipsets, and other high heat flux devices typically require heat sinks.

Hot Aisle-Cold Aisle:
• A common arrangement for the perforated tiles and the datacom equipment. Supply air is introduced into a region called the cold aisle.
• On each side of the cold aisle, equipment racks are placed with their intake sides facing the cold aisle. A hot aisle is the region between the backs of two rows of racks.
• The cooling air delivered is drawn into the intake side of the racks. This air heats up inside the racks and is exhausted from the back of the racks into the hot aisle.

HPC: High performance computing.

HPCC: High Performance Computing and Communications. High performance computing includes scientific workstations, supercomputer systems, high-speed networks, special-purpose and experimental systems, the new generation of large-scale parallel systems, and application and systems software, with all components well integrated and linked over a high-speed network.

Humidity*: Water vapor within a given space.
• Absolute Humidity: The mass of water vapor in a specific volume of a mixture of water vapor and dry air.
• Relative Humidity:
• Ratio of the partial pressure or density of water vapor to the saturation pressure or density, respectively, at the same dry-bulb temperature and barometric pressure of the ambient air.
• Ratio of the mole fraction of water vapor to the mole fraction of water vapor saturated at the same temperature and barometric pressure. At 100% relative humidity, the dry-bulb, wet-bulb, and dew-point temperatures are equal.

Humidity Ratio: The ratio of the mass of water vapor to the mass of dry air in a moist air sample. It is usually expressed as grams of water per kilogram of dry air (gw/kgda) or as pounds of water per pound of dry air (lbw/lbda).
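In practice the humidity ratio is often computed from the water-vapor partial pressure via the standard psychrometric relation W = 0.622·p_w / (p − p_w). A sketch with a hypothetical function name and illustrative values:

```python
# Humidity ratio (kg water vapor per kg dry air) from the vapor partial
# pressure and the total barometric pressure, both in kPa.
def humidity_ratio(p_w_kpa: float, p_total_kpa: float = 101.325) -> float:
    return 0.622 * p_w_kpa / (p_total_kpa - p_w_kpa)

# At 25°C and 50% RH, p_w is roughly 1.585 kPa, giving ~9.9 g/kg dry air.
print(humidity_ratio(1.585) * 1000)
```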

Humidification: The process of adding moisture to air or gases.

IEC: The International Electrotechnical Commission (IEC) is a global organization that prepares and publishes international standards for all electrical, electronic, and related technologies.

ITE: Information technology equipment.

ITRS: International Technology Roadmap for Semiconductors.

KVM: Keyboard-Video-Mouse switch. A piece of hardware that connects two or more computers to a single keyboard, monitor, and mouse.

LAN: Local Area Network. A computer network that spans a relatively small area.
Most LANs are confined to a single building or group of buildings. However, one
LAN can be connected to other LANs over any distance via telephone lines and/or
radio waves. A system of LANs connected in this way is called a wide area network
(WAN).

Leakage Current: Refers to the small amount of current that flows (or “leaks”)
from an output device in the off state caused by semiconductor characteristics.

Liquid Cooled System: Conditioned liquid (e.g., water, usually above the dew point) is channeled to the heat-producing electronic components and used to transport heat away from them; the heat is rejected via a heat exchanger (air-to-liquid or liquid-to-liquid) or carried to a cooling terminal device outside the rack.

Measured Power: The heat release in watts.

Memory: Internal storage areas in the computer. The term memory identifies data
storage that comes in the form of silicon, and the word storage is used for memory
that exists on tapes or disks. The term memory is usually used as a shorthand for
physical memory, which refers to the actual chips capable of holding data. Some
computers also use virtual memory, which expands physical memory onto a hard
disk.

Microprocessor: A chip that contains a CPU. The terms microprocessor and CPU
are quite often used interchangeably.

Midplane: Provides a fault-tolerant connection from the blade server to the server chassis and other components. The midplane replaces an average of nine cables typically required in rack and pedestal server configurations, eliminating excessive cabling.

Motherboard: The main circuit board of a computer. The motherboard contains the
CPU, BIOS, memory, serial and parallel ports, expansion slots, connectors for
attaching additional boards and peripherals, and the controllers required to control
those devices.

Nameplate Rating: Term used for rating according to nameplate (IEC 60950, under
clause 1.7.1): “Equipment shall be provided with a power rating marking, the
purpose of which is to specify a supply of correct voltage and frequency, and of
adequate current-carrying capacity.”

Non-Raised Floor: Facilities without a raised floor utilize overhead ducted supply
air to cool equipment. Ducted overhead supply systems are typically limited to a
cooling capacity of 100 W/ft2 (Telcordia 2001).

ODM: Original Design Manufacturer. Describes a company that designs equipment that is then marketed and sold to other companies under their own names. Most IT equipment ODMs design and build servers in Taiwan and China.

OEM: Original Equipment Manufacturer. Describes a company that manufactures equipment that is then marketed and sold to other companies under their own names.

Operating System: Operating systems perform basic tasks, such as recognizing input from the keyboard, sending output to the display screen, keeping track of files and directories on the disk, and controlling peripheral devices such as disk drives and printers.
• It ensures that different programs and users running at the same time do not interfere with each other. The operating system is also responsible for security, ensuring that unauthorized users do not access the system.

• Operating systems provide a software platform on top of which other programs, called application programs, run.

PCB (Printed Circuit Board): Board that contains layers of circuitry used for interconnecting the other components.

Point of Presence (PoP): A PoP is a place where communication services are available to subscribers. Internet service providers have one or more PoPs within their service area that local users dial into. This may be co-located at a carrier’s central office.

Power: Time rate of doing work, usually expressed in horsepower or kilowatts.

Processor: See CPU.

Pump*: Machine for imparting energy to a fluid, causing it to do work.
• Centrifugal pump: Pump having a stationary element (casing) and a rotary element (impeller) fitted with vanes or blades arranged in a circular pattern around an inlet opening at the center. The casing surrounds the impeller and usually has the form of a scroll or volute.
• Diaphragm pump: Type of pump in which water is drawn in and forced out of one or more chambers by a flexible diaphragm. Check valves let water into and out of each chamber.
• Positive displacement pump: Has an expanding cavity on the suction side and a decreasing cavity on the discharge side. Liquid flows into the pump as the cavity on the suction side expands and flows out of the discharge as the cavity collapses. Examples of positive displacement pumps include reciprocating pumps and rotary pumps.
• Reciprocating pump: A back-and-forth motion of pistons inside of cylinders provides the flow of fluid. Reciprocating pumps, like rotary pumps, operate on the positive displacement principle; that is, each stroke delivers a definite volume of liquid to the system.
• Rotary pump: Pump that delivers a constant volume of liquid regardless of the pressure it encounters. A constant volume is pumped with each rotation of the shaft, and this type of pump is frequently used as a priming pump.

Rack: Structure for housing electronic equipment. Differing definitions exist between the computing industry and the telecom industry.
• Computing Industry: A rack is an enclosed cabinet housing computer equipment. The front and back panels may be solid, perforated, or open depending on the cooling requirements of the equipment within.

• Telecom Industry: A rack is a framework consisting of two vertical posts mounted to the floor and a series of open shelves upon which electronic equipment is placed. Typically, there are no enclosed panels on any side of the rack.

Rack-Mounted Equipment: Equipment that is to be mounted in an EIA (Electronic Industries Alliance) or similar cabinet. These systems are generally specified in EIA units such as 1U, 2U, 3U, etc., where 1U = 1.75 in. (44.45 mm).
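Because 1U is exactly 1.75 in. (44.45 mm; the 44 mm figure above is rounded), available mounting height follows directly from the U count. A small sketch with a hypothetical function name:

```python
# Convert EIA rack units to millimeters (1U = 1.75 in = 44.45 mm).
def rack_units_to_mm(u: int) -> float:
    return u * 44.45

print(rack_units_to_mm(42))  # a 42U cabinet: ~1867 mm of mounting height
```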

Rack Power: Used to denote the total amount of electrical power being delivered to electronic equipment within a given rack. Often expressed in kilowatts (kW), this is often incorrectly equated with the heat dissipation from the electrical components of the rack.

Raised Floor: Also known as access floor. Raised floors are a building system that utilizes pedestals and floor panels to create a cavity between the building floor slab and the finished floor where equipment and furnishings are located. The cavity can be used as an air distribution plenum to provide conditioned air throughout the raised floor area. The cavity can also be used for routing of power/data cabling infrastructure.

RAM: Random Access Memory, a configuration of memory cells that hold data for
processing by a computer's processor. The term random derives from the fact that
the processor can retrieve data from any individual location, or address, within
RAM.

Rated Voltage: The supply voltage as declared by the manufacturer.

Rated Voltage Range: The supply voltage range as declared by the manufacturer.

Rated Current: The rated current is the absolute maximum current that is required
by the unit from an electrical branch circuit.

Rated Frequency: The supply frequency as declared by the manufacturer.

Rated Frequency Range: The supply frequency range as declared by the manufacturer, expressed by its lower and upper rated frequencies.

Redundancy: “N” represents the number of pieces required to satisfy normal conditions. Redundancy is often expressed relative to the baseline of “N”; some examples are “N+1,” “N+2,” “2N,” and “2(N+1).” A critical decision is whether “N” should represent just normal conditions or whether “N” includes full capacity during off-line routine maintenance. Facility redundancy can apply to an entire site (backup site), systems, or components. IT redundancy can apply to hardware and software.
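The installed-unit counts implied by the common schemes can be tabulated; a hypothetical helper, assuming N units carry the normal load:

```python
# Number of installed units for a given redundancy scheme, where n units
# are required to satisfy normal conditions.
def installed_units(n: int, scheme: str) -> int:
    return {
        "N": n,
        "N+1": n + 1,
        "N+2": n + 2,
        "2N": 2 * n,
        "2(N+1)": 2 * (n + 1),
    }[scheme]

# For a load needing 4 CRAC units:
print(installed_units(4, "N+1"))     # 5
print(installed_units(4, "2(N+1)"))  # 10
```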

Reliability: Reliability is a percentage value representing the probability that a piece of equipment or system will be operable throughout its mission duration. Values of 99.9 percent (three 9s) and higher are common in data and communications equipment areas. For individual components, the reliability is often determined through testing. For assemblies and systems, reliability is often the result of a mathematical evaluation based on the reliability of individual components and any redundancy or diversity that may be employed.
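The mathematical evaluation mentioned above usually combines component reliabilities in series (all must work) and in parallel (redundant units). A sketch with hypothetical helper names:

```python
# Series: product of component reliabilities (every component must work).
def series_reliability(*rel: float) -> float:
    result = 1.0
    for r in rel:
        result *= r
    return result

# Parallel pair: the system fails only if both redundant units fail.
def parallel_pair(rel: float) -> float:
    return 1.0 - (1.0 - rel) ** 2

print(series_reliability(0.999, 0.999))  # two units in series: ~0.998
print(parallel_pair(0.999))              # redundant pair: ~0.999999
```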

Room Load Capacity: The point at which the equipment heat load in the room no
longer allows the equipment to run within the specified temperature requirements of
the equipment. The load capacity is influenced by many factors, the primary one
being the room’s theoretical capacity. Other factors, such as the layout of the room
and load distribution, also influence the room load capacity.

Room Theoretical Capacity: The capacity of the room based on the mechanical
room equipment capacity. This is the sensible tonnage of the mechanical room for
supporting the computer or telecom room heat loads.

Router: A device that connects any number of LANs. Routers use headers and a
forwarding table to determine where packets (pieces of data divided up for transit)
go, and they use Internet Control Message Protocol (ICMP) to communicate with
each other and configure the best route between any two hosts. Very little filtering
of data is done through routers. Routers do not care about the type of data they
handle.

Semiconductor: A material that is neither a good conductor of electricity nor a good insulator. The most common semiconductor materials are silicon and germanium. These materials are then doped to create an excess or lack of electrons and used to build computer chips.

Server: A computer that provides some service for other computers connected to it
via a network. The most common example is a file server, which has a local disk and
services requests from remote clients to read and write files on that disk.

SRAM: Static RAM is random access memory (RAM) that retains data bits in its
memory as long as power is being supplied. SRAM provides faster access to data
and is typically used for a computer's cache memory.

STC: Sound Transmission Class. This is an acoustical rating for the reduction in
sound of an assembly. It is typically used to denote the sound attenuation properties
of building elements such as walls, floors, and ceilings. The higher the STC, the
better the sound-reducing performance of the element.

Temperature:
• Dew Point: The temperature at which water vapor has reached the saturation
point (100% relative humidity).
• Dry Bulb: The temperature of air indicated by a thermometer.
• Wet Bulb: The temperature indicated by a psychrometer when the bulb of
one thermometer is covered with a water-saturated wick over which air is
caused to flow at approximately 4.5 m/s (900 ft/min) to reach an equilibrium
temperature of water evaporating into air, where the heat of vaporization is
supplied by the sensible heat of the air.

Thermosyphon: An arrangement of tubes in which circulation of a liquid is driven by density differences between its heated and cooled regions (natural convection), without a mechanical pump.

Tonnage: The unit of measure used in air conditioning to describe the heating or cooling capacity of a system. One ton of refrigeration represents the amount of heat needed to melt one ton (2000 lb) of ice in 24 hours; 12,000 Btu/h equals one ton.
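A common task is converting an electrical/heat load in kW to cooling tons, using the 12,000 Btu/h definition above and the standard kW-to-Btu/h conversion. A sketch with a hypothetical function name:

```python
BTU_PER_HR_PER_TON = 12_000.0
BTU_PER_HR_PER_KW = 3412.14  # approximate conversion factor

def cooling_tons(load_kw: float) -> float:
    return load_kw * BTU_PER_HR_PER_KW / BTU_PER_HR_PER_TON

print(cooling_tons(30.0))  # a 30 kW rack load: ~8.5 tons
```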

Upflow: A type of air-conditioning system that discharges air upward, into an overhead duct system.

Uptime:
• Uptime is a computer industry term for the time during which a computer is
operational. Downtime is the time when it isn’t operational.
• Uptime is sometimes measured in terms of a percentile. For example, one
standard for uptime that is sometimes discussed is a goal called five 9s—that
is, a computer that is operational 99.999 percent of the time.
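The “five 9s” figure translates directly into allowable annual downtime; a small sketch (hypothetical function name):

```python
# Annual downtime implied by a given uptime fraction.
def annual_downtime_minutes(uptime_fraction: float) -> float:
    minutes_per_year = 365 * 24 * 60  # 525,600
    return (1.0 - uptime_fraction) * minutes_per_year

print(annual_downtime_minutes(0.99999))  # five 9s: ~5.3 minutes per year
```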

Utility Computing: The vision of utility computing is to access information services in a fashion similar to those provided by telephone, cable TV, or electric utilities.
• Utility computing is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed and charges them for specific usage rather than a flat rate. Like other types of on-demand computing (such as grid computing), the utility model seeks to maximize the efficient use of resources and/or minimize associated costs.

Valve*: A device to stop or regulate the flow of fluid in a pipe or a duct by throttling.

Velocity:
• Vector quantity: Denotes the simultaneous time rate of distance moved and
the direction of a linear motion.

• Face velocity: Velocity obtained by dividing the volumetric flow rate by the
component face area.

Ventilation*: The process of supplying or removing air by natural or mechanical means to or from any space. Such air may or may not have been conditioned.

Virtual: Common alternative to logical, often used to refer to the artificial objects
(such as addressable virtual memory larger than physical memory) created by a
computer system to help the system control access to shared resources.

Virtual Machine: A self-contained operating environment that behaves as if it is a separate computer. For example, Java applets run in a Java virtual machine (VM) that has no access to the host operating system. This design has two advantages:
• System Independence: A Java application will run the same in any Java VM, regardless of the hardware and software underlying the system.
• Security: Because the VM has no contact with the operating system, there is little possibility of a Java program damaging other files or applications.
The second advantage, however, has a downside. Because programs running in a VM are separate from the operating system, they cannot take advantage of special operating system features.

Virtual Server:
• A configuration of a World Wide Web server that appears to clients as an
independent server but is actually running on a computer that is shared by any
number of other virtual servers. Each virtual server can be configured as an
independent Web site, with its own hostname, content, and security settings.
• Virtual servers allow Internet service providers to share one computer
between multiple Web sites while allowing the owner of each Web site to use
and administer the server as though they had complete control.

Virtual Private Network (VPN): The use of encryption in the lower protocol layers to provide a secure connection through an otherwise insecure network, typically the Internet. VPNs are generally cheaper than real private networks using private lines but rely on having the same encryption system at both ends. The encryption may be performed by firewall software or possibly by routers.

Wafer: Any thin but rigid plate of solid material, especially of discoidal shape; a
term used commonly to refer to the thin slices of silicon used as starting material for
the manufacture of integrated circuits.

References
• http://www.computer-dictionary-online.org/
• http://whatis.techtarget.com/
• http://www.linktionary.com/linktionary.html
• ASHRAE. Terminology of Heating, Ventilation, Air Conditioning, and Refrigeration.
• ASHRAE. Thermal Guidelines for Data Processing Environments.
• OSHA. 1996. 29 CFR 1910.95: Occupational Noise Exposure.
• European Council. 2003. Directive 2003/10/EC of the European Parliament and of the Council of 6 February 2003 on the minimum health and safety requirements regarding the exposure of workers to the risks arising from physical agents (noise).
• Telcordia. 2001. GR-3028-CORE, Thermal Management in Telecommunications Central Offices.

Appendix B
Additional Trend Chart Information/Data
ASHRAE UPDATED AND EXPANDED
POWER TREND CHART—ADDITIONAL DATA
Additional versions of and information related to the chart are provided for reference:

• Figure B-1 provides the complete updated and expanded trend chart in SI
units.
• Table B-1 provides the trend data in tabular form.
• Figure B-2 provides the complete updated and expanded trend chart without a
logarithmic y-axis scale to better understand the rate of change of the trends.
For this chart, the lines represent the median values of the updated and
expanded power trend chart bands shown in Chapter 3.
• Figure B-3 provides the complete updated and expanded trend chart in SI
units and without a logarithmic y-axis scale. For this chart, the lines represent
the median values of the updated and expanded power trend chart bands
shown in Figure B-1.
• Figure B-4 provides the trend chart expressed in kW per rack.
• Figure B-5 provides the trend chart expressed in watts per square foot.
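Because the charts are published in both I-P and SI units, a small conversion helper is handy when comparing Figure B-5 against the SI versions. A minimal sketch (the function names are ours, not from the book):

```python
# Convert heat density between I-P and SI units for the trend charts.
# 1 ft^2 = 0.09290304 m^2, so 1 W/ft^2 = 10.7639... W/m^2.

FT2_PER_M2 = 10.763910416709722  # square feet in one square metre

def wpsf_to_wpsm(watts_per_sqft: float) -> float:
    """Watts per square foot -> watts per square metre."""
    return watts_per_sqft * FT2_PER_M2

def wpsm_to_wpsf(watts_per_sqm: float) -> float:
    """Watts per square metre -> watts per square foot."""
    return watts_per_sqm / FT2_PER_M2

print(round(wpsf_to_wpsm(100.0), 1))  # 100 W/ft^2 ≈ 1076.4 W/m^2
```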




Figure B-1 New ASHRAE updated and expanded power trend chart (SI units).





Figure B-2 New ASHRAE updated and expanded power trend chart (non-log
scale, I-P units).



Figure B-3 New ASHRAE updated and expanded power trend chart (non-log
scale, SI units).



Figure B-4 New ASHRAE updated and expanded power trend chart (non-log
scale, kW per rack).



Figure B-5 New ASHRAE updated and expanded power trend chart (non-log
scale, watts per square foot).


Appendix C
Electronics, Semiconductors, Microprocessors, ITRS
INTERNATIONAL TECHNOLOGY ROADMAP
FOR SEMICONDUCTORS (ITRS)
The International Technology Roadmap for Semiconductors (ITRS) is an
assessment of the semiconductor technology requirements. The objective of the
ITRS is to ensure advancements in the performance of integrated circuits. This
assessment, called roadmapping, is a cooperative effort of the global industry manu-
facturers and suppliers, government organizations, consortia, and universities.
The ITRS identifies the technological challenges and needs facing the semi-
conductor industry over the next 15 years. It is sponsored by the European Electronic
Component Association (EECA), the Japan Electronics and Information Technol-
ogy Industries Association (JEITA), the Korean Semiconductor Industry Associa-
tion (KSIA), the Semiconductor Industry Association (SIA), and the Taiwan
Semiconductor Industry Association (TSIA).
International SEMATECH is the global communication center for this activity.
The ITRS team at International SEMATECH also coordinates the USA region
events.

SEMICONDUCTORS
When considering the whole facility or even just a datacom room within the
facility, semiconductors and chips seem like a tiny element and of little importance
or relevance. As a result, it is common for facilities-centric people to gloss over or
ignore trends and information about semiconductors and chips. However, semicon-
ductors and chips have a major impact on the load of a datacom facility and are a crit-
ical source for predicting the loads, especially future loads.
A source of information regarding trends at the chip level is Moore’s law
(Figure C-1). Since chips are the primary components used in datacom equipment,
chip trends can be considered an early indicator of future trends in that
equipment.
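Moore’s law describes exponential growth at a fixed doubling cadence. A minimal sketch of that projection follows; the 24-month doubling period and the starting point are illustrative assumptions, not ITRS data:

```python
# Sketch of the exponential growth behind Moore's law: transistor count
# doubling at a fixed cadence. The 2-year doubling period and the starting
# point below are illustrative assumptions, not ITRS figures.

def projected_transistors(base_count: float, base_year: int,
                          year: int, doubling_years: float = 2.0) -> float:
    """Project transistor count assuming doubling every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# e.g., from an assumed 100 million transistors in 2004, a 2-year doubling
# gives ~800 million by 2010 (three doublings).
print(projected_transistors(100e6, 2004, 2010))
```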



Source: International Technology Roadmap for Semiconductors (2003)

Figure C-1 Moore’s law.



Microprocessor power is the total of dynamic and leakage power. Dynamic
power is dissipated by the circuits as they switch, while leakage power is dissipated
even when the transistors are turned off. Microprocessor power is a function of
several variables, all of which have been changing with time. Previously, dynamic
power was the major component of total power. In recent years, leakage power has
increased and will soon equal dynamic power. Dynamic power is proportional
to frequency, capacitance, and the square of voltage.
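The proportionality just described is commonly written as P ≈ a·C·V²·f. A minimal sketch of that relation; the activity factor `a` and the sample numbers are illustrative assumptions, not roadmap values:

```python
# Dynamic power scales with capacitance, the square of supply voltage, and
# switching frequency: P_dyn ~ a * C * V^2 * f. The activity factor `a`
# and the sample component values below are illustrative assumptions.

def dynamic_power(c_farads: float, volts: float, freq_hz: float,
                  activity: float = 1.0) -> float:
    """Approximate CMOS dynamic (switching) power in watts."""
    return activity * c_farads * volts ** 2 * freq_hz

# Halving the voltage cuts dynamic power by 4x at the same frequency:
p_full = dynamic_power(1e-9, 1.2, 3e9)  # 1 nF switched at 1.2 V, 3 GHz
p_half = dynamic_power(1e-9, 0.6, 3e9)
print(p_full / p_half)  # 4.0
```

The quadratic dependence on voltage is why supply-voltage reduction has been the main lever for containing dynamic power.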
The net result of all these effects is summarized in the International Technology
Roadmap for Semiconductors, the output of a collaborative effort by many industry
participants. Microprocessors may be classified as high performance (typically 64
bit) or cost-performance (typically 32 bit):

Year                    2004  2005  2006  2007  2010  2013  2016
High Performance (W)     160   170   180   190   218   251   288
Cost Performance (W)      85    92    98   104   120   138   158
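The tabulated projections lend themselves to a quick growth-rate check. A sketch using the table’s own values (the helper name is ours):

```python
# The ITRS power projections tabulated above, with a helper reporting the
# implied compound annual growth rate between any two roadmap years.

ITRS_POWER_W = {  # year -> (high performance, cost performance) watts
    2004: (160, 85), 2005: (170, 92), 2006: (180, 98), 2007: (190, 104),
    2010: (218, 120), 2013: (251, 138), 2016: (288, 158),
}

def cagr(start_year: int, end_year: int, column: int = 0) -> float:
    """Compound annual growth rate of projected power (column 0 = high perf)."""
    p0 = ITRS_POWER_W[start_year][column]
    p1 = ITRS_POWER_W[end_year][column]
    return (p1 / p0) ** (1 / (end_year - start_year)) - 1

# High-performance power grows only a few percent per year over the roadmap:
print(f"{cagr(2004, 2016):.1%}")  # 5.0%
```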

BIPOLAR AND CMOS OVERVIEW


Prior to the early 1990s, the prevalent semiconductor technology for digital
applications was bipolar. The combination of increased power dissipation and
increased packaging density throughout the 1980s resulted in the module heat flux
requiring more and more advanced cooling technologies, the majority of which were
based on liquid-cooling techniques. However, the advent of CMOS technology in
the early 1990s reset the module heat flux trend and allowed the cooling to be
accomplished much more simply through convective cooling using airflow.
Driven again by the ever-increasing need for more performance, and the resulting
technology compaction, air-cooled designs have been extended through more
advanced thermal modeling capabilities, which yielded more sophisticated techniques
to dissipate the heat from the source components while still allowing the
facility to remain air cooled (although the quantity of airflow also increased).
As shown in Figure C-2, the CMOS technologies have module heat flux values
approaching those that were experienced toward the end of the bipolar technologies.
These values are also approaching the limitations of an air-cooled system and
making the industry more open to pursuing other cooling technologies, including
reverting back to various liquid-cooled or hybrid air- and liquid-cooled solutions.



Source: Courtesy of Roger Schmidt, IBM

Figure C-2 CMOS impact on heat flux.


Appendix D
Micro-Macro Overview of Datacom Equipment Packaging
INTRODUCTION
Many different definitions and terms are used to describe the various electronic
components associated with a datacom facility. This section is provided as a
reference to define those terms and the hierarchy and groupings associated with
them. In the context of this book, the components are described with a bias
toward their cooling considerations.

PROCESSOR
The processor is the primary source of heat generation within a piece of elec-
tronic equipment with surface temperatures rising to greater than 100 degrees
Celsius (212 degrees Fahrenheit). The processor typically has some means of inte-
gral cooling to transport the heat away from the chip surface. This may be
liquid based (such as a heat pipe, which is common in laptops) or a heat sink,
typically with fan assistance.
Another term used for this component is CPU (central processing unit).
Figure D-1 shows a typical processor, and Figure D-2 shows a processor with the
heat sink / fan assembly mounted to it.

MEMORY
The memory can be thought of as the working interface between the data storage
device and the processor. These memory chips are typically installed on cards with
multiple chips per card (Figure D-3). These cards have edge connectors that allow
them to be installed in sockets mounted on the board.

BOARD
The board (or motherboard) provides interconnections between the various
components (e.g., processors, memory, input/output, etc.). Typically, the boards
themselves are fairly thin and have printed circuitry, small components, and sockets.
A typical board layout is shown in Figure D-4.



Figure D-1 Processor or CPU. Figure D-2 Processor with heat sink/fan
assembly.

Figure D-3 Typical memory chip.

SERVERS

Server Definition
In addition to the board described above, the other major components that make
up the packaging include the main storage device (hard drive), supplementary stor-
age devices for portable media types (e.g., CD-ROM drives, floppy disk drives, etc.),
input/output connectors (e.g., video graphics cards, sound cards, etc.), and the power
supply. These components are all collocated with the board in a single housing and
that housing is colloquially referred to as the server.



Source: www.techtutorials.com

Figure D-4 Typical circuit board or motherboard.

Also a part of the server packaging is the cooling system. The cooling system
for the majority of servers involves air movement and utilizes one or more fans
mounted inside of the packaging to draw air in from the surrounding environment
through and around the server components. The air is channeled within the server
to transport the heat that is generated by the server components through convection
before the server fans exhaust the warmer air back out to the surrounding environ-
ment.
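The airflow the server fans must move follows from the sensible heat the air picks up. A sketch using the common I-P rule of thumb for air at sea level; the sample wattage and temperature rise are illustrative assumptions:

```python
# Airflow a server's fans must move to carry away its heat by convection,
# using the standard sensible-heat relation for sea-level air:
#   CFM ~ 3.16 * watts / delta_T_F  (delta_T = air rise across the server)
# The 3.16 factor folds in air density and specific heat; the sample
# wattage and temperature rise below are illustrative assumptions.

def required_cfm(watts: float, delta_t_f: float) -> float:
    """Airflow (cubic feet per minute) to remove `watts` at a given air rise."""
    return 3.16 * watts / delta_t_f

# A 500 W server designed for a 25 deg F air temperature rise:
print(round(required_cfm(500, 25), 1))  # 63.2 CFM
```

Doubling the allowed temperature rise halves the required airflow, which is why equipment designers trade inlet-to-outlet rise against fan power and acoustics.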

Rack-Mounted Servers

Recent advances in interconnect technology enabled servers to be configured as
smaller boxes that allow for a rack-mounted assembly (see the subsequent section
for the rack definition). These rack-mount servers are designed to have very
little space around them and can be packed into the rack framework at a very high
density.
The server sizes in a rack-mounted configuration are expressed in terms of units
(one unit or 1U represents 1.75 inches or 44.45 mm of vertical height within a rack).
Figure D-5 shows a schematic of some of the server sizes, from towers to 1U servers
readily available today.
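The rack-unit arithmetic above is simple to encode. A minimal sketch (the function names are ours):

```python
# Rack heights are quoted in units ("U"), where 1U = 1.75 in. = 44.45 mm.

U_MM = 44.45  # vertical height of one rack unit, in millimetres

def units_to_mm(u: int) -> float:
    """Vertical rack space in millimetres occupied by `u` rack units."""
    return u * U_MM

def units_fitting(height_mm: float) -> int:
    """Whole rack units that fit in a given vertical height."""
    return int(height_mm // U_MM)

print(round(units_to_mm(42), 1))  # 1866.9 mm for a full 42U rack
print(units_fitting(2000.0))      # 44
```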
As racks become more standardized, rack-mounted servers are also standardizing
their airflow patterns, with many of today's servers drawing air in at the front
face and exhausting hot air at the rear.



Figure D-5 Various server packaging.

Compute Servers
Servers used for computing are available in rack-mount and custom configurations.
As mentioned above, compute servers housed within standard racks are typically
described in terms of unit heights (1U, 2U, 3U, etc.). Typical dimensions
and sizes for standard rack-mount compute servers are shown in Figure D-6, and a
typical custom compute server is shown in Figure D-7.

Storage Servers
Storage servers vary in their configurations and sizes based on the manufacturer.
Similar to compute servers, depending on the scale of deployment required, the
configuration may be a standard rack-mounted box with varying unit height (as
depicted in Figure D-6) or it can be a custom stand-alone piece of equipment.
Figure D-8 shows some of the typical custom storage server sizes.

Blade Servers
Even the packaging density of 1U and 2U rack-mounted compute servers has
not kept pace with the ever-increasing demands of the marketplace. A new type of



Figure D-6 Typical compute server rack and packaging.



Figure D-7 Typical custom compute server rack.



Figure D-8 Typical custom storage server rack.



compute server packaging, blade servers, entered the market in May 2001 and threat-
ens to spark a period of rapid change in the equipment installed in datacom facilities.
Each blade consists of a board (complete with the processor, memory, input/output,
etc.). The only other server component included in the packaging is the hard
drive, and all of these components are contained within a minimal amount of
packaging, which results in a blade-like form factor (Figure D-9).
Server components that had previously been packaged inside the tower/pedestal
and rack mount boxes, such as fans, power supplies, etc., are still required, but these
components are now located within a chassis that is designed to house multiple blade
servers in a vertical side-by-side configuration (Figure D-10). These chassis are
typically 3U to 7U tall and can house up to 24 blades.
Blade servers initially used low power processors and compensated for low
individual performance with greatly increased density. High performance blades are
now available, some with multiple (first two, now four) processors per blade.
The blade server equipment is the result of technology compaction, which
allows for a greater processing density in the same equipment volume. The greater
processing density also results in a greater power density and a greater heat density.
This heat density increase has sparked the need in the industry to address the cooling
of high density heat loads.
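The density arithmetic implied above (42U racks, 3U to 7U chassis, up to 24 blades per chassis) can be sketched as follows; the per-blade wattage is an illustrative assumption, not a figure from the trend charts:

```python
# Rack-level blade density implied by the chassis sizes described above:
# a 42U rack filled with identical chassis. The 250 W per-blade power in
# the example is an illustrative assumption.

RACK_UNITS = 42

def blades_per_rack(chassis_u: int, blades_per_chassis: int) -> int:
    """Blades in a rack packed with identical chassis."""
    return (RACK_UNITS // chassis_u) * blades_per_chassis

def rack_heat_kw(chassis_u: int, blades_per_chassis: int,
                 watts_per_blade: float) -> float:
    """Total rack heat load in kW for the configuration."""
    return blades_per_rack(chassis_u, blades_per_chassis) * watts_per_blade / 1000

print(blades_per_rack(7, 24))      # 144 blades in six 7U chassis
print(rack_heat_kw(7, 24, 250.0))  # 36.0 kW at an assumed 250 W per blade
```

Loads of this magnitude are precisely the high density heat loads that the text says the industry must now address.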

Server Airflow
ASHRAE’s Thermal Guidelines for Data Processing Environments (ASHRAE
2004) introduced standardized nomenclature for defining the cooling airflow paths
for server equipment. This information was to be referenced in equipment manufacturers’
literature via a standard thermal report (also introduced in the same
publication) as an attempt to provide meaningful data to bridge the gap between the
equipment manufacturer’s data and the requirements of the facility cooling system.
The diagrammatic overview of that nomenclature can be seen in Figure D-11.

Figure D-9 Typical blade servers.

Figure D-10 Typical blade server chassis.

Source: Thermal Guidelines for Data Processing Environments (ASHRAE 2004)

Figure D-11 ASHRAE TC 9.9 definitions for airflow.

RACK

A rack can be thought of as the standard framework within which the servers
are located. Racks are defined differently by the telecommunications and the data
processing industries, but essentially, both definitions are related to this framework.
In the data processing industry, racks owned by multiple companies are often
collocated in a single data center, and, therefore, there is an increased need for secu-
rity. In such environments, the racks may end up being cabinets or enclosures that
have lockable doors on the front and rear to prevent unauthorized access to the equip-
ment within (Figure D-12).



Figure D-12 Typical data processing rack configuration.

In the telecommunications industry, the rack is more often a completely open
framework with the equipment within the rack often exceeding the extent of the rack
itself (Figure D-13). In some instances, the equipment is mounted on a shelf that is
affixed to the rack framework. Typical dimensional data for a telecom rack is shown
in Figure D-14.
A standard 19 in. server rack has a footprint around 2 ft wide by 3 ft 6 in.
deep and stands around 6 ft 4 in. tall (Figure D-6). This height allows 42 units
of vertical equipment to be housed within it. Telecom racks also accommodate 42U,
but their standard height is often slightly taller, at 7 ft, to accommodate integral
wire management. In addition, telecom racks come in two nominal widths, 19 in.
and 23 in., and can be fitted with equipment shelves of varying depths and heights
(Figure D-14).
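Rack footprint and per-rack load connect directly to the watts-per-square-foot metric used in Appendix B (Figure B-5). A sketch of that conversion; the 2 ft by 3.5 ft footprint comes from the text, while the aisle-area multiplier is an illustrative assumption:

```python
# Converting a per-rack load to the watts-per-square-foot metric of the
# trend charts. The footprint (2 ft x 3.5 ft) is the standard 19 in.
# server rack size from the text; the 3x multiplier for aisles and
# clearances is an illustrative assumption.

RACK_FOOTPRINT_FT2 = 2.0 * 3.5  # standard 19 in. server rack footprint

def watts_per_sqft(kw_per_rack: float, area_multiplier: float = 3.0) -> float:
    """Heat density over the rack footprint plus assumed aisle allowance."""
    return kw_per_rack * 1000.0 / (RACK_FOOTPRINT_FT2 * area_multiplier)

# A 10 kW rack spread over ~3x its own footprint:
print(round(watts_per_sqft(10.0), 1))  # 476.2 W/ft^2
```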
Rack-mounted equipment has become the industry standard with servers, stor-
age equipment, telecommunications equipment, etc., all configured to be able to
occupy multiple units within the standard 42U racks.
Modern racks have a wide variety of built-in provisions for power and data/
communication management, which provides for an organized installation of the
myriad cabling connections required by the servers. With the emergence of high
density loads, a number of rack manufacturers are offering cooling solutions
within their racks as well.



Figure D-13 Typical telecommunications rack configuration.

Figure D-14 Standard telecommunications rack.



ROWS
As mentioned earlier, the configuration at a rack level is becoming more and
more standardized to reflect a front-to-back airflow configuration. This airflow
pattern also allows for multiple racks to be placed side by side with little or no clear-
ances to form a row of racks (Figure D-15).
The length of a row is limited primarily by connectivity constraints between
the equipment. Rows are often placed with their fronts facing each other, allowing
for the hot-aisle/cold-aisle cooling method that is typically deployed (see
chapter 4 for more information).

TECHNICAL SPACE (RAISED FLOOR)


The rows of racks are typically located on top of an elevated platform consisting
of removable tiles on top of a gridded support system. This assembly is known as
a raised floor. The floor cavity that is created by the raised floor can be used for one
or more of the following:

• Airflow Distribution—The floor acts as a supply air plenum and is pressurized
with cool air from CRAC units that are also located on the raised floor.
This cool air is then introduced into the space in front of the racks (i.e., on the
air inlet side of the racks) through perforated floor tiles.

Figure D-15 Typical datacom facility row alignment.



• Water Piping Distribution—For liquid-cooled applications to the racks
themselves or for chilled water piping to the CRAC units, the floor acts as a
physical barrier between the piping and the racks, mitigating the risks associated
with any leaks.
• Electrical Power Distribution—Conduits beneath the floor bring clean
power downstream of an uninterruptible power source (UPS) and power dis-
tribution unit (PDU) to the rack level.
• Fiber/Data Distribution—Fiber/data cabling is typically routed overhead,
but it could be run in the raised floor cavity in certain applications.

The height of the raised floor varies based on its purpose as well as the
physical constraints of the building. Raised floors range from as little as 6 to 12
inches up to 48 inches for specific applications. The raised floor also often
houses some of the local power and cooling support equipment downstream of the
central plant area.
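Plenum air reaches the racks through the perforated tiles described above, and tile counts can be estimated from the rack load. A sketch; the 500 CFM per-tile delivery and the sample load are illustrative assumptions, since actual tile delivery depends on plenum pressure and tile type:

```python
# Estimating how many perforated floor tiles a rack needs. The sensible-
# heat relation CFM ~ 3.16 * watts / delta_T_F sizes the airflow; the
# 500 CFM per-tile capacity and the sample rack load are illustrative
# assumptions.
import math

def tiles_needed(rack_watts: float, delta_t_f: float,
                 cfm_per_tile: float = 500.0) -> int:
    """Perforated tiles required to deliver a rack's cooling airflow."""
    cfm = 3.16 * rack_watts / delta_t_f  # sensible-heat airflow relation
    return math.ceil(cfm / cfm_per_tile)

print(tiles_needed(8000, 25))  # an 8 kW rack at a 25 deg F rise -> 3 tiles
```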

DATACOM FACILITY
The datacom facility is composed not only of the building itself but also of
the power and cooling support components that must be located somewhere on the
site. As covered in chapter 4, the support equipment location may vary based on a
preference for placing more power and cooling support equipment inside versus
outside the building.
The facility may also have areas allocated for administration. Figure D-16
shows a typical facility setup.




Figure D-16 Overview of a typical datacom facility.


Index

1U 16, 21, 23, 69, 89, 90
2U 16, 21, 23, 69, 90
A
absolute humidity 66
accurate 12, 17
acoustics 57
air 2, 13, 29-36, 38, 39, 41-43, 45-48, 52, 53, 57-59, 61-67, 69, 71-73, 85, 89, 98
air and liquid cooling 57
air cooling 2, 29, 30, 36, 38, 41, 42, 46, 57
air inlet temperature 58
air outlet temperature 58
air-conditioning 31, 48, 57, 59, 62, 71
air-cooled blade 58
air-cooled board 58
air-cooled chip 58
air-cooled data center 58
air-cooled equipment 58
air-cooled rack or cabinet 57
air-cooled server 58
air-cooled system 43, 58, 85
airflow 31, 33, 34, 36, 85, 89, 94, 95, 98
airfoil fan 63
air-handling 38, 39, 46, 48
ancillary 7
ANSI 58, 61
application 2, 4, 16, 25, 33, 36, 59, 60, 65, 68, 72
architects 1, 3
aspects 2, 4, 10, 39
availability 39, 48, 60, 61
average 7, 10, 11, 67
axial fan 63
B
backplane 58
baffles 34
bandwidth 13, 58
bay 58



BIOS 59, 67
bipolar semiconductor technology 59, 60
blade server 13, 21, 59, 67, 94, 95
blower 59
board 41, 58, 59, 62, 67, 68, 87-89, 94
broad 11
Btu 59, 71
budget 13
C
cabinet 25, 29, 36, 41, 42, 52, 57-59, 65, 68, 69
calculate 12
capability 4, 6, 42
capacity 2, 4, 6, 7, 26, 27, 36, 42, 46, 48, 67, 69-71
central office 33, 59, 63, 68
centrifugal fan 64
centrifugal pump 68
CFD 39, 59
CFM 59
chassis 59, 67, 94, 95
chilled water 7, 38, 39, 43, 45-48, 59, 62, 99
chilled water system 46, 47, 59
chillers 11, 13, 29
circuit 45, 58-60, 62, 63, 67-69, 89
classes 2
client 13, 43, 45, 60
cluster 60
CMOS 59, 60, 85, 86
CMOS electronic technology 60
cold plate 60
cold-aisle 30, 34, 98
collaboration 2, 12
communication 12, 16, 21, 23, 24, 26, 53, 60, 61, 63, 68, 83, 96
communication equipment 16, 21, 23, 26, 60, 61, 63
compaction 6, 13, 14, 25, 85, 94
company 4, 6, 7, 12, 67
compatibility 45, 46



component 15, 29, 30, 41, 45, 56, 58, 65, 66, 72, 83, 85, 87, 94
compounding 13, 14
compressors 39
compute 4, 5, 7, 16, 21-23, 52, 60, 90-92, 94
compute server 7, 60, 90-92, 94
compute-intensive 60
computer 2, 12, 25, 26, 31, 36, 42, 43, 46-48, 51-53, 59-64, 66-73
computer system availability 60
computer system reliability 60
computing industry 68
condensation 36, 43, 45, 46
condenser 46, 61, 63
conditioned air 29, 41, 57, 58, 61, 69
configuration 12, 17, 21, 30-32, 35, 45, 62, 69, 72, 89, 90, 94, 96-98
consensus 15
consortium 15, 17, 19
constraints 11, 98, 99
construction 1, 2, 12, 14, 26, 52
consumption 14, 17, 19, 26
containment 45, 46
continuous 6, 48
contractors 1
convection 29, 41, 89
conversion 45
cooling 1-4, 7, 10-13, 25-27, 29-31, 33, 36-39, 41-46, 48, 49, 51-53, 55, 57-59, 61,
63, 65-68, 71, 85, 87, 89, 94-96, 98, 99
cooling tower 61
core network or equipment 60, 61
cost 10, 13, 15, 46, 48, 85
cost benefit 13
counterflow heat exchanger 65
CPU 29, 41, 58, 61, 67, 68, 87, 88
CRAC 7, 31-34, 36, 38, 39, 61, 98, 99
cross-flow heat exchanger 65

D
data 1, 11-13, 17, 19, 23, 25, 31, 33, 39, 45, 46, 48, 51-53, 55-64, 67, 69, 70, 73, 87,
94-96, 99
data center 1, 31, 46, 51-53, 58, 61, 63, 95
data center availability 61
data center reliability 61
data processing 12, 17, 19, 39, 45, 48, 51, 52, 56, 61, 73, 94-96
datacom 1-7, 10-17, 25-27, 29-31, 33-36, 39, 42, 46, 56, 61, 65, 83, 87, 94, 98-100
daughter card 62
day 1 25-27
DDR memory 62
dehumidification 62
delivery 11, 26
density 1-3, 10, 11, 13, 14, 16, 17, 21, 23, 24, 26, 29, 36, 42, 45, 48, 51-53, 55, 66,
85, 89, 90, 94, 96
design 1, 2, 10-12, 14, 19, 25-27, 29, 39, 43, 45, 46, 49, 51, 52, 67, 72
developer 3, 11
development 6, 19
dew point 30, 41, 43, 45, 66, 71
diaphragm pump 68
DIMM 62
discrepancy 12
disk storage 16, 21
dissipation 2, 12, 17, 69, 85
distribution 4, 7, 10, 11, 25, 26, 30, 31, 33, 35-38, 47, 51, 53, 62, 69, 70, 98, 99
diversity 62, 70
domain 62
down time 63
downflow 62
DRAM 63
dry bulb 71
dry-bulb temperature 62, 66
drycoolers 29
ducts 35

E
edge equipment 61, 63
edge equipment or devices 61
efficiency 6, 11, 53, 63
electronic 13, 29-31, 36, 39, 41-43, 45, 49, 51, 52, 55, 58-60, 63, 65, 66, 68, 69,
83, 87
elements 4-6, 70
energy 1, 4, 6, 51-53, 59, 63, 64, 68, 85
engineers 1, 3, 25, 51
environment 5, 6, 11, 25, 31, 35, 42, 45, 49, 53, 57, 72, 89
equipment 1-7, 10-19, 21, 23-27, 29-31, 33, 34, 36, 38, 39, 41-43, 46-49, 51, 52, 55-
63, 65-70, 83, 87, 90, 94-96, 98, 99
equipment recommended operation range vs. manufacturer’s specifications 63
equipment room 6, 7, 11, 13, 39, 61, 63
ESD 63
ethernet 63
evaporative condenser 63
evaporator 36, 46
exhaust 34, 36, 38, 59, 89
expansion 10, 38, 39, 47, 67
experience 11, 13
extreme density 16, 21, 23, 26
F
face velocity 72
facility 1-7, 10-13, 15, 25-27, 29, 31, 33-36, 45, 46, 55, 56, 58, 69, 83, 85, 87, 95,
98-100
failure 39, 48, 60, 61, 63, 64
fan 59, 63, 64, 87, 88
fan sink 64
fault tolerance 64
firmware 64
flexibility 31, 36, 47
floor 3-7, 10, 11, 26, 31-36, 51-53, 58, 59, 62, 67, 69, 98, 99
fluid dynamics 31, 39, 52, 59
Fluorinert™ 42, 45, 64

flux 45, 64, 65, 85, 86
focus 3, 4, 12-14
footprint 4, 6, 11-14, 23, 64, 65, 96
forecasting 19
future loads 15, 25, 26, 83
G
generators 11, 48
glycol 38, 39, 42, 43, 57
gross 7, 11
growth 3-7, 27, 47
H
hardware 4-7, 19, 25, 59, 64, 66, 69, 72
heat 2, 6, 10-13, 16, 17, 19, 23, 29, 30, 33, 36, 39, 41-43, 45, 46, 48, 51, 52, 57-61,
63-67, 69-71, 85-89, 94
heat exchanger 30, 36, 41-43, 45, 48, 61, 64-66
heat flux 45, 65, 85, 86
heat load per product footprint 65
heat pipe 41, 51, 65, 87
heat pipe heat exchanger 65
heat sink 42, 45, 64, 65, 87, 88
heat transfer 2, 42, 45, 46, 59, 61, 64
high density 16, 23, 29, 36, 45, 48, 52, 53, 89, 94, 96
high level 11, 55, 56
historical 2, 11, 25
holistic 1, 4, 10, 26
hoses 45, 46
hot aisle-cold aisle 65
hot spots 31
hot-aisle 30, 34, 98
HPC 65
HPCC (high performance computing and communications) 65
humidity 57, 61, 66, 71
humidity ratio 66
HVAC 11, 62

I
IEC 66, 67
impact 1, 3, 5-7, 11, 25, 26, 29, 31, 33, 36, 39, 49, 83, 86
implementation 19, 31, 32, 34, 35
inception 12
increase 3, 6, 7, 10, 12, 14, 25, 26, 29, 34, 36, 55, 94
independent 12, 72
industry 1, 2, 6, 11-13, 17, 25, 26, 30, 55, 56, 68, 69, 71, 83, 85, 94-96
information 1-3, 11-13, 15-17, 19, 25, 29, 55, 56, 60, 64, 66, 71, 75, 83, 94, 98
intakes 30
integration 2
IS 1
IT 1-4, 6, 11-13, 25, 55, 63, 67, 69
ITE 66
ITRS 16, 52, 56, 66, 83
K
KVM 66
L
LAN 66
latent heat 64
leakage current 66
liquid cooled system 66
liquid cooling 2, 29, 30, 41-44, 46, 48, 49, 51, 57
liquid-cooled blade 58
liquid-cooled board 58
liquid-cooled chip 58
liquid-cooled equipment 58
liquid-cooled rack or cabinet 58
liquid-cooled server 58
local distribution 36
loop 42-44, 46, 47
M
magnitude 1, 14, 15, 42, 46, 55
mainframe 42, 49
managers 1

manufacturers 1, 12, 13, 15, 17, 19, 36, 42, 55, 83, 96
maximum 10, 17, 19, 21, 45, 62, 69
measured 11, 17, 19, 21, 58, 65, 67, 71
measured power 67
memory 19, 45, 59, 61-64, 67, 69, 70, 72, 87, 88, 94
method 12, 31, 33-35, 43, 58, 61, 98
metrics 3, 11-13, 25, 56, 62
microprocessor 42, 67, 85
midplane 67
minimum 6, 17, 73
motherboard 67, 87, 89
N
N+1 39, 69
nameplate 11, 12, 17, 67
nameplate rating 67
net 11, 85
non-raised floor 67
O
obsolete 14
obstacles 12
ODM 67
OEM 67
operating system 67, 72
operation 1, 19, 30, 39, 46, 48, 55, 63, 64
ordinances 11
organization 3, 6, 12, 66
outline 3
overhead 30, 31, 33, 37, 52, 58, 67, 71, 99
owner 3, 26, 72
P
packaging 6, 12, 17, 41, 45, 52, 56, 85, 88-90, 94
parallel-flow heat exchanger 65
parameters 19, 45, 49
PCB 68
PDU 99

perforated 31, 52, 53, 65, 68, 98
performance 1, 4, 6, 39, 42, 57, 64, 65, 70, 83, 85, 94
piping 42, 46, 48, 62, 99
planners 1
planning 1-5, 10, 11, 15, 19, 26, 49
plant 29, 32, 33, 45, 99
plate heat exchanger or plate liquid cooler 65
plenum 31, 34, 69, 98
point of presence 68
population 12
positive displacement pump 68
power 1-7, 10-27, 33, 42, 48, 51, 53, 55, 57-60, 62, 64, 67-70, 75, 76, 78-81, 85, 88,
94, 96, 99
power supply 17, 88
predicting 15, 25, 83
preliminary 11, 13
premature 15
pressure 31, 33, 46, 48, 57, 61, 64, 66, 68
process 2, 3, 5, 11, 13, 62, 66, 72
processor 61, 68, 69, 87, 88, 94
product 12, 13, 17, 19, 25, 26, 45, 59, 64, 65
product cycles 25
professionals 11
project 12, 13, 26
projection 19, 22-25
propeller fan 64
provisioning 26, 71
pump 45, 68
R
rack 2, 11, 12, 14, 17, 29, 30, 34-38, 41-44, 53, 56-59, 62, 66-69, 75, 80, 89-99
rack power 69
rack-mounted equipment 69, 96
raised floor 11, 31-36, 52, 53, 58, 62, 67, 69, 98, 99
RAM 69, 70
rated current 69
rated frequency 69

rated frequency range 69
rated load 11
rated voltage 69
rated voltage range 69
reciprocating pump 68
redundancy 30, 36, 39, 48, 69, 70
refrigerant 36, 39, 42, 46, 57, 59, 63
refrigeration 39, 48, 57, 61, 73
regulatory 12
relative humidity 57, 61, 66, 71
reliability 25, 39, 47-49, 60, 61, 70
renewal 6
renovation 12, 14
return 33, 36, 52, 57
return air 57
room load capacity 70
room theoretical capacity 70
rotary pump 68
router 70
rows 65, 98
S
safety 12, 35, 36, 73
semiconductor 12, 16, 42, 56, 59, 60, 66, 70, 83, 85
sensible heat 64, 71
sequence 12
server 7, 10, 11, 13, 16, 21, 22, 53, 58-60, 67, 70, 72, 88-96
serviceability 45
space 3-7, 10, 15, 29, 33, 43, 57-59, 64, 66, 72, 89, 98
speed 13, 25, 59, 63, 65
SRAM 70
stakeholders 11, 55
standby 48, 59, 60
static pressure 33
STC 70
storage 4-7, 13, 16, 19-23, 48, 60, 61, 63, 67, 87, 88, 90, 93, 96
stranded cost 15

strategy 6, 39, 45, 48
supply 17, 33, 34, 36, 42, 52, 57, 65, 67, 69, 88, 98
supply air 33, 34, 57, 65, 67, 98
support 5, 6, 11, 13, 25, 29, 39, 47, 61, 62, 98, 99
T
tape storage 16, 19, 20, 23
technical 11, 13, 52, 55, 98
technology 1, 3, 4, 6, 13, 14, 16, 21, 23, 25, 45, 46, 52, 59, 60, 64, 66, 83, 85, 89, 94
telecom 5, 33, 63, 68, 70, 96
telecom industry 69
temperature 39, 45, 53, 57, 58, 61, 62, 64, 66, 70, 71
thermal 12, 15, 17, 19, 39, 42, 48, 51-53, 59, 73, 85, 94, 95
thermal report 12, 17, 95
thermosyphon 71
tonnage 70, 71
total heat 64
trade-offs 1, 45
transactions 6, 51-53
transport 29, 30, 41, 42, 58, 66, 87, 89
trend chart 2, 5, 15-26, 55, 75, 76, 78-81
trends 1-3, 5, 6, 11, 14-16, 19, 21, 29, 51, 56, 75, 83
turnover 5, 6
two-post 17
typical 13, 17, 25, 47, 48, 87-100
U
underfloor 30, 31, 33, 52
upflow 33, 71
upgrades 7, 17, 25, 26
UPS 11, 48, 99
uptime 71
utility 11, 71
V
valve 47, 48, 71
vapor 43, 46, 48, 61, 66, 71
vector quantity 71

velocity 71, 72
ventilation 72
virtual 67, 72
virtual machine 72
virtual private network 72
virtual server 72
volume 13, 64, 66, 68, 94
W
wafer 72
water 7, 11, 30, 36, 38, 39, 41-43, 45-48, 52, 57, 59, 61-63, 66, 68, 71, 99
watts 7, 10-12, 14, 17, 19, 23, 25, 26, 67, 75, 81, 85
wet bulb 71
white-space 5
workload 6, 7
workstations 16, 19, 20, 23, 63, 65
