INTRODUCTION
Modern data centres, operating under the Cloud computing model, host a
variety of applications, ranging from those that run for a few seconds to those
that run for long periods of time, on shared hardware platforms. The need to
manage multiple applications in a data centre creates the challenge of
on-demand resource provisioning and allocation in response to time-varying
workloads. Traditionally, data centre resources are statically allocated to
applications, based on peak-load characteristics, in order to maintain isolation
and provide performance guarantees.
Until recently, high performance has been the sole concern in data centre
deployments, and this demand has been fulfilled without paying much attention
to energy consumption. Data centres are not only expensive to maintain, but
also unfriendly to the environment: they are now responsible for more carbon
emissions than both Argentina and the Netherlands. High energy costs and large
carbon footprints are incurred because of the massive amounts of electricity
needed to power and cool the numerous servers hosted in these data centres.
DATA CENTER
Data centers have their roots in the huge computer rooms of the 1940s, typified
by ENIAC, one of the earliest examples of a data center. Early computer
systems, complex to operate and maintain, required a special environment in
which to operate. Many cables were necessary to connect all the components,
and methods to accommodate and organize these were devised such as
standard racks to mount equipment, raised floors, and cable trays (installed
overhead or under the elevated floor). A single mainframe required a great deal
of power, and had to be cooled to avoid overheating. Security became
important – computers were expensive, and were often used
for military purposes. Basic design-guidelines for controlling access to the
computer room were therefore devised.
During the boom of the microcomputer industry, and especially during the
1980s, users started to deploy computers everywhere, in many cases with little
or no care about operating requirements. However, as information
technology (IT) operations started to grow in complexity, organizations grew
aware of the need to control IT resources. The advent of Unix from the early
1970s led to the subsequent proliferation of freely available Linux-
compatible PC operating-systems during the 1990s. These were called
"servers", as timesharing operating systems like Unix rely heavily on the client-
server model to facilitate sharing unique resources between multiple users. The
availability of inexpensive networking equipment, coupled with new standards
for network structured cabling, made it possible to use a hierarchical design that
put the servers in a specific room inside the company. The use of the term "data
center", as applied to specially designed computer rooms, started to gain
popular recognition about this time.
REQUIREMENTS FOR MODERN DATA CENTERS
The "lights-out" data center, also known as a darkened or a dark data center, is a
data center that, ideally, has all but eliminated the need for direct access by
personnel, except under extraordinary circumstances. Because of the lack of
need for staff to enter the data center, it can be operated without lighting. All of
the devices are accessed and managed by remote systems, with automation
programs used to perform unattended operations. In addition to the energy
savings, reduction in staffing costs and the ability to locate the site further from
population centers, implementing a lights-out data center reduces the threat of
malicious attacks upon the infrastructure.
ARCHITECTURE OF GREEN CLOUD COMPUTING
Energy use is a central issue for data centers. Power draw for data centers
ranges from a few kW for a rack of servers in a closet to several tens of MW for
large facilities. Some facilities have power densities more than 100 times that of
a typical office building. For higher power density facilities, electricity costs are
a dominant operating expense and account for over 10% of the total cost of
ownership (TCO) of a data center. By 2012, the cost of power for a data center
was expected to exceed the cost of the original capital investment.
Siting is one of the factors that affect the energy consumption and
environmental impact of a data center. In areas where the climate favors
cooling and plenty of renewable electricity is available, the environmental
effects will be more moderate. Thus, countries with favorable conditions, such
as Canada, Finland, Sweden, Norway, and Switzerland, are trying to attract
cloud computing data centers.
According to an 18-month investigation by scholars at Rice University's Baker
Institute for Public Policy in Houston and the Institute for Sustainable and
Applied Infodynamics in Singapore, data center-related emissions are projected
to more than triple by 2020.
ENERGY EFFICIENCY
The most commonly used metric to determine the energy efficiency of a data
center is power usage effectiveness, or PUE. This simple ratio is the total power
entering the data center divided by the power used by the IT equipment.
Total facility power consists of power used by IT equipment plus any overhead
power consumed by anything that is not considered a computing or data
communication device (e.g., cooling and lighting). An ideal PUE is 1.0 for the
hypothetical situation of zero overhead power. The average data center in the
US has a PUE of 2.0, meaning that the facility uses two watts of total power
(overhead + IT equipment) for every watt delivered to IT equipment. State-of-
the-art data center energy efficiency is estimated to be roughly 1.2. Some large
data center operators like Microsoft and Yahoo! have published projections of
PUE for facilities in development; Google publishes quarterly actual efficiency
performance from data centers in operation.
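As a simple illustration of the PUE calculation described above, the short
Python sketch below computes PUE from metered facility and IT loads. The
figures are hypothetical values chosen only to reproduce the "average US
facility" ratio of 2.0; they are not measurements from any particular data
center.

    # Hypothetical meter readings, in kilowatts (illustrative values only).
    it_equipment_kw = 500.0        # servers, storage, network gear
    cooling_kw = 350.0             # chillers, CRAC units
    lighting_and_other_kw = 150.0  # lighting, UPS losses, other overhead

    total_facility_kw = it_equipment_kw + cooling_kw + lighting_and_other_kw

    # PUE = total facility power / IT equipment power
    pue = total_facility_kw / it_equipment_kw
    print(f"PUE = {pue:.2f}")  # 2.00: two watts drawn for every watt of IT load

Lowering either the cooling load or the other overhead load (the two
non-IT terms in the numerator) moves the ratio toward the ideal value of 1.0.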
The U.S. Environmental Protection Agency has an Energy Star rating for
standalone or large data centers. To qualify for the ecolabel, a data center must
be within the top quartile of energy efficiency of all reported facilities.
The European Union has a similar initiative: the EU Code of Conduct for Data
Centres.
SECURITY
Physical security also plays a large role with data centers. Physical access to the
site is usually restricted to selected personnel, with controls including a layered
security system often starting with fencing, bollards and mantraps. Video
camera surveillance and permanent security guards are almost always present if
the data center is large or contains sensitive information on any of the systems
within. The use of finger print recognition mantraps is starting to be
commonplace.
APPLICATIONS
The main purpose of a data center is to run the IT applications that handle the
core business and operational data of the organization. Such systems may be
proprietary and developed in-house by the organization, or bought from
enterprise software vendors. Common examples of such applications
are ERP and CRM systems.
Data centers are also used for off-site backups. Companies may subscribe to
backup services provided by a data center. This is often used in conjunction
with backup tapes. Backups can be taken from servers locally onto tapes.
However, tapes stored on site pose a security threat and are also susceptible to
fire and flooding. Larger companies may also send their backups off site for
added security. This can be done by backing up to a data center. Encrypted
backups can be sent over the Internet to another data center where they can be
stored securely.
CARRIER NEUTRALITY
Today, many data centers are run by Internet service providers solely for the
purpose of hosting their own and third-party servers.
Traditionally, however, data centers were either built for the sole use of one
large company, or operated as carrier hotels or network-neutral data centers.
These facilities enable interconnection of carriers and act as regional fiber hubs
serving local business in addition to hosting content servers.
CONCLUSION AND FUTURE WORK
As the prevalence of Cloud computing continues to rise, the need for power
saving mechanisms within the Cloud also increases. This paper presents a Green
Cloud framework for improving system efficiency in a data center. To
demonstrate the potential of the framework, we presented a new energy-efficient
scheduling approach. In this paper, we have found new ways to save vast
amounts of energy while minimally impacting performance. Not only do the
components discussed in this paper complement each other, they leave space for
future work. Future opportunities could explore a scheduling system that is both
power-aware and thermal-aware to maximise energy savings both from physical
servers and the cooling systems used. Such a scheduler would also drive the
need for better data center designs, both in server placements within racks and
closed-loop cooling systems integrated into each rack. While a number of
Cloud techniques are discussed in this paper, there is a growing need for
improvements in Cloud infrastructure, both in the academic and commercial
sectors.
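As a pointer toward the power- and thermal-aware scheduling direction outlined
above, the Python sketch below shows one possible greedy placement heuristic:
consolidate load onto already-active hosts (so idle hosts can be powered down)
while skipping hosts whose inlet temperature is too high. The host attributes,
threshold, and function names are illustrative assumptions for discussion, not
part of the framework presented in this paper.

    # Illustrative greedy heuristic: power-aware (pack load, don't spread it)
    # plus a thermal constraint. All names and thresholds are hypothetical.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Host:
        name: str
        utilisation: float   # current CPU utilisation, 0.0..1.0
        inlet_temp_c: float  # current inlet air temperature

    THERMAL_LIMIT_C = 27.0   # assumed safe inlet temperature

    def place(job_load: float, hosts: List[Host]) -> Optional[Host]:
        """Place a job on the busiest host that still fits it and stays cool enough."""
        candidates = [h for h in hosts
                      if h.utilisation + job_load <= 1.0
                      and h.inlet_temp_c <= THERMAL_LIMIT_C]
        if not candidates:
            return None  # would require waking an idle host or deferring the job
        best = max(candidates, key=lambda h: h.utilisation)  # consolidate load
        best.utilisation += job_load
        return best

A production scheduler would of course also model per-host power curves,
cooling feedback, and migration costs; the sketch only illustrates how the two
objectives discussed above can be combined in a single placement decision.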