
Computer Networks 91 (2015) 438–452


Towards energy management in Cloud federation: A survey in the perspective of future sustainable and cost-saving strategies
Maurizio Giacobbe, Antonio Celesti, Maria Fazio, Massimo Villari, Antonio Puliafito
DICIEAMA, University of Messina, Contrada di Dio, S. Agata, Messina 98166, Italy

ARTICLE INFO

Article history:
Received 5 November 2014
Revised 8 June 2015
Accepted 27 August 2015
Available online 8 September 2015
Keywords:
Cloud computing
Cloud federation
Energy efficiency
Energy costs saving
Energy sustainability

ABSTRACT
Nowadays, the increasing interest in Cloud computing is motivated by the possibility to promote a new economy of scale in different contexts. In addition, the emerging concept of Cloud
federation allows providers to optimize the utilization of their resources establishing business partnerships. In this scenario, the massive exploitation of ICT solutions is increasing the
energy consumption of providers, thus many researchers are currently investigating new energy management strategies. Nevertheless, balancing Quality of Service (QoS) with both energy sustainability and cost saving concepts is not trivial at all. The growing interest in this
area has been highlighted by the increasing number of contributions that are appearing in
literature. Currently, most energy management strategies are specifically focused on independent Cloud providers, while others are beginning to look at Cloud federation. In this paper, we
present a survey that helps researchers to identify the future trends of energy management in
Cloud federation. In particular, we select the major contributions dealing with energy sustainability and cost-saving strategies aimed at Cloud computing and federation, and we present
a taxonomy useful to analyze the current state-of-the-art. In the end, we highlight possible
directions for future research efforts.
© 2015 Elsevier B.V. All rights reserved.

1. Introduction
Nowadays, there is an increasing interest regarding energy management in Cloud computing. The growing success
of Cloud computing is mainly due to its better flexibility, reliability, and scalability than traditional information and communication technology (ICT) systems and the capability to
satisfy the worldwide demand of highly specialized and customized services. In the context of ICT processes, the cost of
energy is one of the major factors for determining the cost
of services provided to users. Typically, such a cost is due to
the management of the infrastructure and human resources.

Corresponding author. Tel.: +39 090 397 7344.


Data processing, storage, and transport imply the use of a set of devices that use electricity for various reasons. In fact, the cost of energy is typically due to ICT equipment, electrical equipment, and cooling equipment. In this context, if on one hand we are observing an increasing globalization in supplying ICT services, on the other hand there is an increment in the complexity of ICT systems to fulfil the requests for more customized solutions. The Digital Agenda for Europe (DAE) [1] has recently identified the priorities on digital technologies for years 2013–2014. In particular, priority No. 6 stresses the need to accelerate Cloud computing through public sector buying power. At the same time, the National Energy Research Scientific Computing Center (NERSC) at the U.S. Department of Energy (DOE) has tested the effectiveness of Cloud computing in terms of energy efficiency. Energy management is, in fact, one of the major keywords in the Cloud computing literature [2], including two main aspects that are



Fig. 1. Energy efficiency in the perspective of Cloud federation.

generally uncorrelated: energy cost-saving and energy sustainability. Another interesting aspect is that the energy market (and the electrical energy market in particular) is becoming highly dynamic, both in terms of cost and quality of provisioning. The advent of the free market and the increasingly widespread utilization of primary renewable energy sources, along with the continuous necessity to balance the distribution in power grids, is changing the way energy is provided. In the recent decade, new solutions for energy production have modified the dynamics of the market, leading to new forms of agreements between providers and consumers, in a way that is very different from the past. By now, the management policies of the electricity market have been developed according to the automated demand response (ADR) paradigm [3,4], where energy providers and consumers dynamically fix their production/management policies according to the current market conditions. As a consequence, it is
possible to carry out a dynamic management of ICT services
following the ADR principles.
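The ADR interplay described above, with providers and consumers adjusting their policies to current market conditions, can be sketched as a simple price-threshold scheduler. This is our illustrative sketch: the function name, job list, prices, and threshold are all assumptions, not part of any ADR standard.

```python
# Hypothetical sketch of an automated-demand-response (ADR) decision:
# a Cloud provider defers flexible (non-urgent) workload when the
# energy price signal exceeds a threshold.

def schedule_under_adr(jobs, price_signal, price_threshold=0.20):
    """Split jobs into those run now and those deferred.

    jobs: list of (name, deferrable) tuples.
    price_signal: current energy price, in currency units per kWh.
    """
    run_now, deferred = [], []
    for name, deferrable in jobs:
        if deferrable and price_signal > price_threshold:
            deferred.append(name)   # wait for a cheaper period
        else:
            run_now.append(name)    # run regardless of price
    return run_now, deferred

jobs = [("billing-batch", True), ("web-frontend", False), ("backup", True)]
run_now, deferred = schedule_under_adr(jobs, price_signal=0.35)
# Interactive services keep running; flexible batch work is deferred.
```

In a real ADR deployment the price signal would come from the energy provider, and deferral would interact with QoS constraints rather than a single static threshold.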
Currently, a wide variety of contributions are available in
literature focusing on energy efficiency in Cloud computing. However, many existing surveys have tried to assess the energy management issues by focusing only on data center (DC) needs, see [5–8]. Indeed, most of the analyzed scientific contributions focus on how to reduce the waste of energy in DCs, and they are not specific to the Cloud. Considering Cloud computing, scientific contributions have mainly regarded solutions that are confined to the DC of an independent Cloud provider managed by a specific administrative domain of an organization. In this scenario, independent Clouds can be viewed as isolated islands in the ocean of Cloud computing. At the same time, there is an increasing number of scientific contributions oriented to Cloud federation. Differently from independent Clouds, a Cloud federation considers an ecosystem of different providers that are interconnected in a cooperative decentralized computing environment. Thus, activities and services are driven by specific agreements in a ubiquitous system. The increasing interest of both industrial and academic communities toward Cloud federation is opening new business opportunities, especially for small and medium enterprises. Cloud federation allows providers to optimize their ICT resources and services over a worldwide extended area, taking advantage of the dynamic and elastic management of physical resources, and even more so when taking into consideration dynamic energy management strategies. Fig. 1 shows an example of Cloud federation including four providers interconnected through the Internet, powered by renewable and/or non-renewable energy suppliers. In this case, the scale in the centre of Fig. 1 highlights how providers need to find the right trade-off between energy cost-saving and sustainability.
Differently from existing surveys, the main contribution
of our paper is twofold: (1) providing the first survey focused on energy management in Cloud computing and federation; (2) highlighting the current research trends falling into energy sustainability and energy cost-saving strategies in Cloud federation. For a comprehensive assessment, we collected and analyzed the main scientific contributions focusing on energy management in Cloud computing. Moreover, the survey was also conceived to gather the published scientific contributions that will be useful to Cloud federation architects who need to plan future energy sustainability and energy cost-saving strategies in advance. To this end, by starting from the current state-of-the-art on Cloud


computing and federation both in terms of energy sustainability and energy cost-saving, we present a taxonomy that
allows researchers to better understand these two main aspects. Currently, there are many ways to structure a taxonomy. In our opinion, many existing surveys use an approach that is too dispersive. Many authors use many labels and build several taxonomic trees to analyze the state of the art of a particular research area. As a consequence, the reader has to spend considerable time to understand the taxonomy. In our paper, we present a compact taxonomic tree that allows the reader to quickly analyze the state of the art and classify existing scientific contributions. In particular, our taxonomic tree represents a compact key of interpretation to quickly understand the current state of the art regarding energy management in Cloud computing. In this manner, we hope our survey can be useful to researchers interested in designing new energy management solutions for federated Cloud ecosystems.
The rest of the paper is organized as follows. Section 2 describes our taxonomy. Section 3 and Section 4 respectively
analyze scientific contributions focusing on energy sustainability and energy cost-saving. A critical discussion of the
current state of the art and future research trends is presented in Section 5. Section 6 concludes the paper.
2. Taxonomic analysis
In this section, first of all, we provide a few definitions
regarding the key concepts treated in this paper, and then,
we describe the taxonomic tree used to analyze the current
state of the art.
2.1. Cloud related terminology

In the following, we analyze the concepts of Independent Cloud and Cloud federation.

2.1.1. Independent Cloud

An independent Cloud is a provider, composed of one or more distributed DCs, that is managed by the administrative domain of an organization. An independent Cloud offers services by promoting an economy of scale, exploiting virtualization technology to dynamically scale its infrastructure up/down. Considering a scenario including several independent Clouds, small providers cannot directly compete with mega-providers such as Amazon, Google, and Rackspace. The result is that small/medium Cloud providers often have to exploit the services of mega-providers in order to develop their business. This scenario is commonly indicated with the term Cloud Bursting. Typically, the energy management strategy is enforced by the organization which holds the Cloud provider.

2.1.2. Cloud federation

Cloud federation is an alternative approach to Cloud bursting. It implies a cooperation among small/medium Cloud providers, thus enabling the sharing of virtual resources (e.g., processing, storage, network). In particular, it refers to a mesh of providers that are interconnected based on open standards to provide a universal decentralized computing environment where everything is driven by constraints and agreements in a ubiquitous system [9]. From a logical point of view, Cloud federation refers to a type of system organization characterized by a joining of partially self-governing Clouds, managed by different administrative domains, united by a central government (the federation). In a federation, the self-governing status of each of the involved Clouds is typically independent and may not be altered by a unilateral decision of the central government. Thanks to Cloud federation, virtual resources and services can be moved among providers. The modelling of an energy management strategy for a Cloud federation is quite different from the one adopted in an independent Cloud. For this reason, a Cloud brokering activity is typically required.

2.2. Energy related terminology

In the following, we analyze the concepts of energy efficiency, energy cost-saving, energy sustainability, and renewable energy source.

2.2.1. Energy efficiency

Energy efficiency is the goal to reduce the amount of energy required to provide products and/or services. As described in the Directive 2012/27/EU [10], the European Union (EU) is facing unprecedented challenges resulting from the increased dependence on energy imports, the shortage of energy resources, and the need to limit climate change. Energy efficiency is a valuable means to address these challenges. For the Environmental and Energy Study Institute (EESI) [11], energy efficiency is an approach to use less energy to perform the same task, i.e., eliminating waste of energy. As a consequence, an energy efficiency strategy is an investment to reduce the kilowatt-hours (kWh) consumed in order to meet the increasing energy demand.

2.2.2. Energy cost-saving

Energy cost indicates the amount of money required to produce, transmit, and consume energy. Therefore, energy cost-saving strategies are a set of policies oriented to reduce the above mentioned monetary costs.

2.2.3. Energy sustainability

We refer to energy sustainability as part of the concept of sustainable development, that is the "development that meets the needs of the present, without compromising the ability of the future generations to meet their own needs" [12]. ICT systems consume electricity, and the combustion of fossil fuels to generate electricity is a significant source of carbon dioxide (CO2) emissions. Therefore, it is very important to pay serious attention to the use of renewable energy sources and to solutions and strategies able to maximize efficiency with the minimum waste of energy.

2.2.4. Renewable energy

As described in the Directive 2001/77/EC, the EU specified the priority to promote an increase in the contribution of renewable energy sources to electricity production [13]. Renewable energy is generally defined as energy that comes from resources which are naturally replenished on a human timescale, such as sunlight, wind, rain, tides, waves and geothermal heat. Instead, non-renewable energy is defined as energy that comes from fossil fuels (e.g., coal, oil, natural gas) and uranium. On this basis, we mainly distinguish energy sources into renewables and non-renewables.

Fig. 2. Energy efficiency taxonomic tree.
2.3. Taxonomic tree
In the literature there are many scientific contributions addressing different aspects of energy management in Cloud computing and federation. Thus, a taxonomy is a good scientific approach to better analyze the current state-of-the-art. In order to achieve such a goal, we developed a taxonomic tree highlighting the main involved aspects. The taxonomic tree is depicted in Fig. 2. During the development of our taxonomic tree, we identified several sub-trees, under different parent nodes, that had the same structure. However, we realized that the graphical representation of all these sub-trees was not possible for space constraints, because the taxonomic tree became very large. Therefore, in order to build a compact taxonomic tree, we adopted the notion of "block" to


indicate two or more sub-trees that have the same structure and that are connected to different parent nodes. According to our taxonomic tree, we enable the reader to quickly analyze the state of the art and to easily classify existing scientific contributions. The taxonomic tree is mainly organized into:

- Rectangles, i.e., contextual blocks, each one identified by a letter (label). Each block represents a sub-tree structure. For simplicity, the sub-tree structure of each block is entirely shown only in a solid rectangle. Instead, a dashed rectangle represents a block that has the same sub-tree structure as the solid rectangle identified by the same label.
- Circles, i.e., contexts.
- Arrows, i.e., connections between two or more circles (contexts) to form a taxonomic path.
A path is a route from the root node (energy management) to a particular node or leaf of the taxonomic tree, helpful to analyze how a particular scientific contribution is catalogued according to our taxonomy.
Looking at Fig. 2, the Energy Management node is the root of our taxonomic tree, under which we can find the two nodes representing the main topics analyzed in this paper, that are Energy Sustainability and Energy Cost-Saving. The "Q" blocks group solutions that refer to energy optimization strategies for cost-saving or for energy sustainability. The "P" blocks distinguish the presence of Renewable or Non-Renewable primary energy sources. Within each "P" block of the tree, different types of Cloud systems can make use of the specific energy source. Specifically, we mainly distinguish between Independent Clouds and Federated Clouds. Independent Cloud providers work just considering internal management strategies. Furthermore, in a Cloud federation, providers might cooperate to improve the overall energy efficiency. For example, federated Clouds cooperate to share resources or to organize a vertical supply chain, where providers are clients of other providers. The "O" blocks characterize the types of resources deployed in the Cloud, mainly identified and labelled in the tree as Processing, Storage and Transport. This level is exploded for both independent and federated Clouds. Differences in strategies and trends regarding the taxonomic keywords are explained in the next sections. In the following, we provide a more in-depth description of these types of resources and how they influence energy efficiency strategies.
2.3.1. Processing
Processing resources include all the computational resources available in the Cloud. As shown in the "N" block, they can be Fixed or Mobile. Fixed resources are typically deployed in traditional DCs, whereas mobile resources include mobile systems, such as Cloud-based Internet of Things (IoT) infrastructures and platforms composed of many different pervasive elements (e.g., smart sensors, smartphones, and other mobile devices). At the "G" and "M" blocks, both fixed and mobile resources can be either Physical or Virtual, thus forming the following paths within "N": Fixed-Physical-Machine, Fixed-Physical-Cluster, Fixed-Virtual-Machine, Fixed-Virtual-Cluster for "G", and Mobile-Physical-Device, Mobile-Physical-Server, Mobile-Virtual-Device, Mobile-Virtual-Server for "M".
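The path structure above can be encoded, for instance, as plain tuples. The encoding below is our illustrative sketch, not an artifact of the paper; the labels follow the "N", "G", and "M" blocks described in the text.

```python
# Taxonomic processing paths encoded as tuples from the "N" node down.
G_PATHS = {
    ("Fixed", "Physical", "Machine"),
    ("Fixed", "Physical", "Cluster"),
    ("Fixed", "Virtual", "Machine"),
    ("Fixed", "Virtual", "Cluster"),
}
M_PATHS = {
    ("Mobile", "Physical", "Device"),
    ("Mobile", "Physical", "Server"),
    ("Mobile", "Virtual", "Device"),
    ("Mobile", "Virtual", "Server"),
}

def block_of(path):
    """Return which processing sub-block ('G' or 'M') a path belongs to."""
    if path in G_PATHS:
        return "G"
    if path in M_PATHS:
        return "M"
    return None  # path not in the Processing sub-tree

print(block_of(("Fixed", "Virtual", "Cluster")))  # -> G
```

Such an encoding lets a survey of contributions be classified programmatically: each analyzed paper is tagged with the tuple(s) it addresses.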

About Processing-Fixed Cloud systems, for physical machines and clusters, the taxonomic tree highlights the typical elements consuming energy, that are Information Technology (IT), electrical (ELE), and cooling (COOL) equipment. Regarding IT equipment, energy efficiency can be improved by implementing specific strategies on task scheduling (i.e., reducing the active period of tasks), load balancing (i.e., in order to reduce the total amount of wasted energy), peak sharing (i.e., minimizing the peaks of workloads that occur in cycling loads), ON/OFF triggers (i.e., turning a device ON/OFF to reduce the total amount of energy consumption), no load power (i.e., minimizing the load power during idle cycles), split plane power policies (i.e., delivering separate power supply to processor and north-bridge), clock gating (i.e., activating and deactivating the clock to reduce dynamic power dissipation), and by using energy efficient components (i.e., CPUs, memories, and storage with a high energy-efficiency life cycle).
Regarding ELE equipment, energy management strategies include UPS ON/OFF and ECO-lighting policies. ECO-lighting aims to reduce energy waste by using specific policies in illumination management, such as timed or automated turning on/off of lamps. Uninterruptible power supply (UPS) and battery/flywheel backup are electrical equipment that provide emergency power to a load when the input power source fails.
Regarding COOL equipment, energy management policies mainly include Free cooling (i.e., a technique to extract heat from DCs considering the Temperature (T) and Relative Humidity (RH) of equipment) and the Heating, Ventilating, Air Conditioning, and Refrigerating (HVAC(R)) activities [14].
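The free-cooling decision just mentioned can be sketched as a check of outside conditions against an allowed envelope. This is a minimal sketch under assumed bounds: the temperature and humidity limits below are illustrative placeholders, not the values recommended by ASHRAE or used in any specific DC.

```python
# Minimal free-cooling decision sketch: use outside air when the
# temperature (T) and relative humidity (RH) are within the allowed
# envelope, otherwise fall back to mechanical HVAC(R) cooling.

def cooling_mode(t_outside_c, rh_percent,
                 t_max=18.0, rh_min=20.0, rh_max=80.0):
    """Choose 'free' or 'hvac' cooling from outside conditions."""
    if t_outside_c <= t_max and rh_min <= rh_percent <= rh_max:
        return "free"    # extract heat with outside air, near-zero cost
    return "hvac"        # mechanical cooling required

assert cooling_mode(12.0, 50.0) == "free"   # cool, dry-enough day
assert cooling_mode(30.0, 50.0) == "hvac"   # too warm for free cooling
```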
Virtualization techniques allow systems to abstract physical resources, thus making them more flexible in a virtual form. Generally, a virtual machine (VM) is a virtual environment that emulates a physical environment. Regarding this concept, our taxonomic tree distinguishes two blocks: "E" and "F". The "E" block identifies the issues related to a specific VM, such as IT equipment policies, VM Allocation, and Power Metering strategies. The "F" block extends the "E" block to include the issues related to clusters of VMs, that are VM Migration and Redundant VMs management.
With reference to Processing-Mobile Cloud systems, the taxonomic tree shows very different energy strategies for physical and virtual resources. Physical resources can be mobile devices or servers able to support mobility. The "H" block summarizes the energy efficiency strategies operating on IT, such as Resource management (i.e., optimizing interfaces, ports, and tx/rx physical components), Computation offloading (i.e., delegating the computation to the Cloud off-device), GPS querying reduction (i.e., minimizing the energy consumption due to the on-board GPS), and Task delegation (i.e., delegating tasks to the Cloud infrastructure off-device) [15]. Regarding ELE equipment for Physical devices, power supply optimization mainly involves advances in Battery technologies [16].
The Mobile-Physical-Server node in the tree identifies the policies concerning a Cloud infrastructure in a mobile scenario. Specifically, the "J" block includes the same elements of the "D" block presented above, plus new components,
such as Matching algorithms [17] (i.e., finding the best matching routes to minimize the use of GPS queries via server-side), and Crowd-sourced systems [18] (i.e., a crowd-sourced route database to track the movement of a mobile device via server-side, when mobile devices are equipped with altitude sensors, in order to reduce the energy consumption due to GPS).
The "K" block identifies solutions for Virtual devices. It includes the "H" block for the management of resources, along with VM allocation and VM power metering strategies.

Fig. 3. Storage taxonomic sub-tree.

Fig. 4. Transport taxonomic sub-tree.
2.3.2. Storage
Fig. 3 explodes the "S" block of the taxonomic tree to describe the features and issues of Storage resources.
In particular, referring to each reported item, hard drive (HD) Power Consumption identifies the issue of reducing the power consumption due to the technological components used to implement physical storage supports. It exists (although with evidently different implementations) in both fixed infrastructures and mobile devices (we have avoided duplication in the representation). Per-File Power Consumption depends on the format and size of files, unlike the Per-User Power Consumption that is related to the number of accesses to the storage resources. Storage Capacity characterizes the available disk space (both used and free) at the physical layer. In a Cloud, and even more in a federated Cloud, it is possible to extend the storage concept to the disk capacity in terms of virtual machines (VMs) and disk images. Data Backup Policies [19] identify the backup strategies to make the storage service reliable. Storage Resource Location [20] refers to strategies that can be used by Cloud providers to reduce energy costs and/or CO2 emissions. In order to reduce energy costs, it is possible to place storage resources in geographical areas where the energy cost is cheaper (according to the worldwide energy market). In addition, by locating storage resources closer to the end user, it is also possible to reduce the amount of electrical devices required to operate a storage access, hence reducing CO2 emissions.
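The storage-resource-location trade-off described above can be sketched as picking, among candidate sites, the one minimizing a weighted sum of energy price and access-related carbon footprint. All site names, figures, and weights below are invented for illustration; real placement would also weigh latency, data sovereignty, and replication constraints.

```python
# Hedged sketch of energy-aware storage placement: choose the site
# minimizing price_weight * energy_price + co2_weight * co2_per_access.

def best_site(sites, price_weight=1.0, co2_weight=1.0):
    """sites: list of (name, energy_price, co2_per_access) tuples."""
    return min(sites,
               key=lambda s: price_weight * s[1] + co2_weight * s[2])[0]

sites = [
    ("eu-north", 0.08, 0.10),   # cheap energy, but far from the user
    ("eu-south", 0.15, 0.04),   # pricier energy, close to the user
]
print(best_site(sites))                   # cost-driven choice
print(best_site(sites, co2_weight=10.0))  # sustainability-driven choice
```

Shifting the weights flips the decision, which mirrors the cost-saving versus sustainability trade-off the taxonomy distinguishes.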
2.3.3. Transport
Energy management in Cloud environments is also conditioned by the energy consumption due to the transport of data, i.e., networking, both internal and external to the Cloud DC. Fig. 4 explodes the "T" block of the taxonomic tree. In the "T" block, we distinguish between transport for Public and Private Clouds. As in the main taxonomic tree, in Fig. 4 we distinguish between Fixed and Mobile environments. The latter include additional issues mainly concerning wireless connectivity. Both Public and Fixed environments include Passive Optical Networks (PONs), WDM, and Optical Bypass, aimed to reduce the energy consumption due to data transport over long distances (excluding the end-user terminals). Moreover, the "T1" block includes Energy Efficient Transmission and Switching Equipment, Cloud Infrastructures Location, and Traffic (e.g., downloads per hour solutions).
The "T2" block characterizes Public-Mobile environments, where the most important issues concern Wireless Networks optimization and Mobile Devices location. "T2" also includes the sub-tree of the "T1" block to describe energy management policies also in mobile systems. The "T2" block can also be considered for Private-Mobile environments. Regarding Private-Fixed environments, we include the Corporate networks optimization issue along with the sub-tree of the "T1" block.
In the following, we provide an overview of recent scientific contributions in the literature on energy management in Cloud computing. We organize our analysis distinguishing between solutions for energy sustainability and energy cost-saving. Moreover, for each of them, we specify the taxonomic sub-tree it belongs to.



3. Energy sustainability analysis

Contributions in the literature about energy sustainability can be classified according to the type of Cloud environment they refer to, as reflected in our taxonomic tree, i.e., Independent Clouds and Federated Clouds.

3.1. Energy sustainability in independent Clouds

DCs in Europe are currently estimated to consume 60 TWh per year, and this consumption is expected to reach 104 TWh (104,000 GWh) by 2020 [21]. Consequently, it will be necessary to optimize energy management solutions to minimize the environmental impact.
The authors of [22] proposed a technique to reduce power consumption in a heterogeneous cluster of nodes aimed at serving multiple web applications. Specifically, they introduced an algorithm to periodically monitor the load of CPU, disk storage and network interface, switching nodes on/off to minimize the whole power consumption. However, since the algorithm runs on a master node that adds and removes nodes in the system, the solution is not very scalable.
In [23], an integrated mechanism to identify energy saving opportunities within DCs using knowledge-based agents is presented. In [24], the authors discuss load balancing techniques from the energy consumption and carbon emission perspectives. They assert that considering these two perspectives for improving energy efficiency in the Cloud raises many energy management issues. Therefore, they introduce two different evaluation factors, reducing energy consumption and reducing carbon emission, for balancing the workload across all the nodes of a Cloud. However, the study is limited to green computing-oriented Cloud computing, without extending the above mentioned techniques toward Cloud federation.
3.1.1. Datacenter energy efficiency metrics

ASHRAE TC 9.9 and The Green Grid Association provide recommendations on measuring and publishing values for the power usage effectiveness (PUE) of DCs [25]. Specifically, PUE is the recommended metric to characterize the DC infrastructure efficiency. Based on the above mentioned recommendations, we define PUE by the following formula:

PUE = E_TOT / E_IT

where E_TOT is the total DC energy consumption or power, and is defined as:

E_TOT = E_IT + E_EE + E_CE

where E_IT is the IT Equipment energy consumption or power, E_EE is the Electrical Equipment energy consumption or power, and E_CE is the Cooling Equipment energy consumption or power.
The DC infrastructure Efficiency (DCiE) is the inverse of PUE, but it is considered a superseded metric in industry adoption since the original publications (2007):

DCiE = (1 / PUE) × 100% = (E_IT / E_TOT) × 100%
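The two metrics can be transcribed directly into code. The energy figures in the example (in kWh) are invented for illustration.

```python
# Direct transcription of the PUE and DCiE formulas discussed above.

def pue(e_it, e_ee, e_ce):
    """Power Usage Effectiveness: total DC energy over IT energy."""
    e_tot = e_it + e_ee + e_ce
    return e_tot / e_it

def dcie(e_it, e_ee, e_ce):
    """DC infrastructure Efficiency: inverse of PUE, as a percentage."""
    return 100.0 / pue(e_it, e_ee, e_ce)

# Example: 500 kWh for IT, 100 kWh electrical, 150 kWh cooling.
print(pue(500, 100, 150))   # -> 1.5
```

A PUE of 1.0 would mean every kWh entering the DC reaches the IT equipment; real facilities sit above that because of electrical and cooling overhead.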
A wide variety of scientific contributions that propose strategies and solutions aimed at energy efficiency in Cloud infrastructures invoke the PUE as an evaluation index. However, they refer to the heterogeneity of the DC and not to the mechanisms involving federated environments. In this regard, Kansal et al. [26] invoke PUE, focus on Infrastructure as a Service (IaaS) Clouds, and quantify the difference in energy consumption caused by VM schedulers. Beloglazov et al. [27] consider computer housing and cooling and also invoke the PUE. However, PUE is recommended as a measurement of the goodness of a single DC, and this is a crucial limit for our purposes. Suppose we have two DCs with the same equipment but located in different geographic areas with different temperatures: PUE is affected by the adopted cooling techniques and policies. Thus, the efficacy of the PUE evaluation falls in a Cloud environment.
In addition to PUE and DCiE, Jain et al. [28] present several other evaluation indexes useful to measure the power consumption of a processor or DC. Furthermore, the authors propose different techniques to minimize the power consumption in a Cloud environment: reducing CPU power dissipation by free cooling, using advanced clock gating, split plane power, energy-efficient processors, and by using renewable energy sources.
3.1.2. Energy sustainability in independent Clouds

Some international projects and advanced studies in the literature specifically aim to manage energy in independent Cloud infrastructures to reduce CO2 emissions.
CoolEmAll (INFSO-ICT-288701) [29] is a European Union co-funded project within the Seventh Framework Programme. The main goal of CoolEmAll is to provide advanced simulation, visualization and decision support tools along with blueprints of computing building blocks for modular DCs. It considers three aspects that have a major impact on actual energy consumption: the cooling model, application properties, and workload and resource management policies. The project objectives require an interdisciplinary strategy in cooling and heat transfer in DCs, dealing with simulation and visualization techniques, HPC workload and resource management, application monitoring, benchmarks and metrics, hardware design and monitoring, virtualization workload management, dissemination, exploitation, and market analysis in green ICT. The above mentioned tools and blueprints should help to minimize the energy consumption, and consequently the CO2 emissions, of the whole IT infrastructure with related facilities. To this end, the designers consider it worthwhile:

- To design different types of computing building blocks (ComputeBox Blueprints) that will be well defined by energy efficiency metrics.
- To develop a simulation, visualization and decision support toolkit (SVD Toolkit) that will enable the analysis and optimization of ICT infrastructures built by using the above blocks.

Moreover, the CoolEmAll platform will enable studies of the dynamic states of ICT infrastructures based on changing workloads, management policies, cooling methods, and ambient temperature. Therefore, the CoolEmAll project refers to policies applied inside IT infrastructures.
All4Green [30] is another project co-funded by the European Union within the Seventh Framework Programme. It aims to improve the operational efficiency of an ICT services ecosystem that comprises DCs, energy producers and ICT users, proposing the usage of green SLAs to enable new energy saving policies. This project mainly aims to match the energy demand patterns of DCs and the energy production/supply patterns of the energy producers/providers. This goal will result in peak shaving, reduction of inefficiencies in energy production, and exploitation of renewable energy sources without endangering the stability of power grids. The All4Green project claims to optimize the management of excessive energy provision in peak times, reducing unnecessary CO2 emissions and loss of energy. Moreover, it aims to integrate renewable energy sources within an ecosystem composed of DCs, energy providers and ICT users. The trial activities of All4Green are:

- Peak load detection and metering on the different layers of DC operation.
- Communication settling between the DC and the energy provider.
- Feasibility check of the created monitoring and management processes.
- Evaluating the ICT energy reduction efforts within the DC ecosystem in a cooperating federation of DC sites.
- Cloud-specific green SLA experimentation.
In this regard, the authors of [31] present a generic architecture to enable the Demand/Response mechanism between energy providers and DCs realized in All4Green. This mechanism is used in power grids to manage the power consumption of customers during critical situations.
The approach presented in [32] instead provides support to design efficient strategies to modify the process deployment, in order to continuously guarantee good performance and energy efficiency. A very interesting paper in terms of investigated techniques and experimental analysis is presented in [33]. It is focused on the reduction of IT equipment power consumption and discusses some truths commonly assumed regarding the energy usage of servers, the links between resources, workloads and consumed energy, the impact of ON/OFF models, and some assumptions on the relationship between energy consumption and virtualization techniques. It analyzes the energy consumption of distributed resources. The assessment is not easy because the power consumption of each node is influenced by several factors.
Srikantaiah et al. in [34] discuss the energy-aware consolidation problem in Cloud computing. They studied the relationship between energy consumption and resource utilization, focusing on two main aspects: performance degradation and power variation. In their experiments, the authors consider a scenario with virtualized heterogeneous systems. They study the problem of request scheduling for multi-tiered web applications to minimize energy consumption under strict performance requirements.
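The consolidation idea studied in [34] can be illustrated with a simple bin-packing sketch (our own, not the authors' algorithm): each request contributes CPU and disk utilization, and requests are packed onto as few servers as possible while keeping each server near an assumed optimal utilization point, so that idle servers can stay powered off.

```python
# Hedged sketch of energy-aware consolidation as utilization bin-packing.
# The target "sweet spot" (0.7 CPU, 0.5 disk) is an assumed value, not a
# figure from [34]; requests are (cpu, disk) utilization fractions.

TARGET = (0.7, 0.5)

def consolidate(requests, target=TARGET):
    """Greedy first-fit placement; returns the list of active servers."""
    servers = []
    for cpu, disk in requests:
        for s in servers:
            if s["cpu"] + cpu <= target[0] and s["disk"] + disk <= target[1]:
                s["cpu"] += cpu
                s["disk"] += disk
                break
        else:  # no active server can host the request: power a new one on
            servers.append({"cpu": cpu, "disk": disk})
    return servers

requests = [(0.2, 0.1), (0.3, 0.2), (0.4, 0.1), (0.1, 0.3), (0.2, 0.2)]
active = consolidate(requests)  # the five requests fit on two servers
```

The point of the heuristic is that the remaining servers can be switched off, trading some headroom (and hence performance margin) for energy.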
A Tabu Search algorithm for the location of Cloud DCs and software components in Green Cloud Computing (GCC) networks, simultaneously optimizing routing and reducing CO2 emissions, is presented in [35]. A scheduling solution named e-STAB is introduced in [36]. It considers the traffic requirements of Cloud applications in order to provide energy-efficient job allocation and traffic load balancing in DC networks.
The authors of [37] present a new energy consumption model, and a related analysis tool for Cloud computing environments, measuring energy consumption in Cloud environments based on different runtime tasks. They plan to integrate their research results into Cloud systems to monitor energy consumption and support static or dynamic system-level optimization. In [6], the authors describe interesting metrics to make Cloud computing greener. Specifically, they discuss different power and energy models, identifying the major challenges to build a green Cloud. To evaluate the goodness of solutions and techniques to decrease CO2 emissions (e.g., utilization of green energy sources, reduction of the number of physical and virtual machines, usage of greener machines), a set of metrics to measure the greenness of a Cloud infrastructure is presented in [38]. Yamagiwa et al. [39] propose a high-performance environment as a Green Cloud platform using solar power and low-consumption electricity computers to overcome the problem of continuous energy requirements (which is undesirable in terms of environmental concerns because of the increased CO2 emissions). Moreover, the authors consider the solar panels and sealed lead-acid battery necessary to support mobile devices.
3.1.3. Virtual resource management in DCs
With specific regard to virtualization technology, many efforts try to increase the flexibility of different types of Virtual Machine Managers (VMMs). The software that controls the virtualization is usually called hypervisor, and the software program that emulates a specific hardware system is commonly called Virtual Machine (VM). The introduced software layer emulates the operating system and allocates the necessary hardware resources. In [40] the authors study the VM allocation problem, that is, how to allocate VMs in a DC in order to fulfil VM demands and to reduce the total amount of energy consumption. Applying efficient VM placement algorithms allows Cloud providers to reduce their carbon footprint [41]. In [42], the authors consider the efficient use of virtualized resources by applications, and not only of the underlying infrastructure, for energy-efficient and CO2-aware Cloud computing. Cardosa et al. [43] propose a solution for the efficient allocation of VMs in virtualized heterogeneous computing environments. The authors leveraged the min, max and shared parameters of the VMM, which represent the minimum, maximum and proportion of the CPU allocated to VMs sharing the same resource. In [44], the authors discuss power provisioning and power tracking applications, presenting a solution, named Joulemeter, for VM power metering. Even if their study is limited to a single Cloud, they show many interesting aspects and discuss useful techniques for federated Clouds.
The authors of [45] propose energy-efficient resource allocation policies and scheduling algorithms based on QoS expectations and the power usage features of the involved devices. Moreover, they propose an architectural framework and basic principles for the energy-efficient management of Clouds. In particular, this framework manages VMs as modules that can be dynamically started and stopped on a single physical machine, according to the incoming requests. The proposed policies improve the flexibility in configuring resources and running applications on different operating systems on the same physical machine, satisfying the different requirements of service requests. Another important aspect for energy saving in terms of resource management is the dynamic



migration of VMs across different physical machines. In this regard, the authors of [46] consider that unused resources can be switched to a low-power mode, turned off or configured to operate at a low-performance level. The authors validated their approach through a performance evaluation assessment using the CloudSim toolkit [47].
Live virtual machine migration is attracting great interest. It allows moving a running VM from one physical machine to another without service discontinuity. Soni et al. [48] discuss several techniques for live VM migration to reduce both downtime and total migration time. However, although the authors cite energy management as an additional benefit of these techniques and introduce a power management scenario to decrease energy usage in DCs, they do not deal with federated Cloud scenarios and live VM migration between DCs belonging to independent Clouds.
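The two quantities these techniques optimize can be illustrated with a rough geometric pre-copy model (our own sketch, not taken from [48]; parameter values are assumptions): each round re-sends the memory dirtied during the previous round, and downtime is the final stop-and-copy round with the VM paused.

```python
# Hedged sketch of a pre-copy live-migration estimate. With dirty rate d
# (MB/s) and link bandwidth b (MB/s), round i transfers roughly
# mem * (d/b)**i MB, so the series converges only when d < b.

def precopy_estimate(mem_mb, dirty_mbps, bw_mbps, rounds=5):
    """Return (total_migration_s, downtime_s) under the geometric model."""
    ratio = dirty_mbps / bw_mbps              # must be < 1 for convergence
    total = sum(mem_mb * ratio**i for i in range(rounds)) / bw_mbps
    downtime = mem_mb * ratio**rounds / bw_mbps  # last round, VM paused
    return total + downtime, downtime

# 4 GB VM, 100 MB/s dirty rate, 1 GB/s link: a few seconds of total
# migration time, sub-millisecond downtime.
total_s, downtime_s = precopy_estimate(4096, 100, 1000)
```

The model makes the trade-off visible: lowering downtime (more rounds) lengthens total migration time, and both grow quickly as the dirty rate approaches the available bandwidth.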

3.1.4. Mobile Cloud
The widespread availability of many types of Cloud services inevitably leads to an increase of end-user access through mobile devices. In the literature, there are many contributions that aim to reduce the energy consumption of mobile devices; they are usually based on the monitoring of the device's physical resources (CPU, storage, memory), power source and operation of user-side interfaces. However, most of these contributions look at a single physical device (hardware and/or software resources) and do not consider the whole Cloud computing system.
Mobile Cloud is an emerging paradigm, where energy efficiency in the management of both physical and virtual resources is crucial. In particular, the authors of [49] explore how to minimize the energy consumption of the back-light when displaying a video stream without adversely impacting the user's visual experience. They present a cloud-based energy-saving service and show a possible way to realize it. They claim that the results of their experiments show energy savings of 15-49% on commercial mobile devices.
Energy awareness is one of the aspects discussed in [8]. Specifically, the authors present a survey on energy profiling and energy usage estimation for mobile Cloud computing. However, they discuss solutions focused on the monitoring of the physical mobile system, but they do not cite solutions for the optimization of energy consumption.
Many mobile applications include the possibility to detect the position and track the movement of mobile users (e.g., through the generally named Location Based Services). To this aim, the usage of GPS, and the related GPS querying, is not optimal for the battery life of mobile devices. Chatterjee et al. [50] address this problem, proposing some alternative solutions to overcome this issue. They describe a prototype system consisting of a crowd-sourced route database deployed in the Cloud, a GPS-enabled mobile device, and a mobile device with altitude sensors.
In a mobile Cloud environment, where wireless links and user mobility make up a complex scenario, reducing the power consumption during data transmission is a very relevant issue. Indeed, the authors of [51] observe that transmitting data under bad connectivity conditions could consume more energy than in good connectivity situations. They propose a novel

[Table 1. Energy sustainability in independent Clouds. References grouped by taxonomic path: [24], [25], [26], [27], [33], [35], [36], [41] / [27], [34] / [42], [43], [44], [45], [46], [47], [48] / [28], [37], [38], [39], [49] / [6], [40], [50] / [9], [51], [52], [53], [54] / [5], [55], [56], [57]; i.e. 66.04% of the selected contributions.]

energy-efficient data transmission strategy, named eTime, to connect a Cloud infrastructure with mobile devices.
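The intuition exploited by such strategies can be sketched as follows (our own illustration, not the eTime algorithm itself): delay-tolerant traffic is sent only when the link is good, because transmitting at low bandwidth keeps the radio on longer and thus costs more energy per bit. The thresholds and energy model below are assumptions.

```python
# Hedged sketch of a "transmit when the link is good" policy for
# delay-tolerant mobile traffic. Radio power and the good-link threshold
# are illustrative values, not figures from [51].

POWER_W = 1.0  # assumed radio power while transmitting

def energy_joules(bits, bandwidth_bps):
    """Energy = time on air * radio power."""
    return POWER_W * bits / bandwidth_bps

def should_send_now(bits, bandwidth_bps, deadline_s, elapsed_s,
                    good_bw_bps=5e6):
    """Send if the link is good, or if waiting longer would miss the deadline."""
    must_flush = elapsed_s + bits / bandwidth_bps >= deadline_s
    return bandwidth_bps >= good_bw_bps or must_flush

# Sending 10 Mb at 10 Mb/s costs 1 J; the same data at 1 Mb/s costs 10 J,
# which is why deferring under bad connectivity can pay off.
```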
A particular case of mobile Cloud computing involves Vehicular Networks and is generically named Vehicular Cloud Computing (VCC). Whaiduzzaman et al. [52] present a state-of-the-art survey of VCC. They refer to the generally under-utilized vehicular resources, and to energy management in relation to the management of energy consumption on board the vehicle.
3.1.5. Networking
In the literature, many contributions deal with this problem, but they envisage solutions limited to the network equipment inside DCs. As discussed in [53], there are several ways to build a cluster-scale network whose power consumption is more proportional to the amount of transmitted network traffic. In [54], very interesting alternative solutions to the traditional electrical networks are presented, examining energy consumption in optical IP networks. Fiorani et al. [55] address network energy efficiency at the architectural and service levels. Specifically, they propose a unified network architecture that provides both intra-DC and inter-DC connectivity together with interconnection toward legacy IP networks. In [5] Bianzino et al. identify four branches of research in green networks. The challenging topics originate from the causes of in-network energy waste: adaptive link rate, interface proxying, energy-aware infrastructure and energy-aware applications.
To better understand the contribution of the state of the art on energy sustainability in independent Clouds, in Table 1 we summarize the relationship between each contribution in the literature and the related block of our taxonomic tree (shown in Fig. 2) it belongs to. Specifically, the column on the left side of Table 1 identifies the Taxonomic paths, that is, the sets of blocks and sub-blocks of the taxonomic tree which are covered by the References in the right column of Table 1. Column Num quantifies the number of scientific contributions dealing with the specific topic.
3.2. Energy sustainability in federated Clouds
The current literature presents many solutions and policies in scenarios with independent Clouds, but just early ideas for Cloud federation. Interoperability among Cloud service providers in a federation implies the joint management of both physical and virtual resources. The Eco2Clouds Project [56,57] is partially funded by the European Union under the Seventh Framework Programme and its goal is to improve


energy efficiency in the development and deployment of applications in federated environments. The Eco2Clouds Consortium includes several European partners, whose research groups have produced several significant technical and scientific contributions. A description of new metrics and a monitoring infrastructure based on the Zabbix [58] monitoring framework for eco-efficient Cloud federations is shown in [59]. In [60], the authors compare the EU CoolEmAll and Eco2Clouds projects. They describe the metrics used in these projects to assess the energy efficiency of DCs and Cloud resources, and the energy costs of application/workload execution for various DC granularity levels and federation sites.
In [61] the authors present a multi-objective genetic algorithm, named MO-GA, to optimize the energy consumption, reduce CO2 emissions and generate profit in a geographically distributed Cloud computing infrastructure. They propose a Pareto resource allocation approach for Clouds focused on energy, greenhouse gas emissions and profit, and use MO-GA to find the best scheduling according to the above mentioned goals. Their work differs from other studies because it deals with both computing and energy consumption in the proposed energy model, and their approach exploits the geographical distribution of a Cloud federation.
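The selection step at the core of such multi-objective approaches can be sketched as follows (a hedged illustration of Pareto-dominance filtering, not the MO-GA implementation from [61], whose genetic operators are not shown). Each candidate allocation is scored on objectives that are all to be minimized, here (energy, CO2, -profit); a candidate survives only if no other candidate dominates it.

```python
# Hedged sketch: Pareto-front selection over candidate allocations scored
# as (energy_kwh, kg_co2, -profit), all minimized. Values are illustrative.

def dominates(a, b):
    """a dominates b if it is no worse on every objective and better on one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

allocs = [(10.0, 5.0, -100.0),   # cheap and clean
          (12.0, 4.0, -100.0),   # cleaner but costlier
          (11.0, 6.0, -90.0)]    # dominated by the first candidate
front = pareto_front(allocs)     # keeps the first two allocations
```

A scheduler can then pick one point on the front according to a policy weight, rather than collapsing the objectives into a single score up front.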
To complete this section, and looking at possible scenarios where dynamic resource management could allow a reduction of emissions and greater energy savings, we quote a model developed by the Lawrence Berkeley National Laboratory and Northwestern University, named CLEER (Cloud Energy and Emission Research) Model [62]. It aims to provide a comprehensive and user-friendly model able to assess the net energy and GreenHouse Gas (GHG) emissions of Cloud systems. GHG emissions derive from both the actual use of electricity and the associated server manufacturing, compared to the existing physical and digital systems that Cloud systems can replace. Specifically, the CLEER Model taxonomy consists of sub-models of utilization (e.g., DC operational energy utilization and business-client IT device operational energy utilization). For each sub-model there are further sub-models, able to represent many different solutions involving infrastructures, devices, network segments, transportation, and applications (both business and residential). The CLEER Model calculates the cumulative network energy utilization by adding up the contribution of each segment through mathematical equations applied to network sub-models based on Baliga et al.'s definitions [63]. Baliga et al. estimate the per-bit energy consumption of transmission and switching for a public Cloud to be around 2.7 µJ/b, and for a private Cloud to be around 0.46 µJ/b, claiming that transport represents a more significant energy cost in public Cloud services than in private Cloud services. At the same time, they claim that power consumption in transport represents a significant proportion of the total power consumption of Cloud storage services at medium and high usage rates. The peculiarity which we found interesting in the CLEER Model is mainly the ability to make choices in a user-friendly and custom scenario for the use of Cloud systems, for example by selecting the country where to locate a Cloud according to one's own needs. In this way, one can obtain information on the carbon intensity of the electricity source (kgCO2e/kWh) and on the Primary Energy (MJ/kWh) for different geographical areas, and thus for different parameters (e.g., the average transmission energy (J/bit)). The CLEER Model is also proposed as a model for use and development by the research community.

[Table 2. Energy sustainability in federated Clouds. References: [11], [59], [61], [62]; i.e. 13.21% of the selected contributions.]
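To make the per-bit figures concrete, the following sketch (ours, not part of the CLEER Model) converts the Baliga et al. transport estimates, taken as 2.7 and 0.46 microjoules per bit for public and private Clouds respectively, into energy per data volume; the traffic volume is an assumption for the example.

```python
# Hedged sketch: transport energy per data volume, using the per-bit
# figures attributed above to Baliga et al. (assumed to be in uJ/bit).

UJ_PER_BIT = {"public": 2.7, "private": 0.46}

def transport_energy_wh(gigabytes, cloud_type):
    bits = gigabytes * 8e9                           # decimal GB -> bits
    joules = bits * UJ_PER_BIT[cloud_type] * 1e-6    # uJ -> J
    return joules / 3600.0                           # J -> Wh

# Moving 1 GB costs ~6 Wh of transport energy in a public Cloud versus
# ~1 Wh in a private one, consistent with the claim that transport weighs
# more heavily in public Cloud services.
```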
Similar to Table 1, in Table 2 we summarize the relationship between each contribution in the field of energy sustainability in federated Clouds and the related block of our taxonomic tree (shown in Fig. 2) it belongs to.
4. Energy cost-saving analysis
In this section, we describe contributions focusing on energy cost-saving, distinguishing between independent Clouds and federated Clouds.
4.1. Energy cost-saving in independent Clouds
Datacenters all over the world are estimated to consume 2% of the world electricity production, and the expected energy consumption will increase by a factor of 4 by 2020 [64].
A recent study by Google [65] highlights that its additional Cloud-based energy consumption (including both server farm and networking) has an incidence of 1 to 5 kWh per employee per year on the total required energy. This corresponds to 36-263 kWh per employee per year for a large ICT company without Cloud and 5-84 kWh per employee per year for a large ICT company using Cloud, i.e., a saving of 68-87% of the total required energy. It is therefore very important to locate sites where the incidence of the energy cost per kWh is most affordable.
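The savings range quoted above can be checked directly from the per-employee figures (our own arithmetic on the reported endpoints):

```python
# Quick check of the savings range: 36-263 kWh/employee/year without Cloud
# versus 5-84 kWh/employee/year with Cloud implies roughly 68-86% savings
# at the two range endpoints.

def saving(without_cloud_kwh, with_cloud_kwh):
    return 1.0 - with_cloud_kwh / without_cloud_kwh

worst = saving(263, 84)   # ~0.68: high-usage company
best = saving(36, 5)      # ~0.86: low-usage company
```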
In [66] the authors investigate the following variables: energy price, peak power charge and the energy consumed by the cooling system, in order to understand how to reduce electricity costs for HPC Cloud providers with multiple geographically distributed DCs. They simulate a single front-end, located on the East Coast of the United States, that distributes requests to three equal-size DCs located in three different areas: Northern California, Georgia and Switzerland. They also introduce some baseline policies for comparison (e.g., SCA, Static Cost-Aware Ordering). Bila et al. [67] describe partial VM migration for desktops, motivating its usefulness as an energy-saving technique. In particular, they show this technique can reduce annual energy costs in overnight hours for companies with 100 or more desktops by at least 55%. They refer to a third-party Infrastructure as a Service (IaaS) Cloud, an intranet back-end, or even a federation of other PCs in the same domain, but not to a Cloud federation.
In [68] Goiri et al. present Parasol (a prototype green DC including a set of solar panels) and GreenSwitch (an approach to dynamically schedule the workload and to select the energy source to use) in order to demonstrate that intelligent workload and energy source management can reduce energy costs.



[Tables 3 and 4. Energy cost-saving in independent Clouds (11.32% of the selected contributions) and in federated Clouds (9.43%); the reference entries include [7], [71], [75].]

A brief summary of the relationship between each contribution in the field of cost-saving in independent Clouds and the related block of our taxonomic tree is reported in Table 3.
4.2. Energy cost-saving in federated Clouds
The problem of energy management in a federated Cloud environment is addressed in [69]. The authors show a modelling approach based on Stochastic Reward Nets (SRNs) to investigate the most convenient strategies to manage a Cloud federation. They consider a scenario in which N federated IaaS Clouds cooperate with each other in order to reduce energy costs and to satisfy the users' requests. Specifically, the authors describe interesting indexes that, properly combined, allow providers to reduce the energy costs. Hongxing Li et al. [70] introduce an efficient algorithm for the scheduling of resources in a federation of geo-distributed Clouds. In particular, the authors optimize the scheduling delay for each job in order to maximize the profit for each involved Cloud. However, they do not elaborate on how to minimize the energy costs. In [71] the authors present very interesting policies to address energy cost-saving strategies in federated Clouds. In their policies they consider three cost factors: energy price, peak power charge, and the energy consumed by the cooling system. They consider that the electricity cost has two components: the cost of the energy consumed (energy price), and the cost for the peak power drawn at any particular time (peak power price). They assume these costs are expressed in dollars per kWh and that the peak power cost is defined as the maximum power drawn within some accounting period. Moreover, they assume there are two peak power charges, one for the on-peak hours and one for the off-peak hours. In addition, they assert that the DC would be charged for the 15 min with the highest average power drawn across all on-peak hours in the month, and for the 15 min with the highest power drawn across all off-peak hours in a period of one month. In the end, they conclude that cost-aware load placement policies can significantly push down providers' operational costs. In this regard, the authors of [72] introduce an experimental platform and show a wide variety of experimental evaluations, assuming a fixed energy price and peak tariff. Buyya et al. [73] assert that next-generation Cloud service providers should be able to dynamically expand or resize their provisioning capability, but not only that. In particular, they discuss the factors that pose difficult problems in the effective provisioning and delivery of application services, thus highlighting the criticality of finding efficient solutions to the challenges of exploiting the potential of federated Cloud infrastructures. They refer to Flexible Mapping of Services to Resources as a critical issue to maximize efficiency, cost effectiveness, and utilization [74] in environments with increased operating costs and energy requirements. Furthermore, the authors report that, to reduce the power consumption cost, new on-line algorithms for energy-aware placement and live migration of VMs between Clouds would need to be developed. Live migration refers to the process of moving a running VM between different physical machines without disconnecting the client, preserving its status (memory, storage, and network connectivity). More specifically, the authors introduced a component to allocate VMs in Cloud nodes according to both users' QoS and energy management policies. Buyya et al. also identify one of the limitations of the adopted approach in inflexible configurations within a Cloud and propose new mechanisms to overcome them, for example migrating to a Cloud that is located in the region with the lowest energy cost. In conclusion, they suggest the application of market-based utility to reduce energy costs.

[Fig. 5. Percentage quotas for the four principal examined solutions.]

The relationship between each contribution in the field of cost-saving in federated Clouds and the related block of our taxonomic tree is reported in Table 4.
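The two-part electricity tariff discussed in [71] can be sketched as follows (our own illustration; prices and the sample load profile are assumptions, not values from [71]): an energy charge on the kWh consumed plus demand charges on the highest 15-minute average power, billed separately for on-peak and off-peak hours.

```python
# Hedged sketch of a two-part electricity cost: energy charge plus
# separate on-peak and off-peak demand (peak power) charges.
# All prices and loads below are illustrative assumptions.

def electricity_cost(loads_kw, on_peak, price_kwh=0.12,
                     peak_price_on=10.0, peak_price_off=4.0):
    """loads_kw: average power per 15-min slot; on_peak: bool per slot."""
    energy_kwh = sum(p * 0.25 for p in loads_kw)  # 15 min = 0.25 h
    peak_on = max((p for p, on in zip(loads_kw, on_peak) if on), default=0.0)
    peak_off = max((p for p, on in zip(loads_kw, on_peak) if not on), default=0.0)
    return (energy_kwh * price_kwh
            + peak_on * peak_price_on
            + peak_off * peak_price_off)

loads = [100.0, 180.0, 120.0, 90.0]       # kW per 15-min slot
on_peak = [False, True, True, False]
cost = electricity_cost(loads, on_peak)   # demand charges dominate here
```

The structure explains why cost-aware load placement helps: shaving the single worst 15-minute on-peak slot can save more than reducing total consumption.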
5. Current trends and future research directions
From the above analysis of the state of the art, the great attention of the scientific community to energy management is evident. However, the efforts in this research area do not cover all the related issues well. Fig. 5 shows a pie chart with the percentage of works for the four main areas of interest discussed in this paper, that is, energy sustainability in independent Clouds, energy cost-saving in independent Clouds, energy sustainability in federated Clouds and energy cost-saving in federated Clouds. Most of the contributions are focused on energy strategies inside DCs and, hence, there are many solutions suitable for independent Clouds. On the contrary, some issues with regard to federated Clouds are still unexplored. Indeed, Cloud



[Table 5. References for the covered taxonomic paths of the examined solutions, organized per taxonomic path for independent and federated Clouds, in terms of energy sustainability and cost saving.]

[Fig. 6. Percentage of available solutions for each taxonomic path.]

federation is an evolution of the traditional Cloud systems, characterized by a more complex model for service offerings, flexibility, and scalability. A Cloud federation is a new ecosystem that needs new policies for the management of resources and, hence, also for improving energy efficiency.
Fig. 5 also shows that there are few activities on energy cost-saving in both the independent and federated Cloud scenarios. Energy cost-saving means reducing the cost per kWh. The estimated energy cost has two components, fixed and variable costs, which must both be appropriately determined by analyzing costs for infrastructures, platforms and services; moreover, a reduction in energy costs does not always correspond to a reduction of the environmental impact (and vice versa).
In Table 5, we summarize the number of research contributions presented in the paper according to the taxonomic sub-trees discussed in the previous sections. This allows us to better identify the main topics covered by the scientific community. Following the tree organization (see Section 2), the same taxonomic path (i.e., a set of blocks in the tree) is analyzed considering both independent and federated Clouds, in terms of energy sustainability and cost-saving.
The pie chart in Fig. 6 shows the percentage of available solutions for each taxonomic path. From the results presented in Table 5 and Fig. 6, it is evident that most efforts are focused on processing resources (block N) for fixed infrastructures (block G). However, this research area should

be further explored to improve energy efficiency in IT equipment for virtual machines (path A-E-G-N) and in electrical equipment for physical machines (path B-D-G-N). Important projects, such as Eco2Clouds in Europe, aim to research and propose solutions optimized for the reduction of CO2 emissions, but the influence of federation techniques and policies on the reduction of the environmental impact still remains to be explored (path F-G-N). Besides the energy consumption of a DC, the energy consumed for the transport of information across the network is important for energy saving, and cannot be neglected in a complete investigation. Indeed, switches and networking equipment lead to an increase in energy consumption compared with the baseline consumption. The usage of Cloud-based services produces an additional network energy consumption, because it increments the traffic on the Internet, and this is more significant in public Clouds than in private ones, as already discussed in this work. There are few contributions that focus on reducing the energy consumption due to the transport of information (path T). Moreover, they are often limited to network devices within the DC or to networking for non-federated Cloud environments.
Cost saving is totally unexplored in the management of virtual fixed processing resources (paths E-G-N and E-F-G-N), whereas many contributions in this field are provided for energy sustainability in independent Clouds. Cost saving is also unexplored for mobile physical servers (path H-I-M-N). In the scenario of federated Clouds, there are no contributions in the literature (to the best of our knowledge) to improve energy sustainability or energy cost-saving when virtual fixed processing resources (paths D-G-N, E-G-N and E-F-G-N) and mobile resources for storage (path H-I-M-S) are considered.
The proposed taxonomic tree is very useful to identify unexplored strategies for energy saving in Clouds. Indeed, the taxonomic paths not present in Table 5 identify possible research activities that, to the best of our knowledge, have not been investigated yet. Moreover, our taxonomy gives us the opportunity to think up innovative strategies specifically designed for Clouds, especially in federated environments. For example, the site with the lowest energy costs is not always the most favorable to reduce the environmental footprint (e.g., CO2-equivalent emissions). In order to reduce pollution, more



efficient technologies and equipment should be developed with the same or even better performance. In this sense, technology comes to our aid thanks to the continuous evolution of components and devices that are more and more efficient and consume less and less energy.
The evolution of the energy supply technologies and the level of utilization of distribution networks, in general as the supply/demand ratio, are mostly related to the specific characteristics of places. They vary according to the particular geographical, political and environmental features. Similarly, it is realistic to think that, in order to provide IT services, the amount of information generated and transmitted, the level of use of the networks and the supply/demand ratio are related to the local productivity, the current legislation and the specific anthropological characteristics. For example, the availability of renewable primary sources, particularly solar and wind power, is also linked to the seasonal or even weather conditions of the site where the production plant is installed. As just introduced, if this has a decisive role in the management policies of DCs, it becomes even more important and complex with regard to Cloud federation. The above considerations about the cost-effectiveness of using IT resources located at a particular site, compared to others simultaneously available, become even more significant. Finding the places and times that minimize the electricity consumed in a Cloud federation could become the determining factor for the reduction of tariffs for IT services.
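The kind of trade-off suggested here can be sketched as a simple site-selection rule (our own illustration; the site data, prices and weights are assumptions): a placement decision that weighs the local energy price against the carbon intensity of the local electricity source (kgCO2e/kWh).

```python
# Hedged sketch: pick the federation site minimizing a weighted mix of
# energy price and carbon-intensity cost per kWh consumed. All values
# below are illustrative assumptions.

SITES = {
    "site_a": {"price_kwh": 0.20, "co2_kwh": 0.10},   # clean but expensive
    "site_b": {"price_kwh": 0.10, "co2_kwh": 0.60},   # cheap but dirty
    "site_c": {"price_kwh": 0.14, "co2_kwh": 0.20},   # balanced
}

def best_site(sites, alpha=0.5, co2_cost=0.50):
    """Minimize alpha*price + (1-alpha)*co2*co2_cost; alpha=1 means price only."""
    def score(s):
        return alpha * s["price_kwh"] + (1 - alpha) * s["co2_kwh"] * co2_cost
    return min(sites, key=lambda name: score(sites[name]))

# With equal weighting the balanced site wins; with alpha=1.0 (pure
# price) the cheapest site wins, illustrating that the lowest-cost site
# is not always the lowest-impact one.
```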
6. Conclusion
Currently, there is a growing interest in energy management in Cloud computing. In particular, how to reduce carbon dioxide emissions and/or energy costs while providing services with a good QoS is a big challenge. In a federated Cloud environment, resources and services can be moved from one Cloud provider to another in order to enforce energy cost-saving and/or energy sustainability strategies. The analysis of scientific contributions reported in this assessment provides several hints to researchers regarding how it can be possible to develop energy management strategies in a federated Cloud ecosystem. The main contribution of our paper has been twofold: (1) providing the first survey specifically focused on energy management in Cloud computing and federation; (2) highlighting the current research directions as well as possible future research trends regarding energy sustainability and energy cost-saving in Cloud federation. We specifically selected and analyzed the main scientific contributions focusing on energy management. Moreover, the survey was conceived considering also the published scientific contributions that will be useful to Cloud federation architects to plan in advance future energy sustainability and energy cost-saving strategies.
From our study, we realized that energy management strategies that aim to find the right compromise between energy cost-saving and energy sustainability do not exist yet. Instead, in our opinion, future research directions should be oriented to define energy strategies providing high-performance IT services with the lowest costs and the highest level of sustainability. In particular, we envision future sustainable ecosystems where different federated Clouds work together to achieve a common global benefit. We hope our

taxonomy can provide the scientific community with a clear overview of the state of the art in the field of energy management in Cloud computing. In particular, we wish that our study can give researchers interesting hints in the fields of energy cost-saving and energy sustainability in Cloud federation. In addition, we hope to have succeeded in stimulating the debate among computer science researchers and the rise of new models and algorithms aimed at minimizing the energy costs and carbon dioxide emissions in a federated Cloud ecosystem. Such activities could improve the health of our planet, thereby protecting the environment and the health of the world's population.

Acknowledgments
The research was partially supported by the PON 2007-2013 SIGMA project and by the POR FESR Sicilia 2007-2013 SIMONE project.

References
[1] The Digital Agenda for Europe (DAE).
[2] L. Heilig, S. Voss, A scientometric analysis of cloud computing literature, IEEE Trans. Cloud Comput. 2 (3) (2014) 266-278.
[3] OpenADR Alliance, The OpenADR Primer.
[4] S. Kiliccote, M.A. Piette, E. Koch, D. Hennage, Utilizing automated demand response in commercial buildings as non-spinning reserve product for ancillary services markets, in: CDC-ECE, 2011, pp. 4354-4360.
[5] A.P. Bianzino, C. Chaudet, D. Rossi, J.-L. Rougier, A survey of green networking research, IEEE Commun. Surv. Tutor. 14 (1) (2012) 3-20.
[6] B. Priya, E.S. Pilli, R.C. Joshi, A survey on energy and power consumption models for greener cloud, in: IACC, 2013, pp. 76-82.
[7] T. Mastelic, A. Oleksiak, H. Claussen, I. Brandic, J.-M. Pierson, A.V. Vasilakos, Cloud computing: survey on energy efficiency, ACM Comput. Surv. 47 (2) (2014) 33:1-33:36.
[8] N. Fernando, S.W. Loke, W. Rahayu, Mobile cloud computing: a survey, Future Gener. Comput. Syst. 29 (1) (2013) 84-106.
[9] M. Giacobbe, A. Celesti, M. Fazio, M. Villari, A. Puliafito, An approach to reduce carbon dioxide emissions through virtual machine migrations in a sustainable cloud federation, in: Sustainable Internet and ICT for Sustainability (SustainIT), 2015, pp. 1-4.
[10] Directive 2012/27/EU of the European Parliament and of the Council of 25 October 2012 on energy efficiency. Official Journal L 315/1.
[11] Environmental and Energy Study Institute.
[12] United Nations General Assembly (1987) Report of the World Commission on Environment and Development: Our Common Future.
[13] Directive 2001/77/EC of the European Parliament and of the Council of 27 September 2001 on the promotion of electricity produced from renewable energy sources in the internal electricity market. Official Journal L 283, 27/10/2001, pp. 0033-0040.
[14] N. Lu, An evaluation of the HVAC load potential for providing load balancing service, IEEE Trans. Smart Grid 3 (3) (2012) 1263-1270.
[15] H. Flores, S. Srirama, R. Buyya, Computational offloading or data binding? Bridging the cloud infrastructure to the proximity of the mobile user, in: 2nd IEEE International Conference on Mobile Cloud Computing, Services, and Engineering (MobileCloud), 2014, pp. 10-18.
[16] X. Liu, C. Yuan, Z. Yang, Z. Hu, An energy saving algorithm based on user-provided resources in mobile cloud computing, in: IEEE 78th Vehicular Technology Conference (VTC Fall), 2013, pp. 1-5.
[17] H. Li, L. Zhang, R. Jiang, Study of manufacturing cloud service matching
[17] H. Li, L. Zhang, R. Jiang, Study of manufacturing cloud service matching
algorithm based on OWL-S, in: Control and Decision Conference (2014
CCDC), The 26th Chinese, 2014, pp. 41554160.
[18] I. Stojmenovic, Keynote 1: mobile cloud and crowd computing and
sensing, in: Parallel and Distributed Systems (ICPADS), 2012 IEEE 18th
International Conference on, 2012, p. xxix.
[19] S. Vishnupriya, P. Saranya, A. Rajasri, Secure multicloud storage with
policy based access control and cooperative provable data possession,
in: International Conference on Information Communication and Embedded Systems (ICICES), 2014, pp. 16.
[20] C. Saravanakumar, C. Arun, Location awareness of the cloud storage with trust management using common deployment model, in: 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), 2013, pp. 1–5.
[21] EU Joint Research Centre statement, November 2012. http://ec.europa.
[22] E. Pinheiro, R. Bianchini, E.V. Carrera, T. Heath, Load balancing and unbalancing for power and performance in cluster-based systems, 2001.
[23] A.M. Ferreira, B. Pernici, Using intelligent agents to discover energy saving opportunities within data centers, in: RE4SuSy@RE, 2013.
[24] N.J. Kansal, I. Chana, Cloud load balancing techniques: a step towards green computing, IJCSI Int. J. Comput. Sci. Issues 9 (2012) 238–246.
[25] American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), PUE: A Comprehensive Examination of the Metric, ASHRAE Datacom Series, 2014.
[26] T. Knauth, C. Fetzer, Energy-aware scheduling for infrastructure clouds, in: CloudCom, 2012, pp. 58–65.
[27] S.S. Rawat, N. Sharma, A new way to save energy and cost in cloud computing, Int. J. Emerg. Technol. Adv. Eng. 2 (2012) 83–89.
[28] A. Jain, M. Mishra, S. Peddoju, N. Jain, Energy efficient computing: green cloud computing, in: Energy Efficient Technologies for Sustainability (ICEETS), 2013 International Conference on, 2013, pp. 978–982.
[29] The CoolEmAll Project.
[30] The All4Green Project.
[31] R. Basmadjian, G. Lovasz, M. Beck, H. de Meer, A generic architecture for demand response: the All4Green approach, in: CGC, 2013, pp. 464–471.
[32] C. Cappiello, P. Plebani, M. Vitali, Energy-aware process design optimization, in: CGC, 2013, pp. 451–458.
[33] A.-C. Orgerie, L. Lefèvre, J.-P. Gelas, Demystifying energy consumption in grids and clouds, in: Green Computing Conference, 2010, pp. 335.
[34] S. Srikantaiah, A. Kansal, F. Zhao, Energy aware consolidation for cloud computing, in: Proceedings of the 2008 Conference on Power Aware Computing and Systems, HotPower'08, USENIX Association, Berkeley, CA, USA, 2008, p. 10.
[35] F. Larumbe, B. Sanso, A tabu search algorithm for the location of data centers and software components in green cloud computing networks, IEEE Trans. Cloud Comput. 1 (2013) 22–35.
[36] D. Kliazovich, S. Arzo, F. Granelli, P. Bouvry, S. Khan, e-STAB: energy-efficient scheduling for cloud computing applications with traffic load balancing, in: IEEE International Conference on Green Computing and Communications (GreenCom), IEEE Internet of Things and IEEE Cyber, Physical and Social Computing (iThings/CPSCom), 2013, pp. 7–13.
[37] F. Chen, J.-G. Schneider, Y. Yang, J. Grundy, Q. He, An energy consumption model and analysis tool for cloud computing environments, in: GREENS, 2012, pp. 45–50.
[38] C. Cappiello, S. Datre, M. Fugini, M. Gienger, P. Melia, B. Pernici, Monitoring and assessing energy consumption and CO2 emissions in cloud-based systems, in: SMC, 2013, pp. 127–132.
[39] M. Yamagiwa, M. Uehara, A proposal for development of cloud platform using solar power generation, in: CISIS, 2012, pp. 263–268.
[40] R. Xie, X. Jia, K. Yang, B. Zhang, Energy saving virtual machine allocation in cloud computing, 2013, pp. 132–137.
[41] A. Khosravi, S.K. Garg, R. Buyya, Energy and carbon-efficient placement of virtual machines in distributed cloud data centers, in: Euro-Par, 2013, pp. 317–328.
[42] U. Wajid, B. Pernici, G. Francis, Energy efficient and CO2 aware cloud computing: requirements and case study, in: SMC, 2013, pp. 121–126.
[43] M. Cardosa, M. Korupolu, A. Singh, Shares and utilities based power consolidation in virtualized server environments, in: Proceedings of the 2009 IFIP/IEEE International Symposium on Integrated Network Management, IM'09, 2009, pp. 327–334.
[44] A. Kansal, F. Zhao, J. Liu, N. Kothari, A.A. Bhattacharya, Virtual machine power metering and provisioning, in: SoCC, 2010, pp. 39–50.
[45] A. Beloglazov, J. Abawajy, R. Buyya, Energy-aware resource allocation heuristics for efficient management of data centers for cloud computing, Future Gener. Comput. Syst., Special Section: Energy Efficiency in Large-Scale Distributed Systems, 28 (2012) 755–768.
[46] M. Emami, Y. Ghiasi, N. Jaberi, Energy-aware scheduling using dynamic voltage-frequency scaling, Distributed, Parallel, and Cluster Computing, Cornell University Library, CoRR abs/1206.1984 (2012).
[47] R.N. Calheiros, R. Ranjan, A. Beloglazov, C.A.F.D. Rose, R. Buyya, CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms, Softw. Pract. Exper. 41 (1) (2011) 23–50.
[48] G. Soni, M. Kalra, Comparative study of live virtual machine migration techniques in cloud, Int. J. Comput. Appl. 84 (14) (2013) 19–25.
[49] C.-H. Lin, P.-C. Hsiu, C.-K. Hsieh, Dynamic backlight scaling optimization: a cloud-based energy-saving service for mobile streaming applications, IEEE Trans. Comput. 63 (2) (2014) 335–348.
[50] S. Chatterjee, J.K. Nurminen, M. Siekkinen, Design of energy-efficient location-based cloud services using cheap sensors, Int. J. Pervas. Comput. Commun. 9 (2) (2013) 115–138.
[51] P. Shu, F. Liu, H. Jin, M. Chen, F. Wen, Y. Qu, B. Li, eTime: energy-efficient transmission between cloud and mobile devices, in: INFOCOM, 2013, pp. 195–199.
[52] M. Whaiduzzaman, M. Sookhak, A. Gani, R. Buyya, A survey on vehicular cloud computing, J. Netw. Comput. Appl. 40 (2014) 325–344.
[53] D. Abts, M.R. Marty, P.M. Wells, P. Klausler, H. Liu, Energy proportional datacenter networks, in: ISCA, 2010, pp. 338–347.
[54] J. Baliga, R. Ayre, K. Hinton, W. Sorin, R.S. Tucker, Energy consumption in optical IP networks, J. Lightwave Technol. 27 (13) (2009) 2391–2403.
[55] M. Fiorani, S. Aleksic, P. Monti, J. Chen, M. Casoni, L. Wosinska, Energy efficiency of an integrated intra-data-center and core network with edge caching, IEEE/OSA J. Opt. Commun. Netw. 6 (4) (2014) 421–432.
[56] The Eco2Clouds Project.
[57] B. Pernici, U. Wajid, Assessment of the environmental impact of applications in federated clouds, in: SMARTGREENS 2014 - Proceedings of the 3rd International Conference on Smart Grids and Green IT Systems, Barcelona, Spain, 3-4 April 2014, pp. 256–261.
[58] Zabbix.
[59] M. Gienger, A. Tenschert, Cloud federation monitoring for an improved eco-efficiency, in: eChallenges e-2013 Conference, Dublin, Ireland, 2013.
[60] E. Volk, A. Tenschert, M. Gienger, A. Oleksiak, L. Sisó, J. Salom, Improving energy efficiency in data centers and federated cloud environments, in: CGC, 2013, pp. 443–450.
[61] Y. Kessaci, N. Melab, E.-G. Talbi, A Pareto-based metaheuristic for scheduling HPC applications on a geographically distributed cloud federation, Cluster Comput. 16 (3) (2013) 451–468.
[62] CLEER Model, Cloud Energy and Emission Research Model.
[63] J. Baliga, R. Ayre, K. Hinton, R.S. Tucker, Green cloud computing: balancing energy in processing, storage, and transport, Proc. IEEE 99 (1) (2011) 149–167.
[64] J.M. Kaplan, W. Forest, N. Kindler, Revolutionizing data center energy efficiency, Technical Report, McKinsey and Company, 2008.
[65] Google Apps: Energy Efficiency in the Cloud, 2012.
[66] K. Le, R. Bianchini, J. Zhang, Y. Jaluria, J. Meng, T.D. Nguyen, Reducing electricity cost through virtual machine placement in high performance computing clouds, in: SC, 2011, p. 22.
[67] N. Bila, E. de Lara, M. Hiltunen, K. Joshi, H.A. Lagar-Cavilla, M. Satyanarayanan, The case for energy-oriented partial desktop migration, in: Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing, HotCloud'10, USENIX Association, Berkeley, CA, USA, 2010, p. 3.
[68] Í. Goiri, W. Katsak, K. Le, T.D. Nguyen, R. Bianchini, Parasol and GreenSwitch: managing datacenters powered by renewable energy, SIGPLAN Not. 48 (4) (2013) 51–64.
[69] D. Bruneo, F. Longo, A. Puliafito, Evaluating energy consumption in a cloud infrastructure, in: WOWMOM, 2011, pp. 1–6.
[70] H. Li, C. Wu, Z. Li, F.C.M. Lau, Profit-maximizing virtual machine trading in a federation of selfish clouds, in: INFOCOM, 2013, pp. 25–29.
[71] K. Le, R. Bianchini, J. Zhang, Y. Jaluria, J. Meng, T.D. Nguyen, Reducing electricity cost through virtual machine placement in high performance computing clouds, SC Conf. (2011) 1–12.
[72] S. Govindan, A. Sivasubramaniam, B. Urgaonkar, Benefits and limitations of tapping into stored energy for datacenters, SIGARCH Comput. Archit. News 39 (3) (2011) 341–352.
[73] R. Buyya, R. Ranjan, R.N. Calheiros, InterCloud: utility-oriented federation of cloud computing environments for scaling of application services, in: ICA3PP (1), 2010, pp. 13–31.
[74] A. Quiroz, H. Kim, M. Parashar, N. Gnanasambandam, N. Sharma, Towards autonomic workload provisioning for enterprise grids and clouds, in: GRID, 2009, pp. 50–57.

Maurizio Giacobbe received the master degree in Computer Science at the University of Messina (Italy). His current position is Assistant Researcher at the University of Messina. Since 2013 he has been a member of the Multimedia and Distributed Systems Laboratory (MDSLab). His research activity has been focused on energy management systems, distributed systems and cloud computing. His main research interests include Cloud computing, federation, distributed services, and wireless sensor networks.

Antonio Celesti received the master degree in Computer Science at the University of Messina (Italy). Since 2008 he has been a member of the Multimedia and Distributed Systems Laboratory (MDSLab). In 2010, he won the best paper award at the Second International Conference on Advances in Future Internet, Venice, Italy, and in 2011, the best paper award at the Third International Conference on Evolving Internet, held in Luxembourg. In 2012, he received the Ph.D. in Advanced Technology for Information Engineering at the University of Messina (Italy). Since 2012 he has been an Assistant Researcher at the Faculty of Engineering of the University of Messina (Italy). His scientific activity has been focused on studying distributed systems and cloud computing. His main research interests include cloud federation, services, information retrieval and security.
Maria Fazio received the degree in Electronic Engineering in 2002 and the Ph.D. in 2006 at the University of Messina (Italy). Her scientific activities have been focused on distributed systems and mobile networks, especially with regard to wireless multi-hop networks. She was an exchange visitor at the Department of Computer Science of the University of California, Los Angeles in 2005, where she was involved in research on vehicular networks. She received a post-doc fellowship in 2006 and was Assistant Researcher from 2009 to 2010 at the University of Messina (Italy). Her current research activities include Cloud computing, with particular attention to the integration of different communication technologies, federation and services provisioning.
Massimo Villari received his Ph.D. in Computer Science from the School of Engineering in 2003 and the Laurea degree (bachelor's + master's) in Electronic Engineering in 1999, both from the University of Messina, Italy. Since 2006 he has been an Aggregate Professor at the University of Messina. He is actively working as IT Security and Distributed Systems Analyst in cloud computing, virtualization and storage for the European Union project VISION-CLOUD and the previous EU initiative RESERVOIR. Previously, he was an academic advisor of STMicroelectronics, held an internship at Cisco Systems in Sophia Antipolis, and worked on the MPEG4IP and IPv6-NEMO projects. He has investigated issues related to user mobility and security in wireless, ad hoc and sensor networks. He is an IEEE member. He is currently strongly involved in EU Future Internet initiatives, specifically Cloud computing and security in distributed systems. His main research interests include virtualization, migration, security, federation, and autonomic systems. He is the Cloud Architect at UniMe for the development of a cloud middleware named CLEVER. At the UniMe Faculty of Engineering, he teaches Java Object Oriented Programming and Database courses.

Antonio Puliafito is a full professor of computer engineering at the University of Messina, Italy. His interests include parallel and distributed systems, networking, wireless, Grid and Cloud computing. During 1994-1995 he spent 12 months as visiting professor at the Department of Electrical Engineering of Duke University, North Carolina, USA, where he was involved in research on advanced analytical modelling techniques. He is the coordinator of the Ph.D. course in Advanced Technologies for Information Engineering currently available at the University of Messina and is responsible for the course of study in computer engineering. He was a referee for the European Community for projects of the fourth, fifth and sixth Framework Programmes and is currently acting as a referee also in the seventh FP. Puliafito is co-author (with R. Sahner and Kishor S. Trivedi) of the text entitled Performance and Reliability Analysis of Computer Systems: An Example-Based Approach Using the SHARPE Software Package. He is currently the director of the RFIDLab, a joint research lab with Oracle and Intel on RFID and wireless. From 2006 to 2008 he acted as the technical director of the Project 901, aiming at creating a wireless/wired communication infrastructure (winner of the CISCO innovation award). He is currently a member of the general assembly and of the technical committee of the FP7 Vision project.