

CHAPTER 4

THE RESOURCE PROVISIONING FRAMEWORK

Cloud computing is an enabling paradigm that meets the on-demand resource provisioning needs of applications with diversified and dynamic workloads. It must support server consolidation, application consolidation and the pay-as-you-go utility model for efficient and optimized resource management (Zhang et al. 2016a). Cloud providers focus their work on improving resource utilization for high throughput and profit. To satisfy user requests, the provider must strive to minimize the resources needed to accommodate large volumes of requests without violating the quality of service guaranteed to every user. Efficient resource utilization and quality of service together necessitate an adaptive resource provisioning mechanism.

Applications deployed in the cloud are diversified in the nature of their tasks, such as dependent or independent tasks, fault-tolerant tasks or energy-efficient tasks. Applications vary not only in their tasks but also in their performance objectives. Most resource provisioning and task scheduling algorithms have static optimization goals, and thus lack flexibility and adaptability when the workload changes. Although resource provisioning and task scheduling are designed with different performance objectives, they share common functional components deployed in a similar framework for implementation. In this chapter we propose the Speculative Resource Provisioning (SRP) framework, which mechanizes resource provisioning by predicting effective resources for the client/developer based on past utilization. We use this speculation-induced model for adaptive resource provisioning and to improve instance setup time, which will probably attract a large number of cloud users.

Speculation is used to predict the exact resources for an application, and its accuracy provides solutions for under- or over-utilization of resources. In particular, we use patterns from past resource accesses to speculate the exact resources needed to execute the application: prior knowledge of the characteristics of the host, from past resource provisions, prevents over- or under-utilization. Thus, speculation ensures Quality of Service (QoS), provider reliability and the consumer's Service Level Agreement (SLA), and enhances user trust by making users feel they pay only for what they use.

4.1 SYSTEM ARCHITECTURE

Figure 4.1 shows the high-level architecture supporting the speculative resource provisioning mechanism in cloud computing. There are three main entities:

1. SLA Management
2. Speculative Resource Provisioning model
3. Resource Allocation System
[Figure 4.1 depicts the user interface on top of the SLA management layer (SLA violation estimation, accounting information, metering and pricing), the speculative resource provisioning model (speculative predictor, dispatcher/scheduler, load balancer, monitoring, confidence estimation, history table and pattern table) and the resource allocation system (service allocation policies and resource allocation policies) over the virtual and cloud resource instances.]

Figure 4.1 Speculative resource provisioning architecture

4.2 SLA MANAGEMENT

The SLA Management acts as an interface between the user and the cloud computing components. It requires interaction among the following mechanisms that support SLA-oriented resource allocation.

4.2.1 SLA Violation Estimation

The user service request is first interpreted by the Violation Estimation module, which interprets the QoS requirements to determine whether to accept or reject the request. The QoS requirement is formulated in the form of an SLA, expressed in terms of maximum response time and minimum throughput; this varies from application to application. The module evaluates the SLA negotiation by reducing resource overloading, whereby many service requests are not accepted owing to resource unavailability. Therefore, it is necessary to know the current status of resources and workload processing.

The monitoring modules in our model provide the current status of the resources, such as the Resource Density and Load Factor of the servers. These factors are taken as input to estimate whether resources are available and meet the Service Level Objectives (SLOs), which are numerical values such as minimum/maximum CPU, memory and bandwidth requirements.

4.2.2 Accounting Information

The accounting mechanism tracks resource usage per request to estimate the final cost charged to the user (Sekar & Maniatis 2011). In addition to providing historical usage information, it can also be used by the SLA Violation Estimation module when adjudging SLOs, to improve efficient resource provisioning.

4.2.3 Metering and Pricing

The metering service, otherwise termed pay-per-use, is a type of payment structure that allows the customer to access unlimited resources while being charged only for what they use. Metering is based on transactions and leverages the cloud computing cost model. The pricing mechanism is used to manage service demand and helps maximize the provider's profit. Requests are charged based on resource availability and workload properties at submission time, and hence the provider offers the same service request under different pricing models and QoS levels.

VM resource allocation is the process of mapping virtual resources onto physical host machines, as in Figure 4.1. After receiving requests from the user, the Resource Allocation System (RAS) in the data center controller first applies evaluation filters to select eligible hosts based on computing capacity, CPU cores, RAM and image properties. The RAS in the cloud controller must then decide on which host to run the user application in a VM. Early studies show two host-level resource allocation strategies, Random (rand) and Max Core First (mcf), used during resource provisioning (Hu et al. 2013).
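A small sketch of these two strategies under assumed data structures (the Host record here is hypothetical) could look as follows:

```python
import random
from dataclasses import dataclass

@dataclass
class Host:
    host_id: str
    free_cores: int

def select_rand(eligible: list) -> Host:
    """Random (rand): pick any host that passed the evaluation filters."""
    return random.choice(eligible)

def select_mcf(eligible: list) -> Host:
    """Max Core First (mcf): pick the host with the most free CPU cores."""
    return max(eligible, key=lambda h: h.free_cores)

hosts = [Host("h1", 2), Host("h2", 8), Host("h3", 4)]
print(select_mcf(hosts).host_id)  # h2
```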

4.3 SPECULATIVE RESOURCE PROVISIONING MODEL (SRPM)

Many high-performance systems, such as multiprocessors and file systems, adopt a speculation approach for quick response. Speculation is the process of executing an operation before it is known whether it is needed.

4.3.1 Speculative Predictor

The speculation mechanism of program execution operates in two ways: (1) predicting subordinate processes, when required, during critical-path processing and verifying the predicted subordinate processes in parallel; and (2) predicting which subordinate processes should proceed to reach an outcome. If it turns out that a subordinate process is not needed, any changes made by it are reverted and its results are ignored. The objective is to provide more concurrency when more resources are available. The use of the speculation approach is illustrated in Figure 4.1. In our Speculative Resource Provisioning (SRP) system, the speculation module executes alongside the resource identification module.

The resource provider offers the resource (R) using a traditional resource allocation mechanism, while our resource predictor module predicts the resources (R') using a pattern matching algorithm (Caron et al. 2011), described in detail in the next section. Figure 4.2 shows that the total time is reduced (to nearly half) when the resource is speculated correctly; in contrast, if speculation fails it adds no extra delay, being equivalent to the time taken without speculation.

[Figure 4.2 depicts the resource provider and resource consumer timelines: the predicted resource R' is validated against the provisioned resource R, with re-execution on the predicted resource if validation fails. The total time with correct speculation is roughly half the time without speculation, while the total time with incorrect speculation equals the time without speculation.]

Figure 4.2 Resource speculation

Branch prediction in computer architecture concentrates on enhancing the execution of pipelined microprocessors by precisely foreseeing, early, whether a change in control flow will happen (Tuah et al. 2003). Our prediction mechanism is logically inspired by hardware-supported Two-Level Adaptive Training Branch Prediction, which supports high-performance processors by minimizing the penalty associated with wrongly predicted branches on the basis of data gathered at run time, as indicated in Figure 4.3.
[Figure 4.3 depicts the branch address hashed by hash functions (HF) into a history table (HT) and a pattern table (PT), with updates flowing between the two tables.]

Figure 4.3 Two-level adaptive prediction mechanism

Predictions are generated by pattern-based predictors, which use patterns of past resource provisioning to forecast future resource allocation. The predictor supports dynamic resource provisioning by instantiating demanded virtual machines a priori, as well as workload administration, framework estimation, capacity planning, deployment and dynamic policy generation in the cloud (Urgaonkar et al. 2010). A client who seeks service relies on cloud brokering services to lease resources for their application. Prediction-based resource measurement and provisioning strategies exist that use machine learning algorithms such as Neural Networks (NN) and Linear Regression (LR); these algorithms are used to design adaptive resource management that satisfies resource demands. Our work differs in that it predicts the performance of a resource a priori, before execution, by considering its past performance. This procedure is motivated by the genuine benefit of assigning the right work to the right individual to obtain better results. The cloud controller receives the user's jobs along with the SLA (Serrano et al. 2016). The SLA specifies the characteristics of the application, the minimum and maximum requirements of the computational resources and the due date.

4.3.2 Pattern Matching Algorithm

The present pattern of cloud resource usage is used to identify the historical patterns that are close to it. We assume that clients operate within the same application domain, and the historical knowledge of pattern matching is used for predicting resource usage. Weighted interpolation is used to estimate the similarity between present and historical patterns, and hence resource usage for the patterns is predicted for capacity planning (Candeia et al. 2015). Pattern matching is close to string matching, and our pattern matching algorithm uses the fundamentals of the Knuth-Morris-Pratt (KMP) string pattern matching algorithm. The KMP algorithm includes a preprocessing step that divides the input data into independent blocks that run in parallel. The KMP matching rule exploits the degenerating property of the pattern (the same sub-patterns appearing more than once within the pattern), and a sliding window approach is used for finding possible matches.

The fundamental idea behind KMP is that it effortlessly predicts the sub-patterns present within the string, even if a mismatch happens during string comparison, and it is scale independent. We use this knowledge to avoid re-matching characters that we already know will match. The KMP algorithm must be slightly adjusted for approximate matching. Two types of approximate matching error, the instant error and the cumulative error, are used in the algorithm to guarantee scale independence during pattern comparison. The instant error is the amount by which the identified sequence differs over a given interval, and the cumulative error is the amount by which the sequence differs from the pattern as a whole. Both errors and the length of the patterns are parameters to be determined for fine-tuning the algorithm's outcomes. To discover the difference between the current entry and the pattern table entry, the value is normalized to a floating-point interval [0, 1] rather than expressed in large whole numbers.
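The following simplified Python sketch illustrates the approximate matching rules described above over normalized usage values in [0, 1]; for brevity it uses a plain sliding window and omits KMP's failure-function skipping, and the function name and thresholds are illustrative assumptions:

```python
def approx_match(history, pattern, inst_err=0.1, cum_err=0.5):
    """Slide the pattern over the normalized history (values in [0, 1]) and
    report start indices where every point-wise difference stays below the
    instant-error bound and the summed difference stays below the
    cumulative-error bound."""
    matches = []
    n, m = len(history), len(pattern)
    for start in range(n - m + 1):
        total = 0.0
        ok = True
        for k in range(m):
            d = abs(history[start + k] - pattern[k])
            if d > inst_err:          # instant error: per-point deviation
                ok = False
                break
            total += d
            if total > cum_err:       # cumulative error: whole-pattern deviation
                ok = False
                break
        if ok:
            matches.append(start)
    return matches

# Normalized CPU-usage history and a recent pattern to locate within it.
history = [0.20, 0.22, 0.55, 0.60, 0.58, 0.21, 0.23]
pattern = [0.55, 0.60, 0.58]
print(approx_match(history, pattern))  # [2]
```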

Our predictor patterns use both CPU and memory, and also consider a scaling component. A potentially unlimited number of users may access applications unpredictably. Such a dynamic, unpredictable workload scenario demands short response time, high reliability and availability from the application. The resource pattern under different workload scenarios is analyzed to predict resource scaling, and decisions are taken on resource pooling to meet on-demand services.

4.3.3 Monitoring

Our system attempts to maximize application performance at an optimal cost for the service provider (Al-Sayed et al. 2016; Voorsluys et al. 2009). Our model is implemented in a hierarchical cloud framework with a centralized cloud controller and instance controllers. The resource submission and resource selection modules are initiated by the cloud controller, while the resource control and monitoring modules are incorporated in the instance controller. The monitoring module finds the current status of computing resources and workload processing. The instance controller intermittently screens resources such as Resource Density and Load Factor, and the current quantitative units are provided to the cloud controller when a new service request arrives. The Resource Density (RD) parameter represents the intensity of computing power per unit of communication bandwidth: the lower the value, the more bandwidth is consumed by communication.

$$RD_i = \frac{\sum_{j=1}^{n} P_j}{B_i} \qquad(4.1)$$

RD_i denotes the Resource Density of host i, aggregated over its set of virtual resources j = 1, ..., n, where P_j is the computing power of virtual resource j and B_i is the communication bandwidth of the host. The other parameter, Load Factor (LF), provides the overall load at a given instant; this is essential to account for the computation component. Resources with high computation relative to communication would prefer the LF unit over RD.

$$ET = \frac{t_1(N)}{p_n} + t_{comm}(N) \qquad(4.2)$$

ET denotes the execution time to run N jobs on p_n processors, t_1(N) is the sequential time and t_comm(N) is the communication time. The communication overhead may depend on factors such as network latencies, message delays and communication volumes.

$$EC_i = \phi \cdot ET_i \qquad(4.3)$$

EC_i is the execution cost of the application run on host i, and φ is the unit cost determined by the resource provider. The cost may comprise many resource components, such as CPU utilization rate, memory usage, network bandwidth consumption and disk accesses.

$$\phi = \phi_{cpu} + \phi_{mem} + \phi_{net} + \phi_{disk} \qquad(4.4)$$

The workload characteristics include service-request-generated transactions, interactive commands, processes and threads, HTTP requests and service usage, which are the major components of our workload monitoring model. The intensity of a workload is characterized by the request arrival rate and the service demand, such as the number of clients and the number of processes or threads in execution. These are the factors describing resource consumption during the workload, such as processor time, memory usage, I/O time and disk operations. The pair (CPU time, I/O time) characterizes the execution of a request at the server. For an approximation of resource (CPU, memory, network) usage related to the number of client sessions, we dissect the workload of the service over 6 months.

Workload identification is critical when predicting the accurate resources needed to forecast resource prerequisites at the right moment. Our monitoring module estimates the number of request arrivals in each measurement interval, and the sequence of values forms a time series. Using this time series, we predict the next arrival values with a simple linear regression model. The service demand of each request forecasts the load imposed by the request. To estimate the service demand we compute the probability distribution of the per-request service demand. The sampled data set is then used to automate resource provisioning by multiplexing the appropriate number of Virtual Machines (VMs) onto the hosts/Physical Machines (PMs) based on the workload.

The client sends its demand to the cloud controller along with the SLA. In view of the client SLA, the cloud controller alleviates computational risk management and guarantees that the client's SLA never violates the cloud infrastructure QoS. If this check succeeds, the request is processed further by submitting it to the instance controller. When a request is submitted to the instance controller, the Speculative Resource Provisioning (SRP) module is executed in parallel to determine the prior resource requirement for the request.

4.3.4 History Table and Pattern Table

Our predictor maintains a history table and a pattern table for each service requisition conforming to the SLAs. The history table is a sequence of n history entries recording the most recent service requests along with their SLAs. The pattern table is a record of all observed sequences of requests alongside the predicted patterns, as given in Figure 4.4.

[Figure 4.4 depicts the history table, which records the <REQ, SLA> entries received up to time t and indexes, with history depth 1, into the pattern table, whose entries hold the service id, RD, LF, ET, EC and confidence (conf) values: the efficient speculative values for the SLA.]

Figure 4.4 Two-level predictor



When the cloud controller receives a service request (Sreq), it validates the request to ensure that it does not violate the service provider's QoS, and then stores it in the history table before searching for resource availability. Whenever entries are made in the history table (HT), the pattern table (PT) is searched to ascertain whether any entries match the current contents of the history table, using the KMP algorithm discussed above. The pattern table contains application characteristics along with specifications such as Resource Density (RD), Load Factor (LF), Execution Time (ET) and Execution Cost (EC). The pattern values are the exact resource requirements for the user's job specification based on past computation. If there is no match between the service request pattern and the patterns within the pattern table, the service pattern is recorded as a new entry in the pattern table. The cloud controller runs a bin-packing optimization scheduling algorithm during resource discovery, and the resource demand produced by the speculation module is taken into consideration.
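The lookup logic can be sketched as follows (the class name TwoLevelPredictor and the dictionary-based pattern table are illustrative simplifications standing in for the KMP-based search over table entries described above):

```python
from collections import deque

class TwoLevelPredictor:
    """History table of the n most recent <REQ, SLA> keys; pattern table maps
    a history tuple to the speculated resource values (RD, LF, ET, EC, conf)."""

    def __init__(self, depth=1):
        self.history = deque(maxlen=depth)   # history table (HT)
        self.patterns = {}                   # pattern table (PT)

    def lookup(self, request_key):
        self.history.append(request_key)
        key = tuple(self.history)
        if key in self.patterns:
            return self.patterns[key]        # speculate with recorded values
        # No match: record a new, empty pattern entry to be filled after execution.
        self.patterns[key] = None
        return None

    def record(self, resources):
        """After execution, store the observed resource figures for this history."""
        self.patterns[tuple(self.history)] = resources

p = TwoLevelPredictor(depth=1)
print(p.lookup(("job-A", "sla-gold")))       # None: first sighting
p.record({"RD": 0.8, "LF": 0.4, "ET": 120, "EC": 3.5, "conf": 1})
print(p.lookup(("job-A", "sla-gold")))       # speculated values on repeat
```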

Dynamic resource scaling can produce inappropriate patterns in situations such as the Slashdot effect or a sudden traffic surge, and these must be considered carefully during pattern table entry. Meanwhile, the multi-tenant nature of the cloud calls for a statistical standard deviation (σ) approach for normalizing patterns, especially for performance parameters such as resource density and load factor.

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2} \qquad(4.5)$$

where N is the total number of patterns, μ is the mean value of the patterns and x_i is the set of patterns.

4.3.5 Speculative Resource Provisioning

Our speculation-based Resource Allocation System uses two operational messages, named SPEC (speculation) and REQ (resource request). Whenever the cloud controller receives a resource requisition, it also receives the resource list with current resource parameters such as RD and LF from the instance controller. In the meantime it performs traditional resource identification using the REQ operation to spot the available resources. Once a pattern of resource provisioning for the service request is acquired from the pattern table, the controller can use this information to predict future provisions for similar requests and speculatively issue a SPEC operation for the potential resource. The user request is then handled by the SPEC-initiated resource. The execution time and cost are also judged in advance via the pattern table entries; if they deviate slightly, the best values are recorded along with the Execution Time (ET) and Execution Cost (EC) in the pattern table. Conceptually, the controller should also attempt to issue SPEC whenever a resource is scaled up or down.

When a server receives a SPEC request, it first checks the time stamp to ensure the instantaneous availability of virtual resources. If this check fails, it generates a response message that sets the confidence level of the pattern that triggered the SPEC request to the minimum value, inhibiting future SPEC requests until the resource provisioning pattern is re-established. If the resource receives the REQ message after the SPEC message, the confidence level (conf) is incremented, ensuring high confidence for future predictions. Our algorithm still avoids part of the resource allocation latency by issuing server state updates based on the SPEC request and allowing the controller to resume processing; meanwhile, when the response to the resource request arrives at the remote node, it is discarded.
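A simplified server-side sketch of this message handling is given below; the class and method names are hypothetical, and a wall-clock age check stands in for the vector time stamp comparison used in our model:

```python
import time

MIN_CONF, MAX_CONF = 0, 4

class InstanceServer:
    """Server-side handling of SPEC and REQ messages with pattern confidence."""

    def __init__(self):
        self.conf = {}          # pattern id -> confidence counter
        self.spec_seen = set()  # patterns whose SPEC arrived before the REQ

    def on_spec(self, pattern_id, timestamp, max_age=5.0):
        if time.time() - timestamp > max_age:
            # Stale: resources may no longer be instantly available, so
            # inhibit future SPECs until the pattern is re-established.
            self.conf[pattern_id] = MIN_CONF
            return
        self.spec_seen.add(pattern_id)

    def on_req(self, pattern_id):
        if pattern_id in self.spec_seen:
            # SPEC preceded REQ: the prediction arrived early enough.
            c = self.conf.get(pattern_id, 0)
            self.conf[pattern_id] = min(MAX_CONF, c + 1)
            self.spec_seen.discard(pattern_id)

s = InstanceServer()
s.on_spec("web-tier", time.time())
s.on_req("web-tier")
print(s.conf)  # {'web-tier': 1}
```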

When resources are scaled up or down, the servers encountering load variations issue SPEC while waiting for the remaining similar servers to adopt the load variation. This methodology lessens the overhead of speculative processing by overlapping it with the latency of newly configured cloud resources. Once all servers are configured, any remaining speculatively processing servers are permitted to depart from the adjustment of load variation and resource density. Speculative processing ensures that no idleness is added because of workload variations. The controller executes a speculative action, updating the load variation on the predicted server using a SPEC request message. In addition to the resource provisioning parameters, the SPEC request contains the current vector time stamp. The controller records speculative activities in the history table as though they had been triggered by an actual resource request. Neglecting to do so could lead our predictor to observe false patterns: for example, if a SPEC request is not recorded in the history table but succeeds in avoiding resource provisioning latency, the predictor would miss recording this as a pattern.

At this stage, if a node was detached during scale-down, it is not considered for resource prediction in future iterations (Doyle et al. 2016; Shyam & Manvi 2016). In the event of a large load and resource density deviation after scaling up or down, the server is idle and willing to assign resources to a newly arrived request. This recently changed state of the server is updated in the controller. Finally, by the time the SPEC request arrives, the host server may already have issued its updated state to the controller in lieu of providing resources for the SPEC request. In this situation, the predictor has effectively predicted the server, but not early enough to avoid a load and resource conflict.

4.3.6 Confidence Estimation

Confidence estimation is a process for assessing the performance of the prediction. The confidence level (labeled conf in Figure 4.4) distinguishes the patterns having a low probability of predicting future accesses. The confidence level is maintained within a small range, and increments and decrements beyond this range simply saturate at the high or low confidence value respectively. For every prediction, the confidence estimator assigns a counter value to track high confidence (C_HC) or low confidence (C_LC) for the predictor. The high confidence counter is incremented when our speculation works early enough to predict the exact resources for the request, i.e., the SPEC message precedes the REQ message. The low confidence counter reflects a delay in predicting the resource, i.e., the REQ message precedes the SPEC message. The prediction hit (pred_hit) and prediction miss (pred_miss) ratios are calculated to evaluate prediction accuracy.

$$pred\_hit = \frac{C_{HC}}{C_{HC} + C_{LC}} \qquad(4.6)$$

$$pred\_miss = \frac{C_{LC}}{C_{HC} + C_{LC}} \qquad(4.7)$$

An increase in the prediction hit ratio improves QoS measures such as responsiveness and throughput.
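Under the counter-ratio reading of Equations (4.6) and (4.7) reconstructed above, the accuracy computation reduces to:

```python
def prediction_accuracy(c_hc, c_lc):
    """Equations (4.6) and (4.7): hit and miss ratios from the confidence counters."""
    total = c_hc + c_lc
    return c_hc / total, c_lc / total

hit, miss = prediction_accuracy(c_hc=38, c_lc=12)
print(f"pred_hit={hit:.2f}, pred_miss={miss:.2f}")  # pred_hit=0.76, pred_miss=0.24
```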

4.3.7 Dispatcher / Scheduler

The workload of a web application varies over time. Static allocation of resources to an application is practically problematic, as it can lead to resource over/under-provisioning that results in SLA violations. An alternative approach is to allocate resources to applications based on the workload: each application is given a minimum share of resources, and the remaining resource capacity is allocated to applications based on their instantaneous needs. The cloud data center model is categorized into two approaches: the dedicated model and the shared model. In the dedicated model, a cluster of nodes is statically configured for an application's needs, and the number of nodes to allocate to each application is determined a priori. In the shared model, applications can share resources with other applications.

Most cloud data center models are structured as shared models, thanks to Generalized Processor Sharing (GPS). In our work we model our resources with n queues, where each queue corresponds to a particular application. Application tasks may vary depending on their computing capacity. In a multi-tiered web application the components involved are the DNS server for the user interface, the application server for the business logic and the back-end database server (Aljazzaf 2015).

When a user performs read-only tasks in the web application, only the DNS server faces the workload compared with the other components. Hence the queues are assigned weights based on their task performance.

In our model the weights of the queues are determined by the speculative resource provisioning model, by speculating on the task performed by the user through patterns. The Weighted Fair Queuing (WFQ) scheduling algorithm is applied to fairly allocate resources to backlogged queues from any queue that does not utilize its allocated share. Server resources are classified as hardware resources, such as CPU, memory, disk bandwidth and network bandwidth, and software resources, which include sockets, session management and file interfaces. A significant Quality of Service (QoS) requirement of an application is the mean response time, which is mentioned in the SLA as the target response time. The response time is measured as the amount of time between the request being made and the first response being received. In cloud scheduling, setting up the virtual instance for performing the request and allocating the appropriate host to the instance is termed the first response. The virtual resources are pre-compiled images of varying sizes, such as small, medium, large and extra large, based on their computing units (CPU cores, RAM size, storage capacity and network bandwidth).
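A compact sketch of this weighted allocation with redistribution of unused share (a water-filling style computation; the function name and example weights are our own assumptions) is:

```python
def wfq_allocate(capacity, weights, demands):
    """Give each queue its weighted share of capacity; redistribute any share
    unused by under-loaded queues to the still-backlogged ones, in proportion
    to their weights."""
    alloc = {q: 0.0 for q in weights}
    active = set(weights)
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(weights[q] for q in active)
        spare = 0.0
        for q in list(active):
            share = remaining * weights[q] / total_w
            need = demands[q] - alloc[q]
            if need <= share:
                alloc[q] += need      # queue satisfied; its surplus is spare
                spare += share - need
                active.discard(q)
            else:
                alloc[q] += share     # queue stays backlogged
        remaining = spare
    return alloc

# DNS tier lightly loaded, app and DB tiers backlogged.
print(wfq_allocate(100.0, {"dns": 1, "app": 2, "db": 2},
                   {"dns": 5, "app": 80, "db": 70}))
# {'dns': 5.0, 'app': 47.5, 'db': 47.5}
```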

4.3.8 Load Balancer

The performance of a queue is measured by its arrival rate and service rate. When the length of the queue increases over a period of time, the service rate is lower than the arrival rate; when the queue length is negligible, the arrival rate is lower than the service rate. Under-provisioning of resources causes increased queue length, and over-provisioning of resources results in negligible queue length. The queue length is judged against threshold values. Conserving the right amount of resources through prediction yields a fair queue length and efficient utilization of resources. The load balancer is also initiated through WFQ by knowing in advance the change that will happen in the workload characteristics. Each incoming request is serviced by hardware and software resources on the server, so the response time is composed of multiple resource-specific response times. The speculation-induced scheduler and load balancer together drive the achievement of the target response time.

$$Q(t) = \max\left(0,\; Q_0 + (\lambda - \mu)\,t\right) \qquad(4.8)$$

$$\mu_i = \frac{w_i\, C}{d_i} \qquad(4.9)$$

The length of the queue Q(t) at an instant of time is measured as the sum of the initial queue length Q_0 and the arrival rate λ of the tasks minus the service rate μ over the time interval t; the length of the queue must not be negative.

The service rate μ_i of an application is measured as its resource share w_i times the computing capacity C, divided by the service demand per request d_i. The service demand per request is, for example, the number of CPU cycles per CPU request or the number of bytes per packet. The queue may sometimes become empty, depending on the arrival and service rates over a period of time.

$$\hat{T} = \min\left(T,\; \frac{Q_0}{\mu_i - \lambda}\right) \qquad(4.10)$$

$$R_i = \frac{\bar{Q}}{\lambda} + \frac{1}{\mu_i} \qquad(4.11)$$

Substituting Equation (4.9) into (4.11):

$$R_i = \frac{\bar{Q}}{\lambda} + \frac{d_i}{w_i C} \qquad(4.12)$$

Here T̂ is the non-empty period of the queue within the measurement interval T, and the average response time R_i is measured as the sum of the mean queuing delay and the request service time. From Equation (4.12), there is a non-linear relationship between the response time R_i and the resource share w_i.

We use an on-line optimization approach to estimate the resource share dynamically. In this technique, workload estimation plays the crucial role. Workload characteristics are classified based on the request arrival and service demand distributions.
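Assuming the reconstructed relationship in Equation (4.12), the on-line estimation can be sketched by inverting it for the resource share that meets a target response time:

```python
def required_share(target_r, mean_queue, arrival_rate, demand, capacity):
    """Invert Equation (4.12): the resource share w needed so that the
    average response time meets the SLA target."""
    queuing_delay = mean_queue / arrival_rate
    service_budget = target_r - queuing_delay
    if service_budget <= 0:
        return 1.0  # target unreachable: cap at the full share
    return min(1.0, demand / (capacity * service_budget))

# 0.5 s target, mean queue of 4 requests at 20 req/s, 10^7 cycles/request
# on a 10^8 cycles/s capacity slice.
print(required_share(0.5, 4, 20, 1e7, 1e8))  # ~0.333
```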

4.4 RESOURCE ALLOCATION SYSTEM

The Resource Allocation System (RAS) in cloud computing is defined as the process of assigning resources to the applications that need them. An efficient RAS must address the challenges of scarcity of resources, resource contention, and over- and under-provisioning of resources. Resource modeling is the important consideration during resource allocation. In the queuing resource model discussed above, we make on-line control decisions based on changes in the application workload. The RAS is driven by controlling policies, namely the Service Allocation Policy and the VM Allocation Policy.

4.4.1 Service Allocation Policy

Each application has its own admission control policy to check whether a new request can be admitted. In our model, we use this module to control the decision of resource allocation to the request, to achieve maximum utilization of resources at an economical cost. This module acts as the resource allocation controller component. The resources provided for the request are configured based on the speculated capacity, and a SPEC message is created and sent to the host. The parameters of the SPEC message are the host id, max_num_VMInstance, resc_Required (CPU, memory, network bandwidth) and packet_size. The capacity planning decision, ensuring sufficient capacity is aggregated just in time when required, is undertaken in this module. The Service Allocation Policy includes determining the resource pool capacity needed to support the workload characteristics and adjusting the capacity of a resource pool.

4.4.2 Resource Allocation Policy

The data center manages several hosts, which in turn manage several virtual instances. The resources represented in this module are hosts and Virtual Machines. Virtual machines are pre-configured and remain isolated until launched on a host. The Infrastructure as a Service layer can be extended via the data-center component in CloudSim. The allocation policy is implemented for both hosts and VMs. The VM allocation policy determines the mapping of a number of VMs onto the hosts that match their critical characteristics (storage, memory) and configuration (software environment). Our speculative resource provisioning module predicts hardware characteristics such as the number of cores, CPU share, amount of memory and number of VMs to be allocated to a host, for more efficient resource provisioning.
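A minimal first-fit sketch of such a VM allocation policy, with hypothetical host and VM records standing in for CloudSim's Java classes, is shown below:

```python
from dataclasses import dataclass

@dataclass
class HostSpec:
    host_id: str
    free_cores: int
    free_memory_gb: float
    images: frozenset  # software environments available on the host

@dataclass
class VmSpec:
    cores: int
    memory_gb: float
    image: str

def place_vm(vm: VmSpec, hosts: list):
    """First host matching the VM's critical characteristics (simplified here
    to cores and memory) and software configuration."""
    for h in hosts:
        if (h.free_cores >= vm.cores
                and h.free_memory_gb >= vm.memory_gb
                and vm.image in h.images):
            h.free_cores -= vm.cores
            h.free_memory_gb -= vm.memory_gb
            return h.host_id
    return None  # no eligible host: defer or scale out

hosts = [HostSpec("h1", 2, 4.0, frozenset({"ubuntu"})),
         HostSpec("h2", 8, 32.0, frozenset({"ubuntu", "centos"}))]
print(place_vm(VmSpec(4, 8.0, "centos"), hosts))  # h2
```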

4.5 DYNAMIC SCHEDULING OF VIRTUALIZED RESOURCES IN SRPM

Cloud computing technology facilitates computing-intensive applications on dynamically provisioned virtualized resources. Elasticity of cloud jobs is a challenging research problem in the cloud environment. The working methodology of dynamic scheduling is shown in Figure 4.5.

[Figure 4.5 depicts the SLA feeding the SLA violation estimation, which feeds the speculative resource provisioning module; the service allocation policy and resource allocation policy then drive the dispatcher/scheduler.]

Figure 4.5 Dynamic scheduling in the SRP model

Dynamic scheduling in cloud computing is responsible for selecting suitable resources from a massive pool for task execution. In our framework, when tasks are submitted by the user, the cloud controller checks whether they violate the QoS and SLA and whether they can be accepted. The SLA defines the service quality parameters required by the user from the provider; the risk management aspect of the SLA is outside the focus of our work. A user's service requirements may change over time, and hence our speculative resource provisioning module self-configures the resource components according to the service request. The tasks are quickly dispatched to suitable pre-configured VMs by the dispatcher module, and appropriate processing power is allotted to the VMs by the host machine through the Service Allocation Policy and Resource Allocation Policy.

4.6 ENERGY AWARE RESOURCE PROVISIONING IN SRPM

The challenges facing future datacenters are maintenance and energy consumption (Chen et al. 2008). A cloud datacenter has enormous computing resources, and because of virtualization it is portrayed to the user as offering infinite resources.
[Figure 4.6 depicts the monitoring module feeding the speculative predictor, which feeds the speculative resource provisioning module.]

Figure 4.6 Energy-aware resource provisioning in the SRP model

To support modeling and simulation of different power consumption models and power management techniques, such as Dynamic Voltage and Frequency Scaling (DVFS), the monitoring components periodically monitor the performance status of the servers, such as load condition and processing share, and return the power consumption value, as shown in Figure 4.6. The energy consumed during resource utilization is recorded by the speculative mechanism to capture the energy efficiency of the resources during task execution. Speculative resource provisioning for self-configured resource utilization must be aware of the energy consumption of task execution in order to improve energy efficiency.
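As an illustration, a widely used linear server power model (not the thesis's own model; the idle and peak wattages are placeholder values) relates utilization to power draw, from which per-task energy follows:

```python
def host_power(utilization, p_idle=100.0, p_max=250.0):
    """Common linear server power model: idle draw plus a utilization-
    proportional dynamic component (utilization in [0, 1])."""
    return p_idle + (p_max - p_idle) * utilization

def energy_joules(utilization, seconds):
    """Energy consumed while running a task at a steady utilization."""
    return host_power(utilization) * seconds

print(host_power(0.6))          # 190.0 W
print(energy_joules(0.6, 120))  # 22800.0 J over a 2-minute task
```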

4.7 APPLICATION SCALING AND PROVISIONING IN SRPM

Cloud resource provisioning instantiates virtual resources based on user requirements. Resource pooling of servers is essential to permit multiple application workloads to share each server in the pool. Delays of several minutes occur when initializing virtual instances.

[Figure 4.7 depicts the monitoring module feeding the speculative predictor and the speculative resource provisioning module, which drive the dispatcher through the service allocation policy and resource allocation policy.]

Figure 4.7 Application scaling and provisioning in the SRP model

To support application scaling in our simulator, as shown in Figure 4.7, we scaled the user sessions up and down to observe the performance of task execution. The monitoring module periodically tracks the status of resources, such as resource density and load factor, which are given as input to the speculative resource provisioning module. The speculative resource provisioning module issues the SPEC message to the dispatcher for resource configuration a priori, by correctly guessing the events of load fluctuation. The weighted fair queuing algorithm is implemented to share the processing unit with needy queues, in order to avoid under-provisioning of resources. A greedy algorithm is implemented in the resource allocation policy to quickly allocate the task to the predicted VM (Nejad et al. 2015).
4.8 RESOURCE PRICING AND MEASURING TRUSTWORTHINESS IN SRPM

The pricing model is the space where cloud users are attracted by the service provider. The computing resources provided are concealed in the datacenter. In a cloud investment market, the user and provider have different objectives and requirements. Trustworthiness between user and provider will increase the popularity of the cloud platform.

[Figure 4.8 depicts SLA violation estimation feeding the speculative predictor and the speculative resource provisioning module, with metering and pricing, accounting, and the dispatcher/scheduler completing the loop.]

Figure 4.8 Resource pricing and measuring trustworthiness in the SRP model

The SLA Violation Estimation module validates the service request and sends it to the speculative predictor, as shown in Figure 4.8. The speculative predictor uses the history table and pattern table to consult the historical records of the service request. The accuracy of the predictor is assured by the confidence estimation procedure. The resource configuration decision is made in the Speculative Resource Provisioning module after getting the response from the speculative predictor. The resource configuration and capacity planning information is fed to the dispatcher/scheduler to prepare messages for the appropriate host. The task completion time is accounted by the accounting module, and the resource unit cost for executing the task is computed by the metering and pricing module.

4.9 SUMMARY

The objective of cloud resource management is quick provisioning of resources for client requests. Server workload analysis influences the resource provisioning strategy. Hence there is a need for an intelligent and adaptive framework that understands changes in workload and the occurrence of transactions on the server, to make quick decisions on resource allocation and provisioning. We also evaluated the effectiveness of our methodology, which avoids the cause of SLA violations by providing adequate resources with lower energy consumption. From the results we observed that our SRPM adapts to changes in workload and intelligently provides resources based on the expected service request occurrences derived from historical records. The summative cloud resource management frameworks and their features are shown in Table 4.1.

Table 4.1 Cloud Resource Management Frameworks

Framework           Dynamic     Energy-Aware    Application     Resource Pricing
                    Scheduling  Resource        Scaling and     and Measuring
                                Provisioning    Provisioning    Trustworthiness
PRESS (2010)        No          No              Yes             No
BOSS (2013)         Yes         No              No              Yes
DRIVE (2013)        No          Yes             No              No
CDARA (2014)        No          No              No              Yes
ANGEL (2015)        Yes         Yes             No              No
FESTAL (2015)       Yes         Yes             No              No
CloudArmor (2016)   No          No              No              Yes
FASTER (2016)       Yes         Yes             No              No
HEIFER (2016)       Yes         Yes             Yes             No
SRPF (2017)         Yes         Yes             Yes             Yes
