
St. Angelos Professional Education

Cloud Infrastructure (Networking)

INDEX

Sr. No.   Particular                                   Page Number
1         Introduction to Cloud Computing              4
2         Web Application                              10
3         Cloud Server Virtualisation                  14
4         Installing & Configuring Virtual Server      21
5         Virtualisation                               39
6         Windows Server 2008 Hyper-V                  46
7         Configuration & Management of Hyper-V        53
8         Google Apps & Microsoft Office 365           61
9         Web Application Security                     74
10        Cloud Interoperability and Solutions         80
11        Backup and Recovery of Cloud Data            85
12        Server Performance & Monitoring              91


Introduction

The boom in cloud computing over the past few years has led to a situation that is common to many
innovations and new technologies: many have heard of it, but far fewer actually understand what it is and,
more importantly, how it can benefit them. This is an attempt to clarify these issues by offering a
comprehensive definition of cloud computing, and the business benefits it can bring.

In an attempt to gain a competitive edge, businesses are increasingly looking for new and innovative ways
to cut costs while maximizing value, especially now during a global economic downturn. They
recognize that they need to grow, but are simultaneously under pressure to save money. This has forced
the realization that new ideas and methods may produce better results than the tried and tested formulas of
yesteryear. It is the growing acceptance of innovative technologies that has seen cloud computing become
the biggest buzzword in IT.

However, before an organization decides to make the jump to the cloud, it is important to understand
what, why, how and from whom. Not all cloud computing providers are the same. The range and quality
of services on offer varies tremendously, so we recommend that you investigate the market thoroughly,
with a clearly defined set of requirements in mind.


Chapter 1

Introduction to Cloud Computing


What is cloud computing?
Many people are confused as to exactly what cloud computing is, especially as the term can be used to
mean almost anything. Roughly, it describes highly scalable computing resources provided as an external
service via the internet on a pay-as-you-go basis. The cloud is simply a metaphor for the internet, based on
the symbol used to represent the worldwide network in computer network diagrams.
Economically, the main appeal of cloud computing is that customers only use what they need, and only
pay for what they actually use. Resources are available to be accessed from the cloud at any time, and
from any location via the internet. There's no need to worry about how things are being maintained behind
the scenes; you simply purchase the IT service you require as you would any other utility. Because of
this, cloud computing has also been called utility computing, or "IT on demand".
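To make the utility idea concrete, here is a small worked example in Python. The hourly rate and the in-house cost figure are invented for illustration; real providers publish their own prices.

# Illustrative only: the rates and usage below are invented, not from any real provider.

HOURLY_RATE = 0.10          # assumed cost per server-hour on a pay-as-you-go plan
OWNED_SERVER_MONTHLY = 400  # assumed amortized monthly cost of an equivalent in-house server

def utility_bill(hours_used: float) -> float:
    """Bill only for the hours actually consumed, like any other utility."""
    return hours_used * HOURLY_RATE

# A workload that runs 6 hours a day for 20 working days:
hours = 6 * 20
print(f"Cloud (utility) cost: ${utility_bill(hours):.2f}")   # $12.00
print(f"In-house fixed cost:  ${OWNED_SERVER_MONTHLY:.2f}")  # paid whether used or not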
This new, web-based generation of computing utilizes remote servers housed in highly secure data centres
for data storage and management, so organizations no longer need to purchase and look after their IT
solutions in-house.
What does it comprise?
Cloud computing can be visualized as a pyramid consisting of three
sections:
Cloud Application
This is the apex of the cloud pyramid, where applications are run and
interacted with via a web browser, hosted desktop or remote client. A
hallmark of commercial cloud computing applications is that users
never need to purchase expensive software licenses themselves.
Instead, the cost is incorporated into the subscription fee. A cloud
application eliminates the need to install and run the application on the
customer's own computer, thus removing the burden of software
maintenance, ongoing operation and support.
Cloud Platform
The middle layer of the cloud pyramid, which provides a computing platform or framework as a service.
A cloud computing platform dynamically provisions, configures, reconfigures and de-provisions servers
as needed to cope with increases or decreases in demand. This in reality is a distributed computing model,
where many services pull together to deliver an application or infrastructure request.
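As a rough illustration of the provision/de-provision behaviour described above, here is a minimal Python sketch. The thresholds and the ServerPool class are hypothetical, not any vendor's actual mechanism.

class ServerPool:
    """A toy model of a platform layer that scales server count with demand."""

    def __init__(self, min_servers: int = 1):
        self.servers = min_servers
        self.min_servers = min_servers

    def rebalance(self, avg_load: float) -> None:
        # avg_load is the average utilisation per server, from 0.0 to 1.0.
        if avg_load > 0.80:                 # demand rising: provision another server
            self.servers += 1
        elif avg_load < 0.30 and self.servers > self.min_servers:
            self.servers -= 1               # demand falling: de-provision one

pool = ServerPool()
for load in [0.9, 0.95, 0.85, 0.2, 0.1]:    # simulated load samples
    pool.rebalance(load)
    print(pool.servers)                     # prints 2, 3, 4, 3, 2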
Cloud Infrastructure
The foundation of the cloud pyramid is the delivery of IT infrastructure through virtualisation.
Virtualisation allows the splitting of a single physical piece of hardware into independent, self-governed
environments, which can be scaled in terms of CPU, RAM, Disk and other elements. The infrastructure
includes servers, networks and other hardware appliances delivered as either Infrastructure Web
Services, farms or "cloud centres". These are then interlinked with others for resilience and additional
capacity.
Types of Cloud Computing
1. Public Cloud
Public cloud (also referred to as external cloud) describes the conventional meaning of cloud computing:
scalable, dynamically provisioned, often virtualised resources available over the Internet from an off-site
third-party provider, which divides up resources and bills its customers on a utility basis. An example is
ThinkGrid, a company that provides a multi-tenant architecture for supplying services such as Hosted
Desktops, Software as a Service and Platform as a Service. Other popular cloud vendors include
Salesforce.com, Amazon EC2 and FlexiScale.
2. Private Cloud
Private cloud (also referred to as corporate or internal cloud) is a term used to denote a proprietary
computing architecture providing hosted services on private networks. This type of cloud computing is
generally used by large companies, and allows their corporate network and data centre administrators to
effectively become in-house service providers catering to customers within the corporation. However,
it negates many of the benefits of cloud computing, as organizations still need to purchase, set up and
manage their own clouds.
3. Hybrid Cloud
It has been suggested that a hybrid cloud environment combining resources from both internal and
external providers will become the most popular choice for enterprises. For example, a company could
choose to use a public cloud service for general computing, but store its business-critical data within its
own data centre. This may be because larger organisations are likely to have already invested heavily in
the infrastructure required to provide resources in-house, or they may be concerned about the security of
public clouds.

Services used on the cloud


There are numerous services that can be delivered through cloud computing, taking advantage of the
distributed cloud model. Here are some brief descriptions of a few of the most popular cloud-based IT
solutions:
1. Hosted Desktops
Hosted desktops remove the need for traditional desktop PCs in the office environment, and reduce the
cost of providing the services that you need. A hosted desktop looks and behaves like a regular desktop
PC, but the software and data customers use are housed in remote, highly secure data centres, rather than
on their own machines. Users can simply access their hosted desktops via an internet connection from
anywhere in the world, using either an existing PC or laptop or, for maximum cost efficiency, a
specialised device called a thin client.
2. Hosted Email
As more organisations look for a secure, reliable email solution that will not cost the earth, they are
increasingly turning to hosted Microsoft Exchange email plans. Using the world's premier email platform,
this service lets organisations both large and small reap the benefits of using MS Exchange accounts
without having to invest in the costly infrastructure themselves. Email is stored centrally on managed
servers, providing redundancy and fast connectivity from any location. This allows users to access their
email, calendar, contacts and shared files by a variety of means, including Outlook, Outlook Mobile
Access (OMA) and Outlook Web Access (OWA).
3. Hosted Telephony (VOIP)
VOIP (Voice Over IP) is a means of carrying phone calls and services across digital internet networks. In
terms of basic usage and functionality, VOIP is no different to traditional telephony, and a VOIP-enabled
telephone works exactly like a 'normal' one, but it has distinct cost advantages. A hosted VOIP system
replaces expensive phone systems, installation, handsets, BT lines and numbers with a simple, cost-efficient alternative that is available to use on a monthly subscription basis. Typically, a pre-configured
handset just needs to be plugged into your broadband or office network to allow you to access features
such as voicemail, IVR and more.
4. Cloud Storage
Cloud storage is growing in popularity due to the benefits it provides, such as simple, CapEx-free costs,
anywhere access and the removal of the burden of in-house maintenance and management. It is basically
the delivery of data storage as a service, from a third party provider, with access via the internet and
billing calculated on capacity used in a certain period (e.g. per month).
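As a worked example of capacity-based billing, the sketch below averages the capacity held over a month. The per-GB rate and the metering approach are assumptions; each provider defines its own billing formula.

RATE_PER_GB_MONTH = 0.02  # assumed price per GB stored per month

def storage_bill(daily_gb: list) -> float:
    """Bill on the average capacity held over the billing period."""
    avg_gb = sum(daily_gb) / len(daily_gb)
    return avg_gb * RATE_PER_GB_MONTH

# 500 GB held for the first 15 days, 800 GB for the last 15:
usage = [500.0] * 15 + [800.0] * 15
print(f"${storage_bill(usage):.2f}")  # average 650 GB -> $13.00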
5. Dynamic Servers
Dynamic servers are the next generation of server environment, replacing the conventional concept of the
dedicated server. A provider like ThinkGrid gives its customers access to resources that look and feel
exactly like a dedicated server, but that are fully scalable. You can directly control the amount of
processing power and space you use, meaning you don't have to pay for hardware you don't need.
Typically, you can make changes to your dynamic server at any time, on the fly, without the costs
associated with moving from one server to another.
Why switch from traditional IT to the cloud?
There are many reasons why organisations of all sizes and types are adopting this model of IT. It provides
a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training
new personnel, or licensing new software. Ultimately, it can save companies a considerable amount of
money.
1. Removal / reduction of capital expenditure
Customers can avoid spending large amounts of capital on purchasing and installing their IT infrastructure
or applications by moving to the cloud model. Capital expenditure on IT reduces available working capital
for other critical operations and business investments. Cloud computing offers a simple operational
expense that is easier to budget for month-by-month, and prevents money being wasted on depreciating
assets. Additionally, customers do not need to pay for excess resource capacity in-house to meet
fluctuating demand.
2. Reduced administration costs
IT solutions can be deployed extremely quickly and managed, maintained, patched and upgraded remotely
by your service provider. Technical support is provided round the clock by reputable providers like
ThinkGrid for no extra charge, reducing the burden on IT staff. This means that they are free to focus on
business-critical tasks, and businesses can avoid incurring additional manpower and training costs. IT
giant IBM has pointed out that cloud computing allows organisations to streamline procurement
processes, and eliminates the need to duplicate certain computer administrative skills related to setup,
configuration, and support.
3. Improved resource utilisation
Combining resources into large clouds reduces costs and maximizes utilisation by delivering resources
only when they are needed. Businesses needn't worry about over-provisioning for a service whose use
does not meet their predictions, or under-provisioning for one that becomes unexpectedly popular. Moving
more and more applications, infrastructure, and even support into the cloud can free up precious time,
effort and budgets to concentrate on the real job of exploiting technology to improve the mission of the
company. It really comes down to making better use of your time, focusing on your business and
allowing cloud providers to manage the resources to get you to where you need to go. Sharing computing
power among multiple tenants can improve utilisation rates, as servers are not left idle, which can reduce
costs significantly while increasing the speed of application development. A side effect of this approach is
that computer capacity rises dramatically, as customers do not have to engineer for peak loads.
4. Economies of scale
Cloud computing customers can benefit from the economies of scale enjoyed by providers, who typically
use very large-scale data centres operating at much higher efficiency levels, and multi-tenant architecture
to share resources between many different customers. This model of IT provision allows them to pass on
savings to their customers.
5. Scalability on demand
Scalability and flexibility are highly valuable advantages offered by cloud computing, allowing customers
to react quickly to changing IT needs, adding or subtracting capacity and users as and when required and
responding to real rather than projected requirements. Even better, because cloud computing follows a
utility model in which service costs are based on actual consumption, you only pay for what you use.
Customers benefit from greater elasticity of resources, without paying a premium for large scale.
6. Quick and easy implementation
Without the need to purchase hardware, software licenses or implementation services, a company can get
its cloud-computing arrangement off the ground in minutes. This also helps smaller businesses compete:
historically, there has been a huge disparity between the IT resources available to small businesses and to
enterprises, but cloud computing has made it possible for smaller companies to compete on an even playing
field with much bigger competitors. Renting IT services instead of investing in hardware and software
makes them much more affordable, and means that capital can instead be used for other vital projects.
Providers like ThinkGrid take enterprise technology and offer SMBs services that would otherwise cost
hundreds of thousands of pounds for a low monthly fee.
7. Quality of service
Your selected vendor should offer 24/7 customer support and an immediate response to emergency
situations.
8. Guaranteed uptime and SLAs
Always ask a prospective provider about reliability and guaranteed service levels to ensure your
applications and/or services are always online and accessible.
9. Anywhere Access
Cloud-based IT services let you access your applications and data securely from any location via an
internet connection. It's easier to collaborate too; with both the application and the data stored in the
cloud, multiple users can work together on the same project, share calendars and contacts, etc. It has been
pointed out that if your internet connection fails, you will not be able to access your data. However, due to
the anywhere-access nature of the cloud, users can simply connect from a different location; if your
office connection fails and you have no redundancy, you can access your data from home or the nearest
overheads, meet new working regulations and keep your staff happy.
10. Technical Support
A good cloud computing provider will offer round-the-clock technical support. ThinkGrid customers, for
instance, are assigned one of its support pods, and all subsequent contact is then handled by the same
small group of skilled engineers, who are available 24/7. This type of support model allows a provider to
build a better understanding of your business requirements, effectively becoming an extension of your
team.
11. Disaster recovery / backup
Recent research has indicated that around 90% of businesses do not have adequate disaster recovery or
business continuity plans, leaving them vulnerable to any disruptions that might occur. Providers like
ThinkGrid can provide an array of disaster recovery services, from cloud backup (allowing you to store
important files from your desktop or office network within their data centres) to having ready-to-go
desktops and services in case your business is hit by problems. Hosted Desktops (or Hosted VDI) from
ThinkGrid, for example, mean you don't have to worry about data backup or disaster
recovery, as this is taken care of as part of the service. Files are stored twice at different remote locations
to ensure that there's always a copy available 24 hours a day, 7 days per week.

Concerned about security?


Many companies that are considering adopting cloud computing raise concerns over the security of data
being stored and accessed via the internet. What a lot of people don't realise is that good vendors adhere
to strict privacy policies and sophisticated security measures, with data encryption being one example of
this. Companies can choose to encrypt data before even storing it on a third-party provider's servers. As a
result, many cloud-computing vendors offer greater data security and confidentiality than companies that
choose to store their data in-house. However, not all vendors will offer the same level of security. It is
recommended that anyone with concerns over security and access should research vendors' policies before
using their services. Technology analyst and consulting firm Gartner lists seven security issues to bear in
mind when considering a particular vendor's services:
1. Privileged user access: enquire about who has access to data and about the hiring and
management of such administrators.
2. Regulatory compliance: make sure a vendor is willing to undergo external audits and/or
security certifications.
3. Data location: ask if a provider allows for any control over the location of data.
4. Data segregation: make sure that encryption is available at all stages and that these "encryption
schemes were designed and tested by experienced professionals".
5. Recovery: find out what will happen to data in the case of a disaster; do they offer complete
restoration and, if so, how long would it take?
6. Investigative support: inquire whether a vendor has the ability to investigate any inappropriate
or illegal activity.
7. Long-term viability: ask what will happen to data if the company goes out of business; how will
data be returned, and in what format?
Generally speaking, however, security is usually improved by keeping data in one centralised location. In
high-security data centres like those used by ThinkGrid, security is typically as good as or better than in
traditional systems, in part because providers are able to devote resources to solving security issues that
many customers cannot afford to address.
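To illustrate the client-side encryption mentioned above (and Gartner's data segregation point), here is a minimal Python sketch using the third-party "cryptography" package; the upload call is a hypothetical stand-in for whatever storage API a provider actually exposes.

from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # keep this key in-house; never send it to the provider
cipher = Fernet(key)

document = b"Quarterly financials - confidential"
token = cipher.encrypt(document)   # only ciphertext ever leaves your network

# upload_to_provider(token)        # hypothetical call to the provider's storage API

# Later, after fetching the token back from the provider:
assert cipher.decrypt(token) == document

Because the provider only ever holds ciphertext, a breach on its side exposes nothing readable without the key kept in-house.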


Chapter 2
Web Application
In earlier computing models, e.g. in client-server, the load for the application was shared between code on
the server and code installed on each client locally. In other words, an application had its own client
program which served as its user interface and had to be separately installed on each user's personal
computer. An upgrade to the server-side code of the application would typically also require an upgrade to
the client-side code installed on each user workstation, adding to the support cost and
decreasing productivity.
A web-based application is a software package that can be accessed through a web browser. The
software and database reside on a central server rather than being installed on the desktop system, and are
accessed over a network.
Web-based applications are a powerful way to take advantage of today's technology to enhance your
organization's productivity and efficiency. A web-based application gives you the opportunity to access your
business information from anywhere in the world at any time. It also helps you save time and money
and improves interactivity with your customers and partners.
Web-based applications allow your administration staff to work from any location and sales staff to access
information remotely 24 hours a day, 7 days a week. With a computer connected to the Internet, a web
browser and the right user name and password, you can access the systems from any location. Web-based
applications are easy to use and can be implemented without interrupting your existing work process.
Whether you need a content-managed solution or an e-commerce system, a customized web application
can be developed to fulfill your business requirements.
The Pros and Cons of Cloud Service Development
Why would you choose to develop new applications using the cloud services model? There are several
good reasons to do so, and a few reasons to be, perhaps, a bit more cautious.
Advantages of Cloud Development
One of the underlying advantages of cloud development is that of economy of scale. By taking advantage
of the infrastructure provided by a cloud computing vendor, a developer can offer better, cheaper, and
more reliable applications than is possible within a single enterprise. The application can utilize the full
resources of the cloud, if needed, without requiring a company to invest in similar physical resources.
Speaking of cost, because cloud services follow the one-to-many model, cost is significantly reduced over
individual desktop program deployment. Instead of purchasing or licensing physical copies of software
programs (one for each desktop), cloud applications are typically rented, priced on a per-user basis.
It's more of a subscription model than an asset purchase (and subsequent depreciation) model, which
means there's less up-front investment and a more predictable monthly expense stream. IT departments
like cloud applications because all management activities are managed from a central location rather than
from individual sites or workstations.
This enables IT staff to access applications remotely via the web. There's also the advantage of quickly
outfitting users with the software they need (known as rapid provisioning), and adding more computing
resources as more users tax the system (automatic scaling). When a company needs more storage space or
bandwidth, it can just add another virtual server from the cloud. It's a lot easier than purchasing,
installing, and configuring a new server in its data center.

For developers, it's also easier to upgrade a cloud application than with traditional desktop software.
Application features can be quickly and easily updated by upgrading the centralized application, instead of
manually upgrading individual applications located on each and every desktop PC in the organization.
With a cloud service, a single change affects every user running the application, which greatly reduces the
developer's workload.
Disadvantages of Cloud Development
Perhaps the biggest perceived disadvantage of cloud development is the same one that plagues all web-based applications: is it secure? Web-based applications have long been considered potential security
risks. For this reason, many businesses prefer to keep their applications, data, and IT operations under
their own control.
That said, there have been few instances of data loss with cloud-hosted applications and storage. It could
even be argued that a large cloud hosting operation is likely to have better data security and redundancy
tools than the average enterprise. In any case, however, even the perceived security danger from hosting
critical data and services offsite might discourage some companies from going this route.
Another potential disadvantage is what happens if the cloud computing host goes offline. Although most
companies say this isn't possible, it has happened. Amazon's EC2 service suffered a massive outage on
February 15, 2008, that wiped out some customer application data. (The outage was caused by a software
deployment that erroneously terminated an unknown number of user instances.) For clients expecting a
safe and secure platform, having that platform go down and your data disappear is a somewhat rude
awakening. And if a company relies on a third-party cloud platform to host all of its data with no other
physical backup, that data can be at risk.
Types of Cloud Service Development
The concept of cloud services development encompasses several different types of development. Let's
look at the different ways a company can use cloud computing to develop its own business applications.
Software as a Service
Software as a service, or SaaS, is probably the most common type of cloud service development. With
SaaS, a single application is delivered to thousands of users from the vendor's servers. Customers don't
pay for owning the software; rather, they pay for using it. Users access an application via an API
accessible over the web. Each organization served by the vendor is called a tenant, and this type of
arrangement is called a multitenant architecture. The vendor's servers are virtually partitioned so that each
organization works with a customized virtual application instance. For customers, SaaS requires no
upfront investment in servers or software licensing. For the application developer, there is only one
application to maintain for multiple clients.
Many different types of companies are developing applications using the SaaS model. Perhaps the
best-known SaaS applications are those offered by Google to its consumer base.
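A toy Python sketch of the multitenant idea: one shared application instance, with every read and write scoped to the calling tenant. The class and tenant names are invented for illustration.

from collections import defaultdict

class MultiTenantStore:
    """One shared application instance; each tenant gets a logical partition."""

    def __init__(self):
        self._data = defaultdict(dict)

    def put(self, tenant: str, key: str, value: str) -> None:
        self._data[tenant][key] = value

    def get(self, tenant: str, key: str):
        # A tenant can only ever see its own partition.
        return self._data[tenant].get(key)

store = MultiTenantStore()
store.put("acme-corp", "theme", "dark")
store.put("globex", "theme", "light")
print(store.get("acme-corp", "theme"))  # dark
print(store.get("globex", "theme"))     # light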

Platform as a Service
In this variation of SaaS, the development environment is offered as a service. The developer uses the
building blocks of the vendor's development environment to create his own custom application. It's
kind of like creating an application using Legos: building the app is made easier by the use of these
predefined blocks of code, even if the resulting app is somewhat constrained by the types of code blocks
available.
Web Services
A web service is an application that operates over a network, typically the Internet. Most typically,
a web service is an API that can be accessed over the Internet. The service is then executed on a remote
system that hosts the requested services. This type of web API lets developers exploit shared functionality
over the Internet, rather than deliver their own full-blown applications. The result is a customized
web-based application where a large chunk of that application is delivered by a third party, thus easing
development and bandwidth demands for the custom program.
A good example of web services are the mashups created by users of the Google Maps API. With these
custom apps, the data that feeds the map is provided by the developer, while the engine that creates the
map itself is provided by Google. The developer doesn't have to code or serve a map
application; all he has to do is hook into Google's web API.
As you might suspect, the advantages of web services include faster (and lower-cost) application
development, leaner applications, and reduced storage and bandwidth demands. In essence, web services
keep developers from having to reinvent the wheel every time they develop a new application. By reusing
code from the web services provider, they get a jump-start on the development of their own applications.
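A minimal sketch of consuming a web service from Python, using the third-party requests package. The endpoint URL and the shape of its JSON response are hypothetical; a real mashup would call the provider's documented API (such as the Google Maps API mentioned above).

import requests  # pip install requests

def geocode(address: str):
    """Delegate geocoding to a remote web service instead of building it yourself."""
    resp = requests.get(
        "https://api.example.com/geocode",  # hypothetical web service endpoint
        params={"q": address},
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()                      # assumed shape: {"lat": ..., "lng": ...}
    return body["lat"], body["lng"]

# The heavy lifting stays with the service provider; you only supply the data.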
On-Demand Computing
As the name implies, on-demand computing packages computer resources (processing, storage, and so
forth) as a metered service similar to that of a public utility. In this model, customers pay for as much or as
little processing and storage as they need. Companies that have large demand peaks followed by much
lower normal usage periods particularly benefit from utility computing. The company pays more for their
peak usage, of course, but their bills rapidly decline when the peak
ends and normal usage patterns resume. Clients of on-demand
computing services essentially use these services as offsite virtual
servers. Instead of investing in their own physical infrastructure, a
company operates on a pay-as-you-go plan with a cloud services
provider.
On-demand computing itself is not a new concept, but has
acquired new life thanks to cloud computing. In previous years,
on-demand computing was provided from a single server via some
sort of time-sharing arrangement.
Today, the service is based on large grids of computers operating
as a single cloud.
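The peak-versus-normal economics can be made concrete with a little arithmetic. All figures below are invented for illustration:

# A company needs 50 servers for 5 peak days a month, but only 10 the other 25 days.
PEAK_SERVERS, NORMAL_SERVERS = 50, 10
PEAK_DAYS, NORMAL_DAYS = 5, 25
METERED_RATE = 2.00        # assumed on-demand cost per server-day
OWNED_MONTHLY = 55.00      # assumed amortized monthly cost per owned server

metered = METERED_RATE * (PEAK_SERVERS * PEAK_DAYS + NORMAL_SERVERS * NORMAL_DAYS)
owned = OWNED_MONTHLY * PEAK_SERVERS   # in-house, you must own enough for the peak

print(f"On-demand: ${metered:.2f}")    # $1000.00
print(f"Owned:     ${owned:.2f}")      # $2750.00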
Discovering Cloud Services Development Services and Tools
As you're aware, cloud computing is at an early stage of its development. This can be seen by observing
the large number of small and start-up companies offering cloud development tools. In a more established
industry, the smaller players eventually fall by the wayside as larger companies take center stage.


That said, cloud services development services and tools are offered by a variety of companies, both large
and small. The most basic offerings provide cloud-based hosting for applications developed from scratch.
The more fully featured offerings include development tools and pre-built applications that
developers can use as the building blocks for their own unique web-based applications. So let's settle back
and take a look at who is offering what in terms of cloud service development. It's an interesting mix of
companies and services.
Amazon
That's right: Amazon, one of the largest retailers on the Internet, is also one of the primary providers of
cloud development services. Think of it this way: Amazon has spent a lot of time and money setting up a
multitude of servers to service its popular website, and is making those vast hardware resources available
for all developers to use.
Google App Engine
Google is a leader in web-based applications, so it's not surprising that the company also offers cloud
development services. These services come in the form of the Google App Engine, which enables
developers to build their own web applications utilizing the same infrastructure that powers Google's
own applications.
The Google App Engine provides a fully integrated application environment. Using Google's
development tools and computing cloud, App Engine applications are easy to build, easy to maintain, and
easy to scale.
IBM
It's not surprising, given the company's strength in enterprise-level computer hardware, that IBM is
offering a cloud computing solution. The company is targeting small- and medium-sized businesses with a
suite of cloud-based on-demand services via its Blue Cloud initiative. Blue Cloud is a series of cloud
computing offerings that enables enterprises to distribute their computing needs across a globally
accessible resource grid.
One such offering is the Express Advantage suite, which includes data backup and recovery, email
continuity and archiving, and data security functionality, some of the more data-intensive processes
handled by a typical IT department.


Chapter 3

Cloud Server Virtualisation


Virtualization abstracts the underlying physical structure of various technologies. Virtualization, in
computing, is the creation of a virtual (rather than actual) version of something, such as a hardware
platform, an operating system, a storage device or network resources.
Server virtualization
Creates multiple isolated environments
Allows multiple OSs and workloads to run on the same physical hardware
Solves the problem of tight coupling between OSs and hardware
[Figure: The Traditional Server Concept vs. The Virtual Server Concept]



Virtual Machines
Virtual machines provide:
Hardware independence: the guest VM sees the same hardware regardless of the host hardware
Isolation: the VM's operating system is isolated from the host operating system
Encapsulation: the entire VM is encapsulated into a single file

Benefits of Virtualization

Simplified administration
Hardware independence / portability
Increased hardware utilization
Server consolidation
Decreased provisioning times
Improved security

Common use cases:

Software development
Testing / quality assurance
Product evaluations / demonstrations
Training
Disaster recovery

Server Consolidation



Top Reasons for Virtualization

Reduce physical infrastructure cost (e.g. power and cooling).
Minimize lost revenue due to downtime.
Reduce energy consumption:
- Server consolidation: achieve high consolidation rates on a secure, reliable virtualization platform and safely improve utilization rates (up to 80% energy reduction).
- Dynamic server and storage migration: power off unneeded servers in real time and migrate storage dynamically (up to 25% energy reduction).
- Desktop virtualization: host desktop PCs in the datacenter, use thin clients, double the refresh cycle and reduce storage for similar desktop images (up to 70% energy reduction).



Virtualisation Software Available Today

VMware

VMware released ESX and GSX 1.0 in 2001, and VirtualCenter in 2003.
Has the most experience
Is the farthest along
Very mature product suite
Focus is on integrating IT process automation around virtualization
Citrix

Citrix acquired XenSource on August 15th, 2007, forming the basis of its XenServer product


Has working low cost server virtualization solution
Focus is on client virtualization
Microsoft

Microsoft Hyper-V (formerly Windows Server Virtualization)


A standalone version was released in October 2008; a real solution (one with HA) has been out since August 2009.
What is Available from VMware

VMware's vSphere

Key features:

Market leader, certified on over 450 servers
Virtualizes 54 guest OSs
Server virtualization solution with HA and load balancing
Enhanced vMotion and Storage vMotion
Memory overcommit and transparent page sharing
Patch management
Built-in fault tolerance
FC, iSCSI and NFS supported
Power management
Distributed switch
Storage management support

Modernizing the Desktop: Virtual Desktop Infrastructure



The Disadvantages of Virtualization

Virtualization may not work well for:

Resource-intensive applications (VMs may have RAM/CPU/SMP limitations)
Performance testing
Hardware compatibility testing
Applications with specific hardware requirements or custom hardware devices

Some hardware architectures or features are impossible to virtualize:

Certain registers or state not exposed
Unusual devices and device control
Clocks, time, and real-time behavior
Server Virtualization Techniques
There are three ways to create virtual servers: full virtualization, para-virtualization and OS-level
virtualization. They all share a few common traits. The physical server is called the host. The virtual servers
are called guests. The virtual servers behave like physical machines. Each system uses a different approach to
allocate physical server resources to virtual server needs.
Full virtualization - uses a special kind of software called a hypervisor. The hypervisor interacts directly with
the physical server's CPU and disk space. It serves as a platform for the virtual servers' operating systems. The
hypervisor keeps each virtual server completely independent and unaware of the other virtual servers running
on the physical machine. Each guest server runs on its own OS -- you can even have one guest running
on Linux and another on Windows.
The hypervisor monitors the physical server's resources. As virtual servers run applications, the hypervisor
relays resources from the physical machine to the appropriate virtual server. Hypervisors have their own
processing needs, which means that the physical server must reserve some processing power and resources to
run the hypervisor application. This can impact overall server performance and slow down applications.
Para-virtualization - This approach is a little different. Unlike the full virtualization technique, the guest servers
in a para-virtualization system are aware of one another. A para-virtualization hypervisor doesn't need as
much processing power to manage the guest operating systems, because each OS is already aware of the
demands the other operating systems are placing on the physical server. The entire system works together as a
cohesive unit.
OS-level virtualization - This approach doesn't use a hypervisor at all. Instead, the virtualization capability is part
of the host OS, which performs all the functions of a fully virtualized hypervisor. The biggest limitation of this
approach is that all the guest servers must run the same OS. Each virtual server remains independent from all
the others, but you can't mix and match operating systems among them. Because all the guest operating
systems must be the same, this is called a homogeneous environment.
Which method is best? That largely depends on the network administrator's needs. If the administrator's
physical servers all run on the same operating system, then an OS-level approach might work best. OS-level
systems tend to be faster and more efficient than other methods. On the other hand, if the administrator is
running servers on several different operating systems, para-virtualization might be a better choice. One
potential drawback for para-virtualization systems is support -- the technique is relatively new and only a few
companies offer para-virtualization software. More companies support full virtualization, but interest in
para-virtualization is growing and may replace full virtualization in time.
VPS (Virtual Private Server)
A VPS, or Virtual Private Server, is a logical segment of a physical machine set aside for the exclusive use of
a single business or other type of entity. Although a single server can run several VPS configurations, each
segment offers the same functionality that a dedicated server would provide.
What is VPS Hosting?
Most small to medium-sized businesses prefer to use web hosting services instead of maintaining a
proprietary, in-house server room for most, if not all, of their computing needs. Instead of making do with
outdated machines or dealing with expensive upgrades, out-sourced hosting allows both individuals and
organizations to have the use of state-of-the-art equipment with 24/7 support for a mere fraction of the cost.
In the past, interested clients had two choices in the hosting
realm: shared or dedicated. Shared hosting is exactly what it
sounds like. Multiple clients use a single server to run a variety
of applications. While this works well in theory, individual
systems were often impacted when another application on the
shared server used more than its fair share of bandwidth,
storage space, or CPU cycles.
A dedicated server eliminates this problem by providing an
individual server for each client. However, this option can be
very expensive for anyone on a tight budget. Virtualization
through VPS hosting bridges the gap between shared and
dedicated hosting by providing an affordable solution to allow
clients to share a physical machine without the ability to impact
neighboring systems.
How Does a VPS Work?
To create a Virtual Private Server, hosting companies often use the following two methods to partition the
machines:
Hypervisor: Also known as a virtual machine manager, or VMM, the hypervisor manages, or supervises,
the resources of the virtual servers to allow multiple OS installations to run on the same physical machine.
Popular hypervisor virtualization solutions include VMware ESX, Microsoft Hyper-V, Xen, and KVM.

Container: This mode is also known as operating system-level virtualization or a kernel-based system. In
this method, separate containers, or user spaces, are created for each VPS. Popular container virtualization
solutions include Parallels Virtuozzo and OpenVZ.
In some cases, a Virtual Private Server is called a VDS, or virtual dedicated server. However, the terms refer
to the same concept, where one physical machine is configured to function like multiple servers, each
dedicated to meeting a customer's individual needs with the same level of privacy and configuration options
as a true independent server.


VPS is like Your Own Server


A Virtual Private Server makes system provisioning quick and easy. When the need arises, simply let your
VPS hosting service know that you need to expand or contract the resources allocated for your system. In
most cases, the adjustment can be made immediately. Some VPS hosting providers have self-service features
that allow you to make these adjustments yourself for the fastest results possible (a sketch of such a
self-service call follows the list below).
Resources that can be expanded or contracted on demand include:

RAM / Memory
CPU
Disk Space / Hard Disk
Bandwidth
IP Addresses
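As mentioned above the list, a self-service resize might look like the following Python sketch. The endpoint, request fields and token are hypothetical; consult your provider's actual API documentation.

import requests  # pip install requests

def resize_vps(server_id: str, ram_mb: int, cpus: int, disk_gb: int) -> None:
    """Ask the (hypothetical) hosting API to resize a VPS on demand."""
    resp = requests.post(
        f"https://api.example-host.com/v1/servers/{server_id}/resize",  # hypothetical
        json={"ram_mb": ram_mb, "cpus": cpus, "disk_gb": disk_gb},
        headers={"Authorization": "Bearer YOUR_API_TOKEN"},
        timeout=30,
    )
    resp.raise_for_status()

# Example: grow to 4 GB RAM, 2 vCPUs and 80 GB disk ahead of a busy period.
# resize_vps("srv-123", ram_mb=4096, cpus=2, disk_gb=80)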
VPS (Virtual Private Server) and the Cloud
Cloud hosting involves spreading resources across multiple servers at one or more remote locations. The user
doesn't know where or how the information is stored but is fully aware that the system or stored data is easily
accessible at any time. Because the typical client is sharing large banks of servers with other customers, the
cloud is inherently virtualized, just like a VPS.
Using a VPS now will help ease your transition to cloud hosting services in the future, as this technology
matures, because your logical processes will already be separated from the physical hardware.


Chapter 4
Installing & Configuring Virtual Server
Before you left-click that mouse to go to that other work-related page, wait a few seconds. While there is a
ton of hyped-up, blown-out and super-hyperventilated information out there about how the cloud makes your
life better, reduces your workload and ultimately makes your coffee and butters your toast, not much is said
about how the cloud can help your company save or make money.
Before starting the explanation, first let's say that there is no such thing as a free lunch, and no one gets
something for nothing. The cloud, like any other technology or methodology in IT, requires CAPEX
investment in order to be effectively utilized and have the desired benefits, and ultimately drive OPEX costs
down over time (within the ROI horizon) or provide efficiencies that increase revenues. No hocus-pocus, no
magic; it takes careful thought and some hard work, but, yes Virginia, revenue benefits and cost savings do
exist in the cloud. Of course, you must calculate savings after all implementation expenses are accounted
for: things like hardware and software acquisition costs, personnel and space requirements, training, etc.
Secondly, this discussion is framed around an internal, private cloud only (though many of the same
characteristics exist for other types of clouds), as there isn't space to explicitly differentiate here.
Third, the costs are compared for a relatively mature traditional datacenter against the same data center
with a cloud infrastructure implemented and running in a steady state. A traditional datacenter, in my view, is
partially (<30%) virtualized with little to no automation or orchestration and is moderately managed from a
holistic perspective.
Now that we're all straight, the rest of this post first describes a couple of scenarios as they play out in a
traditional datacenter, and then explains how they would be done in a cloud infrastructure.
Time to Market/Value
1. Traditional:
1. A business owner or LOB owner decides they need an application built that will provide a new revenue
stream to the organization so they describe, to a Business Analyst, what they want the application to do in
the form of business requirements.
2. The Business Analyst then takes those requirements and translates them to functional requirements
(iterating with the Business as to end results required) and then uses those as the basis for the technical
requirements which describe the supporting hardware and software (COTS or purpose built).
3. A Technical Analyst or developer uses the technical requirements and produces a series of hardware
and software specifications for the procurement of the hardware or software resources required to support
the requested application.
4. Once completed, a cost analysis is done to determine the acquisition costs of the hardware, any COTS
software, an estimate of in-house developed software, testing and QA of the application, and the eventual
rollout.
5. The business analyst then takes that cost analysis and creates an ROI/TCO business case which the
Business owner or LOB owner then takes to Senior Management to get the application approved.
6. Upon approval, the application is assigned a project number and the entire package is turned over to
Procurement who will then write and farm out an RFP, or, check an approved vendor list, or otherwise go
through their processes in order to acquire the hardware and software resources.

7. Approximately 8 to 16 weeks from the beginning of the process, the equipment is on the dock and
shortly thereafter racked and stacked waiting for the Developer group to begin work on the application.
2. Cloud:
1. A business owner or LOB owner decides they need an application built that will provide a new revenue
stream to the organization so they describe, to a Business Analyst, what they want the application to do
in the form of business requirements.
2. The Business Analyst then takes those requirements and translates them to functional requirements
(iterating with the Business as to end results required) and then uses those as the basis for the technical
requirements which describe the supporting hardware and software.
3. A Technical Analyst or developer uses the technical requirements and produces a series of hardware
and software configurations required to support the requested application.
4. Once completed, a cost analysis is done to determine the start-up and monthly utilization costs
(chargeback details), an estimate of any in-house developed software, testing/QA, and the eventual rollout
of the application.
5. The business analyst then takes that cost analysis and creates an ROI/TCO business case which the
business owner or LOB owner then takes to Senior Management to get the application approved.
6. Upon approval notification, the Developer group accesses a self-service portal where they select the
required resources from a Service Catalog. The resources are ready within a few hours.
7. Approximately 3 to 6 weeks from the beginning of the process (up to 10 weeks earlier than a traditional
datacenter), the computing resources are waiting for the Developer group to begin work on the application.
3. Savings/Benefit:
1. If the potential revenue from the proposed application is $250,000 a week (an arbitrary, round number),
then having that application ready up to 10 weeks earlier means an additional $2,500,000 in revenue.
NOTE: The greater the disparity of resource availability, traditional versus cloud infrastructure, the greater
the potential benefit.
Hardware Acquisition
4. Traditional:
1. A business owner or LOB owner decides they need an application built that will provide a new revenue
stream to the organization so they describe, to a Business Analyst, what they want the application to do in
the form of business requirements.
2. The Business Analyst then takes those requirements and translates them to functional requirements
(iterating with the Business as to end results required) and then uses those as the basis for the technical
requirements which describe the supporting hardware and software (COTS or purpose built).
3. A Technical Analyst or developer uses the technical requirements and produces a series of hardware
and software specifications for the procurement of the hardware or software resources required to support
the requested application. The hardware specifications are based on the predicted PEAK load of the
application plus a margin of safety (overhead) to ensure application stability over time.
4. That safety margin could be between 15% and 30%, which effectively means that the procurement of the
equipment is always aligned to the worst case scenario (peak processing/peak bandwidth/peak I/O) so for
every application, the most expensive hardware configuration has to be specified.


5. Cloud:
1. A business owner or LOB owner decides they need an application built that will provide a new revenue
stream to the organization so they describe, to a Business Analyst, what they want the application to do in the
form of business requirements.
2. The Business Analyst then takes those requirements and translates them to functional requirements
(iterating with the Business as to end results required) and then uses those as the basis for the technical
requirements which describe the supporting hardware and software (COTS or purpose built).
3. A Technical Analyst or developer uses the technical requirements and produces a series of hardware and
software configurations required to support the requested application.
4. The required configurations for the cloud infrastructure compute resources are documented and given to
the developer group.
6. Savings/Benefit:
Because the hardware resources within the cloud infrastructure are abstracted and managed apart from the
actual hardware, equipment specifications no longer drive procurement decisions.
The standard becomes the lowest-cost, highest-quality commodity class of server versus the individually
specified, purpose-built (highest-cost) class of server, thus saving approximately 15%-50% of ongoing server
hardware costs.
NOTE: I mentioned this earlier but think it needs to be said again: savings become real after all cloud
infrastructure implementation costs are recovered.
These are just two examples of where an internal cloud can specifically help an organization derive direct
revenue benefit or cost savings (there are many more). But, as always, it depends on your environment, what
you want to do, how much you want to spend, and how long you want to take to get there.
Cloud computing server architecture: Designing for cloud
A server is one of those industry terms whose definition is broadly understood yet at the same time
ambiguous. Yes, "server" means a computing platform on which software is hosted and from which client
access is provided. However, the generalizations end there. Not only are there many different vendors that
manufacture servers, but there are also a variety of server architectures, each with its own requirements. A
mail server, a content server, a Web server and a transaction server might all need a different mixture of
compute, network and storage resources. The question for many providers is: What does a cloud computing
server need?
The answer will depend on the target market for the cloud service and how that market is reflected in the
applications users will run. Servers provide four things: compute power from microprocessor chips, memory
for application execution,
I/O access for information storage and retrieval, and network access for connecting to other resources. Any
given application will likely consume each of these resources to varying degrees, meaning applications can be
classified by their resource needs. That classification can be combined with cloud business plans to yield a
model for optimum cloud computing server architecture.
For a starting point in cloud computing server architectures, it's useful to consider the Facebook Open
Compute project's framework. Facebook's social networking service is a fairly typical large-scale Web/cloud
application, and so its specific capabilities are a guide for similar applications. We'll also discuss how these
capabilities would change for other cloud applications.

Cloud computing server needs may not align with Facebook Open Compute
The Open Compute baseline is a two-socket design that allows up to 12 cores per socket in the Version 2.x
designs. Memory capacity depends on the dual inline memory modules (DIMMs) used, but up to 256 GB is
practical. The design uses a taller tower for blades to allow for better cooling with large, lower-powered fans.
Standard serial advanced technology attachment (SATA) interfaces are provided for storage, and Gigabit
Ethernet is used for the network interface. Facebook and the Open Compute project claim a 24% cost of
ownership advantage over traditional blade servers. Backup power is provided by 48-volt battery systems,
familiar to those who have been building to the telco Network Equipment Building System (NEBS) standard.
The Open Compute reference has a high CPU density, which is why a higher tower and good fans are
important. However, many cloud applications will not benefit from such a high CPU density, for several
reasons:
Some cloud providers may not want to concentrate too many users, applications or virtual machines onto a
single cloud computing server for reliability reasons.
The applications running on a cloud computing server may be constrained by the available memory or by
disk access, and the full potential of the CPUs might not be realized.
The applications might be constrained by network performance and similarly be unable to fully utilize the
CPUs/cores that could be installed.
If any of these constraints apply, then it may be unnecessary to consider the higher cooling potential of the
Open Compute design, and shorter towers may be easier to install to support a higher overall density of cloud
computing servers.
How storage I/O affects cloud computing server needs
The next consideration for cloud computing server architecture is storage. Web applications typically don't
require a lot of storage and don't typically make large numbers of storage I/O accesses per second. That's
important because applications that are waiting on storage I/O are holding memory capacity while they wait.
Consider using larger memory configurations for cloud applications that are more likely to use storage I/O
frequently to avoid having to page the application in and out of memory. Also, it may be difficult to justify
the maximum number of CPUs/cores for applications that do frequent storage I/O, as CPU usage is normally
minimal when an application is waiting for I/O to complete.
A specific storage issue cloud operators may have with the Open Compute design is the storage interface. Web
applications are not heavy users of disk I/O, and SATA is best suited for dedicated local server access rather
than storage pool access.
Additionally, it is likely that a Fibre Channel interface would be preferable to SATA for applications that
demand more data storage than typical Web servers -- including many of the Platforms as a Service (PaaS)
offerings that will be tightly coupled with enterprise IT in hybrid clouds. Software as a Service (SaaS)
providers must examine the storage usage of their applications to determine whether more sophisticated
storage interfaces are justified.
Cloud computing server guidelines to consider
Here are some summary observations for cloud providers looking for quick guidance on cloud computing
server architecture:
You will need more sophisticated storage interfaces and more installed memory, but likely fewer
CPUs/cores for applications that do considerable storage I/O. This means that business intelligence (BI),
report generation and other applications that routinely examine many data records based on a single user
request will deviate from the Open Compute model. Cloud providers may also need more memory in these
applications to limit application paging overhead.
Cloud providers will need more CPUs/cores and memory for applications that use little storage --
particularly simple Web applications -- because only memory and CPU cores will limit the number of
users that can be served in these applications.
Pricing models that prevail for Infrastructure as a Service (IaaS) offerings tend to discourage applications
with high levels of storage, so most IaaS services can likely be hosted on Open Compute model servers
with high efficiency.
PaaS services are the most difficult to map to optimum server configurations, due to potentially significant
variations in how the servers will utilize memory, CPU and other server resources.
For SaaS clouds, the specific nature of the application will determine which server resources are most
used and which can be constrained without affecting performance.
The gold standard for server design is benchmarking. A typical mix of cloud applications running on a
maximum-sized, high-performance configuration can be analyzed for resource utilization. The goal is to avoid
having one resource type -- CPU capacity, for example -- become exhausted when other resources are still
plentiful. This wastes resources and power, lowering your overall return on investment (ROI). By testing
applications where possible and carefully monitoring resource utilization to make adjustments, cloud
providers can sustain the best ROI on cloud computing servers and the lowest power consumption. That's key
in meeting competitive price points while maximizing profits.
How can you use the cloud?
The cloud makes it possible for you to access your information from anywhere at any time. While a traditional
computer setup requires you to be in the same location as your data storage device, the cloud removes the need
for you to be in the same physical location as the hardware that stores your data. Your cloud provider can both
own and house the hardware and software necessary to run your home or
business applications. This is especially helpful for businesses that cannot afford the same amount of hardware
and storage space as a bigger company. Small companies can store their information in the cloud, removing
the cost of purchasing and storing memory devices. Additionally, because you only need to buy the amount of
storage space you will use, a business can purchase more space or reduce their subscription as their business
grows or as they find they need less storage space. One requirement is that you need to have an internet
connection in order to access the cloud. This means that if you want to look at a specific document you have
housed in the cloud, you must first establish an internet connection either through a wireless or wired internet
or a mobile broadband connection. The benefit is that you can access that same document from wherever you
are with any device that can access the internet. These devices could be a desktop, laptop, tablet, or phone.
This can also help your business to function more smoothly because anyone who can connect to the internet
and your cloud can work on documents, access software, and store data. Imagine picking up your Smartphone
and downloading a .pdf document to review instead of having to stop by the office to print it or upload it to
your laptop. This is the freedom that the Cloud can provide for you or your organization.
Configuring Websites in Windows .NET Server/IIS 6.0
At times there might be situations where you need to host your ASP.NET applications from your corporate
server or your own machine. A scenario where this might be needed is when you have large amounts of data
on your Web site and you are concerned about the big bucks your hosting provider will charge you for disk
space, bandwidth and database maintenance. Internet Information Services 6 (IIS 6) can be used for hosting
your Web site. IIS 6 is a powerful platform for hosting Web sites. Creating and configuring Web sites and
Virtual Directories using IIS are as easy as 1-2-3. In this section we will see how we can create a Website
using IIS 6.0 and configure it.
Creating a Website
The first thing you need before creating a Web site using IIS 6.0 is a unique IP address that identifies your
computer on the network. This address takes the form of a string of four numbers separated by periods (.). For
your site to be up and running you also need a connection to the Internet. You need to lease a line from an
Internet Service Provider (ISP) or a telephone company. When you open IIS Manager in Administrative Tools,
select Web Sites in the console tree, right-click on the Default Web Site and open its properties, you will
find that the IP address for the Default Web site is All Unassigned. This means any IP address not specifically
assigned to another Web site on the machine opens the Default Web site instead. A typical use for the Default
Web site is to display general information like a corporate logo and contact information.
Let's assume that we will use the IP address 169.16.13.211 for creating Startvbdotnet.com and
C:\Startvbdotnet is the folder where the homepage for this site is located. To create the Start vbdotnet Web
site, right-click on the Web Sites node and select New->Web Site to start the Web Site Creation Wizard as
shown in the images below.
Click next on the Web site creation wizard dialog and type a description for the site as shown in the image
below.
After typing the description click next to open the dialog where you need to specify the IP address and port
number for your Web site. As mentioned above, type 169.16.13.211 in the IP address textbox and 80 in the
TCP port textbox. The dialog looks like the image below.
Click Next and specify C:\Startvbdotnet as the home
directory for the site. Notice the checkbox that says "Allow
anonymous access to this Web site". By default, it is
checked, which means the Web site which we are creating
is accessible by general public on the Internet. If you are
creating an intranet site which will be used only by
authenticated users then you need to uncheck this
checkbox. The image below displays that.
Click Next to get to the Web Site Access
Permissions dialog. By default, the Read and Run scripts checkboxes are checked, which means that your Web
site will run scripts such as ASP but is otherwise a read-only Web site where users can't make
changes to it. If you want users to download
content from your Web site, modify it and upload the modified content then you need to check the Write
checkbox. The image below displays that.
Click Next and then Finish to create the new Web site. The
image below displays the new Web site which we created in
IIS.
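For reference, IIS 6.0 also ships with command-line administration scripts that can perform the same task. A minimal sketch using the iisweb.vbs script, reusing the IP address, port and folder from the example above (run from a command prompt on the server):

   iisweb /create C:\Startvbdotnet "Startvbdotnet" /i 169.16.13.211 /b 80

This creates and starts a site with the given home directory, description, IP address and TCP port, broadly matching the wizard steps above.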
Virtual Directories
A virtual directory is a friendly name, or
alias, either for a physical directory on your
server hard drive that does not reside in the
home directory, or for the home directory on
another computer. Because an alias is
usually shorter in length than the path of the
physical directory, it is more convenient for
users to type. The use of aliases is also
secure because users do not know where
your files are physically located on the
server and therefore cannot use that
information to modify your files. Aliases
also make it easier for you to move
directories in your site. Rather than
changing the URL for the directory, you
change the mapping between the alias and
the physical location of the directory.
You must create virtual directories if your Web site contains files that are located in a directory other than the
home directory, or on another computer's hard drive. To use a directory on another computer, you must specify
the directory's Universal Naming Convention (UNC) name, and provide a user name and password for access
rights.
Also, if you want to publish content from any directory not contained within your home directory, you must
create a virtual directory.
Creating a Virtual Directory
Let's say Start vbdotnet keeps their contacts in a folder called C:\StartvbdotnetContacts on their web server
and would like users to be able to use the URL http://169.16.13.211/contacts when they need to access contact
information. To do this we need to create a virtual directory that associates the /contacts portion of the URL,
the alias for the virtual directory, with the physical directory C:\StartvbdotnetContacts where these documents
are actually located.
To create a new virtual directory, right-click on Start vbdotnet Web site and select New->Virtual Directory to
start the Virtual Directory Creation Wizard. The images below display that.
Click Next and type the alias for the virtual directory, say, contacts as shown in the image below.
Click Next and specify the physical folder on the local server to map to this alias. The physical folder
on the server is C:\StartvbdotnetContacts. The image
below shows that.
Click Next and specify permissions for this Virtual Directory as shown in the image below.
Click next and finish the virtual directory creation wizard. The images below display the result. You can
see the new virtual directory, contacts, with a gear symbol in the IIS wizard.
When users type http://169.16.13.211/contacts in their browser they will be shown a page with contact
information for Start vbdotnet Web site. What actually happens is the content comes from a directory located
outside the Web site directory, but the address bar in the browser shows that the directory is part of the Web site.
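The same virtual directory can also be created from the command line with the iisvdir.vbs script that ships with IIS 6.0. A sketch, assuming the site and folder names used above:

   iisvdir /create "Startvbdotnet" contacts C:\StartvbdotnetContacts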
Controlling Access to Web Site
Now that we have created a Web site and a virtual directory, we will look at some of the administrative tasks that
are required to control the Web site. The settings in this section apply only to the Start vbdotnet Web site which we
created in IIS and not to all Web sites under IIS. The procedure is the same if you want to set the properties for all
Web sites. If you want to set the following properties for all Web sites under IIS then you need to right-click
on Web Sites in IIS and select Properties from the menu and follow the steps which are mentioned.
When you right-click on the Start vbdotnet Web site in IIS and select properties, the properties window that is
displayed looks like the image below.
As you might notice from the above image the dialog box
displays information as tabs, all of which are discussed
below.
Web Site Information (Web Site Tab)
By default, the Web site tab is displayed when you right-click and select properties for any of the Web sites in IIS.
The information under Web site tab is discussed below.
Web site identification
The Web site identification part displays general information
like the description of the Website, IP address and the port
number it is using.
Connections
Connection timeout
Connection timeouts are used to reduce the amount of
memory resources that are consumed by idle connections.
Time-out settings also allow you to specify how long server resources are allocated to specific tasks or clients.
The default connection timeout setting set by IIS is 120 seconds, which means that when a visitor accesses
your site and has no activity on your site for 2 minutes, the connection will be timed out.
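If you prefer to script this setting, the connection timeout is held in the ConnectionTimeout metabase property, which can be set with the adsutil.vbs script found in the Inetpub\AdminScripts folder. A sketch, assuming the site's numeric identifier is 1 (your site's identifier may differ):

   cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set W3SVC/1/ConnectionTimeout 120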
Enable HTTP Keep-Alives
Most Web browsers request that the server keep the client connection open while the server sends multiple
elements like .htm files and .gif or .jpeg files to the client. Keeping the client connection open in this way is
referred to as an HTTP Keep-Alive. Keep-Alive is an HTTP specification that improves server performance.
HTTP Keep-Alives are enabled by default in IIS.
Enable Logging
The logging feature allows you to collect information
about user activity on your site. Information such as who
has visited your site, what the visitor viewed, and when the
information was last viewed, etc., can be collected with this
feature. The default logging format is the W3C Extended
Log File Format. You can also change the logging format
based on your preferences. To change the logging format
you need to make a selection from the active log format
drop-down list.
To set how often you want your new log file to be created
click the properties button to open the Logging Properties
dialog as shown in the image below.
The Logging Properties dialog shown in the image above
allows you to record log information on an hourly, daily, weekly, or monthly basis, or based on file size. If you
select the Weekly option then a log file is created
once every week. You can also change the location of the log file on your server in the Logging Properties
dialog.
Performance (Performance Tab)
The Performance tab lets you control the performance of your Web site, such as setting the amount of bandwidth
per second and limiting the number of simultaneous connections accessing the Web site at a given time. The
dialog looks like the image below.
Bandwidth throttling
If the network or Internet connection used by our Web
server is also used by other services such as e-mail,
then we might want to limit the bandwidth used by our
Web server so that it is available for those other
services. If our Web server hosts more than one Web
site, you can individually throttle the bandwidth used
by each site. By default, bandwidth throttling is
disabled. If you want to enable it, check the checkbox
and enter the bandwidth you want in kbps.
Web site connections
Connection limits restrict the number of simultaneous
client connections to our Web site. Limiting
connections not only conserves memory but also
protects against malicious attacks designed to overload
our Web server with thousands of client requests. By
default, unlimited connections are allowed. If you want
to limit the number of connections then you need to select the "Connections limited to" radio button and enter
the number of connections you want to access your site at a given time.
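Both settings correspond to metabase properties and can be scripted as well. A sketch using adsutil.vbs as before, again assuming site identifier 1; MaxBandwidth is specified in bytes per second (here roughly 1 MB per second) and MaxConnections in simultaneous connections:

   cscript adsutil.vbs set W3SVC/1/MaxBandwidth 1048576
   cscript adsutil.vbs set W3SVC/1/MaxConnections 500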
Home Directory
The Home Directory tab in the properties dialog for the Web site is displayed below.
As you can see from the image above, the content for this Web site comes from the local path on the server. If
you want the content for this Web site to come from another
computer located on a network you need to select the radio
button which says "A share located on another computer" and
enter the computer on the network.
Redirecting
Sometimes when your site is experiencing technical difficulties
or if you are doing maintenance you need to redirect visitors to
another site or to another page informing what is going on. IIS
lets you redirect a Web site to a different file or folder on the
same machine or to an URL on the Internet. To configure
redirection you need to select the "A redirection to a URL"
radio button under the home directory and choose the
redirection option you want to use and specify the path as
shown in the image below.
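Redirection is stored in the HttpRedirect metabase property, so it too can be scripted. A sketch with adsutil.vbs, assuming site identifier 1 and a hypothetical destination URL:

   cscript adsutil.vbs set W3SVC/1/ROOT/HttpRedirect "http://www.example.com/maintenance.htm"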
Custom Errors
You can configure Internet Information Services (IIS) to send default HTTP 1.1 error messages or custom
error messages. Custom error messages can be mapped to a file name or to a URL. The image below displays
Custom Errors dialog.
You can also configure your own custom error messages. To do that, click the HTTP error that you want to
change, and then click Edit to open the Edit Custom Error Properties dialog as shown in the image below.
To configure your own custom error, in the Message Type list box, click either File to return a custom error
file or URL to direct the request to a custom error URL on the local machine.
Note that you cannot customize the following errors: 400, 403.9, 411, 414, 500, 500.11, 500.14, 500.15, 501,
503, and 505.
Documents (Documents Tab)
The Documents dialog is displayed in the image below.
Enable default content page
The enable default content page lets you designate the
default page for your Web site. You can specify names
such as index.aspx, default.aspx, login.aspx, etc. To add a
new type you need to click the Add button and add the file
which you want to be displayed to your users when they
first enter your site.
Enable document footer
The enable document footer option lets you add an HTML-formatted footer to each and every document on your site.
By default, it is disabled.
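The default-document list is held in the DefaultDoc metabase property and can be scripted in the same way. A sketch with adsutil.vbs, assuming site identifier 1; the list is comma-separated and searched in order:

   cscript adsutil.vbs set W3SVC/1/ROOT/DefaultDoc "default.aspx,index.aspx,login.aspx"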
HTTP Headers (HTTP Headers Tab)
The HTTP Headers dialog looks like the image below.
Enable content expiration
By default, this is disabled. If you enable content expiration and set a date then the content on your site expires
after the set date. If you notice from the above image, the content for Start vbdotnet is set to expire on
Tuesday, February 23, 2010 at 12 AM.
Content rating
Content rating allows you to classify your site using four predefined categories: Violence, Sex, Nudity and
Language. By default, content rating is disabled. To enable content rating, click the Edit Ratings button to
open the Content Ratings dialog as shown in the image below.
In the Content Ratings dialog, enable the checkbox which says Enable ratings for this content and select a
category under which your site falls and drag the track bar to indicate the level of the rating. You can also
include an email address for contact and set an expiration date for this content as shown in the image above.
Directory Security (Directory Security Tab)
The Directory Security dialog looks like the image below.
Authentication and access control
Authentication and access control allows us to setup
access to our site using Authentication Methods. If
you click the Edit button the Authentication Methods
dialog that is displayed looks like the image below.
By default, the enable anonymous access checkbox is checked which means that your site will be accessed by
everyone using the IUSR_COMPUTERNAME (default IIS account). If you want to enforce restrictions and
want users to be authenticated before they access your site you need to set it in this dialog.
IP address and domain name restrictions
The IP address and domain name restrictions allow us to grant or deny access to users based on their IP
address. If you click the Edit button the IP Address and Domain Name Restrictions dialog that is displayed
looks like the image below.
By default, all computers will be granted access. If you want
to deny/block a particular user or a group of computers then
you need to select the Denied access radio button and click
the Add button to open the Grant Access dialog as shown in the image below.
If you want to block a single computer enter the IP address of the machine and click OK. If you want to deny
a group of computers then select the Group of
computers radio button and enter the network
address and Subnet mask number to deny that
group. If you want to deny users based on a domain name, select the Domain name option and enter the
domain name.
Starting and Stopping Web site
You can start and stop a Web site in IIS manager.
To start a Web site, select the Web site, right-click on it and, from the menu, select Start/Stop as
shown below.
Application programming interface (API)
An application programming interface (API) is a specification intended to be used as an interface by software
components to communicate with each other. An API may include specifications for routines, data structures,
object classes, and variables. An API specification can take many forms, including an International Standard
such as POSIX or vendor documentation such as the Microsoft Windows API, or the libraries of a
programming language, e.g. Standard Template Library in C++ or Java API.
An API differs from an application binary interface (ABI) in that the former is source code based while the
latter is a binary interface. For instance POSIX is an API, while the Linux Standard Base is an ABI.
API sharing and reuse via virtual machine
Some languages like those running in a virtual machine (e.g. .NET CLI compliant languages in the Common
Language Runtime and JVM compliant languages in the Java Virtual Machine) can share APIs.
In this case the virtual machine enables language interoperation: the virtual machine acts as a common
denominator that abstracts away from the specific language by means of an intermediate byte code and its
language binding.
Hence this approach maximizes the code reuse potential for all the existing libraries and related APIs.
Web APIs
When used in the context of web development, an API is typically defined as a set of Hypertext Transfer
Protocol (HTTP) request messages, along with a definition of the structure of response messages, which is
usually in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format. While "Web
API" is virtually a synonym for web service, the recent trend (so-called Web 2.0) has been moving away from
Simple Object Access Protocol (SOAP) based services towards more direct Representational State Transfer
(REST) style communications. Web APIs allow the combination of multiple services into new applications
known as mashups.
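As an illustration of the REST style, a Web API call is often nothing more than a plain HTTP request with a structured response. A hypothetical example (the host, path and fields are invented for illustration):

   GET /api/customers/42 HTTP/1.1
   Host: api.example.com
   Accept: application/json

   HTTP/1.1 200 OK
   Content-Type: application/json

   {"id": 42, "name": "Startvbdotnet", "city": "Mumbai"}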
Chapter 5
Virtualisation
Virtualization may be a new concept to many, but it is actually an old practice, at least in the technology
world. Virtualization has been used since the 1960s to help maximize the power and potential of mainframe
computers.
So, how can a 40+ year old technology be applicable today? It's really not the same technology rising from the
ashes as much as it's a technology architecture that has found an application in today's IT infrastructure. And
its potential for improving IT operations is tremendous.
The purpose of this chapter is to provide a primer on virtualization and how it can be used by mid-sized
organizations to lower costs and improve operations (the constant goal of IT).
In today's highly decentralized IT environment, we often see a one-to-one relationship between server and
application (i.e. one application configured for one server). The dramatic drop in the cost of powerful servers
has made this practice popular. The ever-swinging pendulum of centralization and decentralization in IT
practices is now heading back to a more centralized model. This is where virtualization fits in.
So, what is virtualization? At its simplest level, virtualization allows a single computer to do the job of many
computers. The term often used is "Meta Computer."
Why is this important? Virtualization technology can save money and simplify IT operations. Virtualization
also represents a technology framework by which all IT infrastructures will run in the future.
One of the reasons for this pendulum swing is the opportunity to access
the vast amount of surplus capacity that exists in servers today.
According to industry analyst International Data Corporation (IDC),
typical server deployments show an average utilization of 10%-to-15%
of total capacity. That means that 85%-to-90% of the server processing
power is not used. Extrapolate this unused capacity out over hundreds
and thousands of servers across business and industry and you get a
sense of the magnitude of the opportunity. One of the goals of
virtualization is to target this unused capacity.
Industry Utilisation
Utilities provide an excellent model for the challenges IT faces in capturing
and managing the unused CPU power available.
Managing surplus capacity is a practice electrical utility companies have
tackled for decades. Power grids across the country are taxed at different
times of the day and year requiring the efficient management of electricity
flow. To address this issue, electric utilities create power grids that create
redundant paths and lines so that power can be routed from any power plant
to any load center, through a variety of routes, based on the economics of
the transmission path and the cost of power.
The IT industry coined the phrase "Utility Computing" to define the
computing model in which resources (CPU power, storage space, etc.) are
made available to the user on an as-needed basis (like a utility grid). The goal of the utility computing model is
to maximize the efficient use of computing resources and minimize user costs. Organizations are able to dial
up or dial down usage in real time, to meet the varying demands of business. Utility computing uses
virtualization as a technology foundation.
The business value of virtualization is significant as well. It will allow IT greater flexibility and speed in
adapting to changing business requirements.
Virtualization An Old IT Practice Revitalized
Virtualization was first implemented more than 30 years ago as a way to logically partition mainframe
computers into separate virtual machines. This was called multitasking: the ability to run multiple
applications and processes at the same time. Since mainframes were very expensive, getting the most from the
computing power was critical.
In the 1990s, client/server computing became the rage. The client/server architecture created a distributed
processing environment where transaction processing was split between a server and a desktop computer. So,
unlike the mainframe environment where resources were shared centrally, a percentage of the processing was
distributed to the desktop. Client/server architectures took advantage of the new more powerful PCs and
empowered users to support their own application and business requirements. But, over the next decade, IT
found these environments to be difficult-to-manage and difficult-to-control islands of computing.
The client/server architecture is a good example of the centralization/decentralization dynamics of the IT
world. As soon as users became enabled with desktop spreadsheets, word processing or modules of ERP
systems, the IT architecture became decentralized. Security became an overriding issue. Data storage and
recovery became a concern.
As a result, the technology swing is back towards centralization. In effect, we have come full circle, "back to
the future" so to speak. The architectural profile, though, is different than before and quite advanced.
Understanding the ubiquitous nature of the Internet helps illustrate the concept of virtualization. When you surf
the Internet, you have no idea (nor do you care) where a transaction is being processed. If a purchase is made
on Amazon or eBay, a user isn't wondering whether that transaction is occurring in San Francisco, Paris or
Beijing. The transaction simply happens. This modern day definition of distributed processing means the
following:
The architecture is multi-tiered
Each tier is capable of significant processing power
The topology is virtual (cloud)
The user is insulated from knowing where a transaction is processed
Virtualization Used Today in Industry
The framework in which virtualization aligns in today's IT world is much broader than the 1960s. In practical
implementation, virtualization can be used for a variety of IT infrastructure challenges. Here are some
examples:
Machine Virtualization: multiple computers acting as one ("Meta Computers").
Operating System Virtualization: enabling multiple isolated and secure virtualized servers to run on a
single physical server (the same OS kernel is used to implement the guest environments).
Application Virtualization: using a software virtualization layer to encapsulate a server or desktop
application from the local operating system. The application still executes locally using local
resources, but without being installed in the traditional sense.
Network Virtualization: the network is configured to route requests to virtual devices, where processing is
completed across platforms.
Much of the virtualization technology is accomplished with software. Virtualization creates an abstraction
layer (a way of hiding the implementation details of a particular set of functionality) that insulates the business
(i.e. transactions) from the structure of the IT architecture. The hiding of this technical detail is called
encapsulation.
The Benefits of Virtualization
The benefits of virtualization reach beyond the ability to take advantage of unused processor capacity. Here
are some of those benefits:
Resource utilization: virtualization provides economies of scale that allow existing platforms to be better
utilized.
Infrastructure Cost Reduction: IT can reduce the number of servers and related IT hardware.
Application and Network Design Flexibility: Application programmers can design and build distributed
applications that span systems and geography.
Operational Flexibility: IT administrators can accomplish more with a single stroke in a virtual
environment than with the support and maintenance of multiple devices.
Portability: Virtual machines combined with the encapsulation of virtual data make it easy to move
virtual machines from one environment to another:
a. For maintenance
b. For replication (disaster recovery)
c. For resource utilization
System Availability: significant reductions in downtime for system maintenance can be achieved with
virtualization.
Recovery: in case of disaster, virtualized systems can be recovered more quickly.
The Virtual Infrastructure
Virtualization of a server or multiple servers is one thing. Designing an entire IT infrastructure for
virtualization is another. The next step in this technology and resource consolidation is to virtualize the
complete IT infrastructure. This is where the economies of scale become even more favorable. In fact, the
broader the implementation scale of virtualization, the more dramatically the benefits listed above grow.
For the virtual infrastructure, we are taking into account the servers, operating systems, applications, network,
data storage, user systems (desktops, remote laptops, etc.), disaster recovery mechanisms and more.
The virtual infrastructure represents the design and layout knowledgebase of all components of the
infrastructure. The illustration below shows the abstraction layer of the virtualization architecture.
Applications and users are insulated from connecting to a specific server. Instead users connect to
applications and data stored throughout the network. They don't, and should not have to, be concerned about
where the actual transaction is being processed.
[Figure: applications and their operating systems running above the virtualization abstraction layer.]
Companies offering Virtualization Technology
1. VMware
Find a major data center anywhere in the world that doesn't use VMware, and then pat yourself on the back
because you've found one of the few. VMware dominates the server virtualization market. Its domination
doesn't stop with its commercial product, vSphere. VMware also dominates the desktop-level virtualization
market and perhaps even the free server virtualization market with its VMware Server product. VMware
remains in the dominant spot due to its innovations, strategic partnerships and rock-solid products.
2. Citrix
Citrix was once the lone wolf of application virtualization, but now it also owns the world's most-used cloud
vendor software: Xen (the basis for its commercial XenServer). Amazon uses Xen for its Elastic Compute
Cloud (EC2) services. So do Rackspace, Carpathia, SoftLayer and 1and1 for their cloud offerings. On the
corporate side, you're in good company with Bechtel, SAP and TESCO.
3. Oracle
If Oracle's world domination of the enterprise database server market doesn't impress you, its acquisition of
Sun Microsystems now makes it an impressive virtualization player. Additionally, Oracle owns an operating
system (Sun Solaris), multiple virtualization software solutions (Solaris Zones, LDoms and xVM) and server
hardware (SPARC). What happens when you pit an unstoppable force (Oracle) against an immovable object
(the Data Center)? You get the Oracle-centered Data Center.
4. Microsoft
Microsoft came up with the only non-Linux hypervisor, Hyper-V, to compete in a tight server virtualization
market that VMware currently dominates. Not easily outdone in the data center space, Microsoft offers
attractive licensing for its Hyper-V product and the operating systems that live on it. For all Microsoft shops,
Hyper-V is a competitive solution. And, for those who have used Microsoft's Virtual PC product, virtual
machines migrate to Hyper-V quite nicely.
5. Red Hat
For the past 15 years, everyone has recognized Red Hat as an industry leader and open source champion.
Hailed as the most successful open source company, Red Hat entered the world of virtualization in 2008 when
it purchased Qumranet and with it, its own virtual solution: KVM and SPICE (Simple Protocol for
Independent Computing Environment). Red Hat released the SPICE protocol as open source in December
2009.
6. Amazon
Amazon's Elastic Compute Cloud (EC2) is the industry standard virtualization platform. Ubuntu's Cloud
Server supports seamless integration with Amazon's EC2 services. Engine Yard's Ruby application services
leverage Amazon's cloud as well.
7. Google
When you think of Google, virtualization might not make the top of the list of things that come to mind, but
its Google Apps, AppEngine and extensive Business Services list demonstrate how it has embraced
cloud-oriented services.
Three Kinds of Server Virtualization
There are three ways to create virtual servers: full virtualization, Para-virtualization and OS-level
virtualization. They all share a few common traits. The physical server is called the host. The virtual servers
are called guests. The virtual servers behave like physical machines. Each system uses a different approach to
allocate physical server resources to virtual server needs.
Full virtualization uses a special kind of software called a hypervisor. The hypervisor interacts directly with
the physical server's CPU and disk space. It serves as a platform for the virtual servers' operating systems. The
hypervisor keeps each virtual server completely independent and unaware of the other virtual servers running
on the physical machine. Each guest server runs on its own OS -- you can even have one guest running on
Linux and another on Windows.
The hypervisor monitors the physical server's resources. As virtual servers run applications, the hypervisor
relays resources from the physical machine to the appropriate virtual server. Hypervisors have their own
processing need, which means that the physical server must reserve some processing power and resources to
run the hypervisor application. This can impact overall server performance and slow down applications.
The Para-virtualization approach is a little different. Unlike the full virtualization technique, the guest
servers in a para-virtualization system are aware of one another. A para-virtualization hypervisor doesn't need
as much processing power to manage the guest operating systems, because each OS is already aware of the
demands the other operating systems are placing on the physical server. The entire system works together as a
cohesive unit.
An OS-level virtualization approach doesn't use a hypervisor at all. Instead, the virtualization capability is
part of the host OS, which performs all the functions of a fully virtualized hypervisor. The biggest limitation
of this approach is that all the guest servers must run the same OS. Each virtual server remains independent
from all the others, but you can't mix and match operating systems among them. Because all the guest
operating systems must be the same, this is called a homogeneous environment.
The Challenges of Managing a Virtual Environment
While virtualization offers a number of significant business benefits, it also introduces some new management
challenges that must be considered and planned for by companies considering a virtualization strategy.
The key management challenges for companies adopting virtualization include:
Policy-based management
Bandwidth implications
Image proliferation
Security
Human issues
Some of the other challenges of deploying and managing a virtual environment include: a new level of
complexity for capacity planning; a lack of vendor support for applications running on virtual systems; an
increased reliance on hardware availability; cost accounting difficulties (i.e. measuring usage not just for
physical servers but also for individual parts of a server); an additional layer of monitoring complexity; the
potential for significant up-front costs; and an overall increase in the complexity of the IT environment.
Policy-Based Management
Enterprises should look to deploy automated policy based management alongside their virtualization strategy.
Resource management, for example, should include automated policy-based tools for disk allocation and
usage, I/O rates, CPU usage, memory allocation and usage, and network I/O. Management tools need to be
able to throttle resources in shared environments, to maintain service levels and response times appropriate to
each virtual environment. Administrators should be able to set maximum limits, and allocate resources across
virtual environments proportionally. Allocations need to have the capability to change dynamically to respond
to peaks and troughs in load characteristics. Management tools will also be required to automate physical to
virtual, virtual to virtual, and virtual to physical migration.
Bandwidth Implications
Enterprises should make sure they have the appropriate network bandwidth for their virtualization
requirements. For example, instead of one server using a 100Mb Ethernet cable, now 10 or even 100 virtual
servers must share the same physical pipe. While less of a problem within the datacenter or for
communication between virtual servers running in a single machine, network bandwidth is a significant issue
for application streaming and remote desktop virtualization. These technologies deliver quite substantial
traffic to end users, in most cases significantly higher than is required for standard-installed desktop
computing. Streaming technologies, although in many cases more efficient than complete application
delivery, also impose high bandwidth requirements.
Image Proliferation
Operating system and server virtualization can lead to a rapid proliferation of system images, because it is so
much easier and faster to deploy a new virtual image than to deploy a new physical server, without approval
or hardware procurement. This can impose very high management and maintenance costs, and potentially lead
to significant licensing issues including higher costs and compliance risks. This proliferation also leads to
significant storage issues, such as competing I/O and extreme fragmentation, requiring much faster and
multi-channel disk access, and more maintenance time, effort, and cost. Enterprises need to manage their virtual
environment with the same level of discipline as their physical infrastructure, using discovery tools to detect
and prevent new systems from being created without following proper process.
Security
While virtualization can have many worthwhile security benefits, security also becomes more of a
management issue in a virtualized environment. There will be more systems to secure, more points of entry,
more holes to patch, and more interconnection points across virtual systems (where there is most likely no
router or firewall), as well as across physical systems. Access to the host environment becomes more critical,
as it will often allow access to multiple guest images and applications. Enterprises need to secure virtual
images just as well as they secure physical systems.
Human Issues
Enterprises should not underestimate the potential for human issues to affect their virtualization plans
adversely. Virtualization requires a new set of skills and methodologies, not just within IT, but often (certainly
in the case of application and desktop virtualization) in the end-user community. Perhaps most importantly,
this new technology requires new and creative thinking, not just new training and skills.
Chapter 6
Window Server 2008 Hyper V
Hyper-V is a role in Windows Server 2008 and Windows Server 2008 R2 that provides you with the tools
and services you can use to create a virtualized server computing environment. This type of environment is
useful because you can create and manage virtual machines, which allow you to run multiple operating
systems on one physical computer and isolate the operating systems from each other. This guide introduces
Hyper-V by providing instructions for installing this role and configuring a virtual machine.
Hyper-V has specific requirements. Hyper-V requires an x64-based processor, hardware-assisted
virtualization, and hardware data execution prevention (DEP). Hyper-V is available in x64-based versions of
Windows Server 2008: specifically, the x64-based versions of Windows Server 2008 Standard, Windows
Server 2008 Enterprise, and Windows Server 2008 Datacenter.
If your computer is running Windows Server 2008, verify that your computer has been updated with the
release version of Hyper-V before you install Hyper-V. If your computer is running Windows Server 2008
R2, skip this step.
The release version (RTM) of Windows Server 2008 included the pre-release version of Hyper-V. The release
version of Hyper-V is offered through Windows Update as a recommended update, Hyper-V Update for
Windows Server 2008 x64 Edition (KB950050). However, you also can obtain the update through the
Microsoft Download Center.
On a full installation of Windows Server 2008, click Start, click Windows Update, click View update
history, and then click Installed Updates.
On a Server Core installation, at the command prompt, type: wmic qfe list
Look for update number kbid=950050, which indicates that the update for Hyper-V has been installed.
You can install Hyper-V on either a full installation or a Server Core installation. You can use Server
Manager to install Hyper-V on a full installation, as described in the following procedure. To install on a
Server Core installation, you must perform the installation from a command prompt. Run the following
command: Start /w ocsetup Microsoft-Hyper-V
To install Hyper-V on a full installation of Windows Server 2008
1. Click Start, and then click Server Manager.
2. In the Roles Summary area of the Server Manager main window, click Add Roles.
3. On the Select Server Roles page, click Hyper-V.
4. On the Create Virtual Networks page, click one or more network adapters if you want to make their
network connection available to virtual machines.
Note
The type of network you can create in this step is called an external virtual network. If you create it now you
can connect the virtual machine to it when you create the virtual machine in Step 2. To create virtual networks
later or reconfigure existing networks, see Step 4: Configure virtual networks.
5. On the Confirm Installation Selections page, click Install.
6. The computer must be restarted to complete the installation. Click Close to finish the wizard, and then
click Yes to restart the computer.
7. After you restart the computer, log on with the same account you used to install the role. After the
Resume Configuration Wizard completes the installation, click Close to finish the wizard.
Step 2: Create and set up a virtual machine
After you have installed Hyper-V, you can create a virtual machine and set up an operating system on the
virtual machine.
Before you create the virtual machine, you may find it helpful to consider the following questions. You can
provide answers to the questions when you use the New Virtual Machine Wizard to create the virtual
machine.
Is the installation media available for the operating system you want to install on the virtual machine?
You can use physical media, a remote image server, or an .ISO file. The method you want to use
determines how you should configure the virtual machine.
How much memory will you allocate to the virtual machine?
Where do you want to store the virtual machine and what do you want to name it?
To create and set up a virtual machine
1. Open Hyper-V Manager. Click Start, point to Administrative Tools, and then click Hyper-V
Manager.
2. From the Action pane, click New, and then click Virtual Machine.
3. From the New Virtual Machine Wizard, click Next.
4. On the Specify Name and Location page, specify what you want to name the virtual machine and
where you want to store it.
5. On the Memory page, specify enough memory to run the guest operating system you want to use on
the virtual machine.
6. On the Networking page, connect the network adapter to an existing virtual network if you want to
establish network connectivity at this point.
Note
If you want to use a remote image server to install an operating system on your test virtual machine,
select the external network.
7. On the Connect Virtual Hard Disk page, specify a name, location, and size to create a virtual hard
disk so you can install an operating system on it.
8. On the Installation Options page, choose the method you want to use to install the operating system:
o Install an operating system from a boot CD/DVD-ROM. You can use either physical media or
an image file (.iso file).
o Install an operating system from a boot floppy disk.
o Install an operating system from a network-based installation server. To use this option, you
must configure the virtual machine with a legacy network adapter connected to an external
virtual network. The external virtual network must have access to the same network as the
image server.
9. Click Finish.
After you create the virtual machine, you can start the virtual machine and install the operating system.
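Windows Server 2008 manages Hyper-V mainly through the console steps described above, but later Windows Server releases (2012 and up) include a Hyper-V PowerShell module that can script the same procedure. A sketch, assuming that module is available, that a virtual network named "External LAN" already exists, and that the paths shown are examples:

   # Create the VM with a new 40 GB virtual hard disk, connected to an existing virtual network
   New-VM -Name "TestVM" -MemoryStartupBytes 1024MB -NewVHDPath "D:\VMs\TestVM.vhdx" -NewVHDSizeBytes 40GB -SwitchName "External LAN"
   # Attach the installation media (.iso file) and start the VM to begin operating system setup
   Set-VMDvdDrive -VMName "TestVM" -Path "D:\ISO\install.iso"
   Start-VM -Name "TestVM"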
Step 3: Install the guest operating system and integration services
In the final step of this process, you connect to the virtual machine to set up the operating system, which is
referred to as the guest operating system. As part of the setup, you install a software package that improves
integration between the virtualization server and the virtual machine.
The instructions in this step assume the following:
You specified the location of the installation media when you created the virtual machine.
You are installing an operating system for which integration services are available.
To install the guest operating system and integration services
1. From the Virtual Machines section of the results pane, right-click the name of the virtual machine
you created in step 2 and click Connect. The Virtual Machine Connection tool will open.
2. From the Action menu in the Virtual Machine Connection window, click Start.
3. Proceed through the installation.
Notes
When you are at the point where you need to provide input to complete the process, move the mouse
cursor over the image of the setup window. After the mouse pointer changes to a small dot, click
anywhere in the virtual machine window. This action "captures" the mouse so that keyboard and
mouse input is sent to the virtual machine. To return the input to the physical computer, press
Ctrl+Alt+Left arrow and then move the mouse pointer outside of the virtual machine window.
After the operating system is set up, you are ready to install the integration services. From the Action
menu of Virtual Machine Connection, click Insert Integration Services Setup Disk. On Windows
operating systems, you must close the New Hardware Wizard to start the installation. If Autorun does
not start the installation automatically, you can start it manually. Click anywhere in the guest operating
system window and navigate to the CD drive. Use the method that is appropriate for the guest
operating system to start the installation package from the CD drive.
After you have completed the setup and integration services are installed, you can begin using the virtual
machine. You can view or modify the virtual hardware that is configured for the virtual machine by reviewing
the settings of the virtual machine. From the Virtual Machines pane, right-click the name of the virtual
machine that you created in step 2 and click Settings. From the Settings window, click the name of the
hardware to view or change it.
Step 4: Configure virtual networks
You can create virtual networks on the server running Hyper-V to define various networking topologies for
virtual machines and the virtualization server. There are three types of virtual networks you can create:
1. An external network, which provides communication between a virtual machine and a physical
network by creating an association to a physical network adapter on the virtualization server.
2. An internal network, which provides communication between the virtualization server and virtual
machines.
3. A private network, which provides communication between virtual machines only.
The following procedures provide the basic instructions for configuring virtual networks.
To create a virtual network
1. Open Hyper-V Manager.
2. From the Actions menu, click Virtual Network Manager.
3. Under Create virtual network, select the type of network you want to create. The types of network
are External, Internal, and Private. If the network you want to create is an external network, see
Additional considerations below.
4. Click Add. The New Virtual Network page appears.
5. Type a name for the new network. Review the other properties and modify them if necessary.
Note
You can use virtual LAN identification as a way to isolate network traffic. However, this type of
configuration must be supported by the physical network adapter.
6. Click OK to create the virtual network and close Virtual Network Manager, or click Apply to create
the virtual network and continue using Virtual Network Manager.
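On later Windows Server releases (2012 and up), the same three network types can be created with the Hyper-V PowerShell module. A sketch, with the switch and adapter names invented for illustration:

   # External: bound to a physical network adapter on the virtualization server
   New-VMSwitch -Name "External LAN" -NetAdapterName "Ethernet"
   # Internal: communication between the virtualization server and virtual machines
   New-VMSwitch -Name "Internal Only" -SwitchType Internal
   # Private: communication between virtual machines only
   New-VMSwitch -Name "Private VMs" -SwitchType Private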
To add a network adapter to a virtual machine
1. Open Hyper-V Manager. Click Start, point to Administrative Tools, and then click Hyper-V
Manager.
2. In the results pane, under Virtual Machines, select the virtual machine that you want to configure.
3. In the Action pane, under the virtual machine name, click Settings.
4. In the navigation pane, click Add Hardware.
5. On the Add Hardware page, choose a network adapter or a legacy network adapter. Network adapters
can only be added to a virtual machine when the machine is turned off. For more information about
each type of adapter, see "Additional considerations" below.
6. Click Add. The Network Adapter or Legacy Network Adapter page appears.
7. Under Network, select the virtual network you want to connect to.
8. If you want to configure a static MAC address or virtual LAN identifier, specify the address or
identifier you want to use.
9. Click OK.
Additional considerations
By default, membership in the local Administrators group, or equivalent, is the minimum required to
complete this procedure. However, an administrator can use Authorization Manager to modify the
authorization policy so that a user or group of users can complete this procedure.
A legacy network adapter works without installing a virtual machine driver because the driver is
already available on most operating systems. The legacy network adapter emulates a physical network
adapter, multiport DEC 21140 10/100TX 100 MB. A legacy network adapter also supports network-based
installations because it includes the ability to boot to the Pre-Boot Execution Environment
(PXE). The legacy network adapter is not supported in the 64-bit edition of Windows Server 2003 or
the Windows XP Professional x64 Edition.
When you create an external virtual network, it affects how networking is configured on the physical
network adapter. After installation, the management operating system uses a virtual network adapter to
connect to the physical network. (The management operating system runs the Hyper-V role.) When
you look at Network Connections in the management operating system, you will see the original
network adapter and a new virtual network adapter. The original physical network adapter has nothing
bound to it except the Microsoft Virtual Network Switch Protocol, and the virtual network adapter
now has all of the standard protocols and services bound to it. The virtual network adapter that appears
under Network Connections will have the same name as the virtual network with which it is
associated. It is possible to create an internal virtual network, which will expose a virtual network
adapter to the parent partition without the need to have a physical network adapter associated with it.
Hyper-V only binds the virtual network service to a physical network adapter when an external virtual
network is created. However, networking will get disrupted for a short period of time on the network
adapter when a virtual network gets created or deleted.
VHD (virtual hard disk)
In Windows 7 and Windows 8, a virtual hard disk can be used as the running operating system on designated
hardware without any other parent operating system, virtual machine, or hypervisor. Windows
disk-management tools, the DiskPart tool and the Disk Management Microsoft Management Console
(Diskmgmt.msc), can be used to create a VHD file. A supported Windows image (.WIM) file can be deployed
to the VHD and the .vhd file can be copied to multiple systems. The Windows 8 boot manager can be
configured to boot directly into the VHD.
The .VHD file can also be connected to a virtual machine for use with the Hyper-V Role in Windows Server.
Native-boot VHD files are not designed or intended to replace full image deployment on all client or server
systems. Enterprise environments already managing and using .VHD files for virtual machine deployment
will get the most benefit from the native-boot VHD capabilities. Using the .VHD file as a common image
container format for virtual machines and designated hardware simplifies image management and deployment
in an enterprise environment.
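In the DiskPart tool, creating, attaching and preparing a VHD looks like the following sketch (the file name, size in MB and drive letter are examples):

   DISKPART> create vdisk file=C:\vhd\win8.vhd maximum=20480 type=fixed
   DISKPART> select vdisk file=C:\vhd\win8.vhd
   DISKPART> attach vdisk
   DISKPART> create partition primary
   DISKPART> format fs=ntfs quick
   DISKPART> assign letter=V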
Common Scenarios
VHDs with native boot are frequently used in the following scenarios:
Using disk-management tools to create and attach a VHD for offline image management. You can
attach a VHD by using the Attach vdisk command which activates the VHD so that it appears on the
host as a disk drive instead of as a .vhd or .vhdx file.
Mounting reference VHD images on remote shares for image servicing.
Maintaining and deploying a common reference VHD image to execute in either virtual or physical
computers.
Configuring VHD files for native boot without requiring a full parent installation.
Configuring a computer to boot multiple local VHD files that contain different application workloads,
without requiring separate disk partitions.
Using Windows Deployment Services (WDS) for network deployment of VHD images to target
computers for native boot.
Managing desktop image deployment.
Requirements
Native VHD boot has the following dependencies:
The local disk must have at least two partitions: a system partition that contains the Windows 8
boot-environment files and Boot Configuration Data (BCD) store, and a partition to store the VHD file. The
.vhd file format is supported for native boot on a computer with a Windows 7 boot environment, but
you will have to update the system partition to a Windows 8 environment to use the .vhdx file format.
The local disk partition that contains the VHD file must have enough free disk space for expanding a
dynamic VHD to its maximum size and for the page file created when booting the VHD. The page file
is created outside the VHD file, unlike with a virtual machine where the page file is contained inside
the VHD.
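Once a bootable Windows image has been applied to the VHD, a native-boot entry is added to the BCD store with the bcdedit tool. A sketch, where {guid} stands for the identifier printed by the /copy command and the VHD path is an example:

   bcdedit /copy {current} /d "Windows 8 VHD Boot"
   bcdedit /set {guid} device vhd=[C:]\Images\win8.vhd
   bcdedit /set {guid} osdevice vhd=[C:]\Images\win8.vhd
   bcdedit /set {guid} detecthal on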
Benefits
The benefits of native boot capabilities for VHDs include the following:
Using the same image-management tools for creating, deploying, and maintaining system images to be
installed on designated hardware or on a virtual machine.
Deploying an image on a virtual machine or a designated computer, depending on capacity planning
and availability.
Deploying Windows for multiple boot scenarios without requiring separate disk partitions.
Deploying supported Windows images in a VHD container file for faster deployment of reusable
development and testing environments.
Replacing VHD images for server redeployment or recovery.
Limitations
Native VHD support has the following limitations:
Native VHD disk management support can attach approximately 512 VHD files concurrently.
Native VHD boot does not support hibernation of the system, although sleep mode is supported.
VHD files cannot be nested.
Native VHD boot is not supported over Server Message Block (SMB) shares.
Windows BitLocker Drive Encryption cannot be used to encrypt the host volume that contains VHD
files that are used for native VHD boot, and BitLocker cannot be used on volumes that are contained
inside a VHD.
The parent partition of a VHD file cannot be part of a volume snapshot.
An attached VHD cannot be configured as a dynamic disk. A dynamic disk provides features that basic
disks do not, such as the ability to create volumes that span multiple disks (spanned and striped
volumes), and the ability to create fault-tolerant volumes (mirrored and RAID-5 volumes). All
volumes on dynamic disks are known as dynamic volumes.
The parent volume of the VHD cannot be configured as a dynamic disk.
Recommended Precautions
The following are recommended precautions for using VHDs with native boot:
Use Fixed VHD disk types for production servers, to increase performance and help protect user data.
Use Dynamic or Differencing VHD disk types only in non-production environments, such as for
development and testing.
When using Dynamic VHDs, store critical application or user data on disk partitions that are outside
the VHD file, when it is possible. This reduces the size requirements of the VHD. It also makes it
easier to recover application or user data if the VHD image is no longer usable due to a catastrophic
system shutdown such as a power outage.
Types of Virtual Hard Disks
Three types of VHD files can be created by using the disk-management tools:
Fixed hard-disk image. A fixed hard-disk image is a file that is allocated to the size of the virtual
disk. For example, if you create a virtual hard disk that is 2 gigabytes (GB) in size, the system will
create a host file approximately 2 GB in size. Fixed hard-disk images are recommended for production
servers and working with customer data.
Dynamic hard-disk image. A dynamic hard-disk image is a file that is as large as the actual data written to it at any given time. As more data is written, the file dynamically increases in size. For example, the size of the file backing a virtual 2-GB hard disk is initially around 2 megabytes (MB) on the host file system. As data is written to this image, it grows up to a maximum size of 2 GB. Dynamic hard-disk images are recommended for development and testing environments. Dynamic VHD files are smaller and easier to copy, and will expand once mounted.
Differencing hard-disk image. A differencing hard-disk image describes a modification of a parent
image. This type of hard-disk image is not independent; it depends on another hard-disk image to be
fully functional. The parent hard-disk image can be any of the mentioned hard-disk image types,
including another differencing hard-disk image.
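The corresponding DiskPart syntax for the three disk types, as a hedged sketch; the paths, and the 2048-MB size matching the 2-GB example above, are illustrative only:

'create vdisk file=C:\vhd\fixed.vhd maximum=2048 type=fixed',
'create vdisk file=C:\vhd\dynamic.vhd maximum=2048 type=expandable',
'create vdisk file=C:\vhd\child.vhd parent=C:\vhd\dynamic.vhd' |
  Set-Content "$env:TEMP\types.txt"    # a differencing disk inherits its size from its parent
diskpart /s "$env:TEMP\types.txt"

Note that DiskPart refers to a dynamic image as an "expandable" disk.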
Chapter 7
Configuration & Management of Hyper-V
The Hyper-V role enables you to create a virtualized server computing environment using a technology that is
part of Windows Server 2008. This type of environment is useful because you can create and manage virtual
machines, which allows you to run multiple operating systems on one physical computer and isolate the
operating systems from each other. As a result, you can use a virtualized computing environment to improve
the efficiency of your computing resources by utilizing more of your hardware resources.
Hyper-V provides software infrastructure and basic management tools that you can use to create and manage
a virtualized server computing environment. This virtualized environment can be used to address a variety of
business goals aimed at improving efficiency and reducing costs. For example, a virtualized server
environment can help you:
Reduce the costs of operating and maintaining physical servers by increasing your hardware
utilization. You can reduce the amount of hardware needed to run your server workloads.
Increase development and test efficiency by reducing the amount of time it takes to set up hardware
and software and reproduce test environments.
Improve server availability without using as many physical computers as you would need in a failover
configuration that uses only physical computers.
Hyper-V requires specific hardware. You can identify systems that support the x64 architecture and Hyper-V
by searching the Windows Server catalog for Hyper-V as an additional qualification.
To install and use the Hyper-V role, you need the following:
An x64-based processor. Hyper-V is available in x64-based versions of Windows Server 2008, specifically the x64-based versions of Windows Server 2008 Standard, Windows Server 2008 Enterprise, and Windows Server 2008 Datacenter.
Hardware-assisted virtualization. This is available in processors that include a virtualization option, specifically Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V).
Hardware-enforced Data Execution Prevention (DEP) must be available and enabled. Specifically, you must enable the Intel XD bit (execute disable bit) or the AMD NX bit (no execute bit).
Configure Hyper-V
You can configure the server running Hyper-V by modifying Hyper-V settings. You can use these settings to
specify where certain files are stored by default and control interactions such as keyboard combinations and
logon credentials.
There are two categories of Hyper-V settings:
Server settings, which specify the default location of virtual hard disks and virtual machines.
User settings, which enable you to customize interactions with Virtual Machine Connection, and
display messages and wizard pages if you hid them previously. Settings for Virtual Machine
Connection include the mouse release key and Windows key combinations.
To configure Hyper-V settings
1. Open Hyper-V Manager. Click Start, point to Administrative Tools, and then click Hyper-V
Manager.
2. In the Actions pane, click Hyper-V Settings.
3. In the navigation pane, click the setting that you want to configure.
4. Click OK to save the changes and close Hyper-V Settings, or click Apply to save the changes and
configure other settings.
Create Virtual Machines
The New Virtual Machine Wizard provides you with a simple and flexible way to create a virtual machine.
The New Virtual Machine Wizard is available from Hyper-V Manager.
When you use the wizard, you have two basic options for creating a virtual machine:
You can use default settings to create a virtual machine without proceeding through all the
configuration pages of the wizard. This type of virtual machine is configured as follows:
Name: New Virtual Machine
Location: The default location configured for the server running Hyper-V
Memory: 512 MB
Network connection: Not connected
Virtual hard disk: Dynamically expanding hard disk with a storage capacity of 127 gigabytes
Installation options: No media is specified
SCSI controller: No disks connected to this device. Integration services are required in the guest operating system in order to use this device. Some newer versions of Windows include integration services.
You can use settings that you specify on the configuration pages to create a virtual machine that is
customized for your needs.
To create a default virtual machine
1. Open Hyper-V Manager. Click Start, point to Administrative Tools, and then click Hyper-V
Manager.
2. From the Action pane, click New, and then click Virtual Machine.
3. On the Before You Begin page, click Finish. Or, if the Specify Name and Location page is the first
page that appears, click Finish on that page.
To create a customized virtual machine
1. Open Hyper-V Manager. Click Start, point to Administrative Tools, and then click Hyper-V
Manager.
2. From the Action pane, click New, and then click Virtual Machine.
3. Proceed through the pages of the wizard to specify the custom settings that you want to make. You can
click Next to move through each page of the wizard, or you can click the name of a page in the left
pane to move directly to that page.
4. After you have finished configuring the virtual machine, click Finish.
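On later versions of Windows Server (2012 onward), the same results can be scripted with the Hyper-V PowerShell module; Windows Server 2008 itself exposes only the wizard and WMI. A sketch with hypothetical names and paths:

New-VM -Name 'New Virtual Machine' -MemoryStartupBytes 512MB    # wizard-style defaults
New-VM -Name 'Web01' -MemoryStartupBytes 1GB -NewVHDPath 'C:\VMs\Web01.vhdx' -NewVHDSizeBytes 127GB
Start-VM -Name 'Web01'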
Create Virtual Hard Disks
The New Virtual Hard Disk Wizard provides you with a simple way to create a virtual hard disk. This wizard
creates the following types of disks:
Fixed virtual hard disk
Dynamically expanding virtual hard disk
Differencing virtual hard disk
Many of the options you can use to customize the virtual hard disk differ depending on the type of virtual hard
disk you create. In all cases, the virtual hard disk requires a name and a storage location.
To create a virtual hard disk
1. Open Hyper-V Manager. Click Start, point to Administrative Tools, and then click Hyper-V
Manager.
2. In the Action pane, click New, and then click Hard Disk.
3. Proceed through the pages of the wizard to customize the virtual hard disk. You can click Next to
move through each page of the wizard, or you can click the name of a page in the left pane to move
directly to that page.
4. After you have finished configuring the virtual hard disk, click Finish.
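Where the Hyper-V PowerShell module mentioned earlier is available, the wizard's three disk types map onto New-VHD switches; the paths and sizes below are examples only:

New-VHD -Path 'C:\VHDs\fixed.vhdx' -SizeBytes 10GB -Fixed
New-VHD -Path 'C:\VHDs\dynamic.vhdx' -SizeBytes 10GB -Dynamic
New-VHD -Path 'C:\VHDs\diff.vhdx' -Differencing -ParentPath 'C:\VHDs\dynamic.vhdx'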
Install a Guest Operating System
A guest operating system is the operating system that you install and run in a virtual machine. Before you can
install the guest operating system, if you did not specify the location of the installation media when you
created the virtual machine, you will need to perform one of the following steps:
Obtain the installation media for the operating system and configure the virtual machine to use the
CD/DVD drive to access the installation media.
If you want to perform a network-based installation, configure the virtual machine to use a legacy network adapter that is connected to an external virtual network. This type of network provides connectivity to a physical network by routing traffic through an external virtual network to a physical network adapter.
After you have configured the virtual machine appropriately, you can install the guest operating system.
To install the guest operating system
1. Open Hyper-V Manager. Click Start, point to Administrative Tools, and then click Hyper-V
Manager.
2. Connect to the virtual machine from the Virtual Machines section of the results pane, using one of the following methods:
o Right-click the name of the virtual machine and click Connect.
o Select the name of the virtual machine and, in the Action pane, click Connect.
3. The Virtual Machine Connection tool opens.
4. From the Action menu in the Virtual Machine Connection window, click Start.
5. The virtual machine starts, searches the startup devices, and loads the installation package.
6. Proceed through the installation.
Connect to a Virtual Machine
Virtual Machine Connection is a tool that you use to connect to a virtual machine so that you can install or
interact with the guest operating system in a virtual machine. Some of the tasks that you can perform by using
Virtual Machine Connection include the following:
Connect to the video output of a virtual machine
Control the state of a virtual machine
Take snapshots of a virtual machine
Modify the settings of a virtual machine
Before you begin
Virtual Machine Connection is installed automatically when you install the Hyper-V role on a full installation
of Windows Server 2008.
By default, Virtual Machine Connection uses the same credentials that you used to log on to your current Windows session to establish a session to a running virtual machine. If you want Virtual Machine Connection to use a different set of credentials, you can configure Hyper-V so that you will be prompted for credentials.
To connect to a virtual machine from Hyper-V Manager
1. Open Hyper-V Manager. Click Start, point to Administrative Tools, and then click Hyper-V
Manager.
2. In the results pane, under Virtual Machines, right-click the name of the virtual machine and click
Connect.
3. The Virtual Machine Connection tool opens.
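The same tool can also be launched directly from a PowerShell or command prompt; the host and virtual machine names here are placeholders:

vmconnect.exe localhost 'Test VM'    # opens Virtual Machine Connection to 'Test VM' on this host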
Manage Virtual Networks
You can create many virtual networks on the server running Hyper-V to provide a variety of communications
channels. For example, you can create networks to provide the following:
Communications between virtual machines only. This type of virtual network is called a private
network.
Communications between the virtualization server and virtual machines. This type of virtual network
is called an internal network.
Communications between a virtual machine and a physical network by creating an association to a
physical network adapter on the virtualization server. This type of virtual network is called an external
network. As a best practice, we recommend that you provide the physical computer with more than
one physical network adapter. Use one physical network adapter to provide virtual machines with an
external virtual network, including remote access to the virtual machines. Use the other network
adapter for all network communications with the management operating system, including remote
access to the Hyper-V role. The management operating system runs the Hyper-V role.
You can use Virtual Network Manager to add, remove, and modify the virtual networks. Virtual Network
Manager is available from Hyper-V Manager.
To add a virtual network
1. Open Hyper-V Manager.
2. From the Actions menu, click Virtual Network Manager.
3. Under Create virtual network, select the type of network you want to create.
4. Click Add. The New Virtual Network page appears.
5. Type a name for the new network. Review the other properties and modify them if necessary.
6. Click OK to save the virtual network and close Virtual Network Manager, or click Apply to save the
virtual network and continue using Virtual Network Manager.
To modify a virtual network
1. Open Hyper-V Manager.
2. From the Actions menu, click Virtual Network Manager.
3. Under Virtual Networks, click the name of the network you want to modify.
4. Under Virtual Network Properties, edit the appropriate properties to modify the virtual network.
5. Click OK to save the changes and close Virtual Network Manager, or click Apply to save the changes
and continue using Virtual Network Manager.
To remove a virtual network
1. Open Hyper-V Manager.
2. From the Actions menu, click Virtual Network Manager.
3. Under Virtual Networks, click the name of the network you want to remove.
4. Under Virtual Network Properties, click Remove.
5. Click OK to save the changes and close Virtual Network Manager, or click Apply to save the changes
and continue using Virtual Network Manager.
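For comparison, in the later Hyper-V PowerShell module (Windows Server 2012 onward) virtual networks are exposed as virtual switches. A hedged sketch; the switch names and the 'Ethernet' adapter are assumptions:

New-VMSwitch -Name 'External LAN' -NetAdapterName 'Ethernet'    # external network
New-VMSwitch -Name 'Internal Net' -SwitchType Internal          # host-to-VM traffic
New-VMSwitch -Name 'Private Net' -SwitchType Private            # VM-to-VM traffic only
Set-VMSwitch -Name 'External LAN' -Notes 'Production traffic'   # modify a network
Remove-VMSwitch -Name 'Private Net' -Force                      # remove a network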
Configure a Virtual Machine for High Availability
A highly available virtual machine requires that you perform some specific configuration and tasks. Some
configuration and tasks must be done before you make the virtual machine highly available, while others must
be done afterward. Before you try to make a virtual machine highly available, review the following
information to make sure that the virtual machine is configured appropriately.
Making the virtual machine highly available
Before you make the virtual machine highly available, you need to review, and in some cases, modify the
networking, security, and storage in your environment.
Networking. All nodes in the same cluster must use the same name for the virtual network that provides
external networking for the virtual machines. For example, if you created a virtual network through the
Add Roles Wizard when you installed the role, a name is assigned to the virtual network based on the
network adapter. This name will be different on each physical computer. You must delete the virtual
network and then recreate it, using the same name on each physical computer in the cluster. Also note
that if a physical network adapter uses static settings, such as a static IP address, and IPv6 is not disabled,
the static settings will be deleted when you connect a virtual network to the physical adapter. In that case,
you must reconfigure the static settings.
Processor. If the nodes in your cluster use different processor versions, make sure that you configure the virtual machine for processor compatibility. This helps ensure that you can fail over or migrate a virtual machine without encountering problems due to different virtualization features on different versions of the same manufacturer's processor. However, this does not provide compatibility between different processor manufacturers.
Security. To avoid potential problems you might encounter trying to administer highly available virtual
machines, all nodes in the cluster must use the same authorization policy. There are two ways you can
accomplish this:
o Use a local, XML-based authorization store on each node. The policy must be configured the
same on each store in the cluster.
o Use an authorization store located in Active Directory Domain Services (AD DS).
Storage. You must use shared storage that is supported by both the Failover Clustering feature and
Hyper-V. When you create the virtual machine, choose the option to store the virtual machine in a new
folder and specify the location of the shared storage.
After you make the virtual machine highly available
After you make the virtual machine highly available, do the following:
Install the guest operating system and the integration services.
Configure the virtual machine to take no action through Hyper-V if the physical computer shuts down, by setting the Automatic Stop Action setting to None. The virtual machine state must then be managed through the Failover Clustering feature.
If you use snapshots on a clustered virtual machine, all resources associated with each snapshot must
be stored on the same shared storage as the virtual machine.
If you change the configuration of a virtual machine, we recommend that you use the Failover Cluster Manager snap-in to access the virtual machine settings. When you do this, the cluster is updated automatically with the configuration changes. However, if you make changes to the virtual machine settings from the Hyper-V Manager snap-in, you must update the cluster manually after you make the changes. If the configuration is not refreshed after networking or storage changes are made, a subsequent failover may not succeed, or may succeed but result in the virtual machine being configured incorrectly.
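Where the Failover Clustering cmdlets are available (Windows Server 2008 R2 onward), making an existing virtual machine highly available can be sketched as follows. The virtual machine and cluster names are hypothetical, and the virtual machine must already reside on shared storage visible to every node:

Import-Module FailoverClusters
Add-ClusterVirtualMachineRole -VMName 'Web01' -Cluster 'HVCluster'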
Monitor the performance of the Hyper-V server
While most of the principles of analyzing the performance of a guest operating system installed on a Hyper-V virtual machine are the same as for analyzing the performance of an operating system installed on a physical machine, many of the collection methods are different. The following considerations are significant when evaluating the performance of your BizTalk Server solution running on a guest operating system installed on a Hyper-V virtual machine.
Measuring Disk I/O Performance
The following considerations apply when measuring disk I/O performance on a guest operating system
installed on a Hyper-V virtual machine:
Measure disk latency on the Hyper-V host operating system. The best initial indicator of disk performance on a Hyper-V host operating system is obtained by using the \LogicalDisk(*)\Avg. Disk sec/Read and \LogicalDisk(*)\Avg. Disk sec/Write performance monitor counters. These performance monitor counters measure the amount of time that read and write operations take to respond to the operating system. As a general rule of thumb, average response times greater than 15 ms are considered sub-optimal. This is based on the typical seek time of a single 7200-RPM disk drive without cache. The use of logical disk rather than physical disk performance monitor counters is recommended because Windows applications and services utilize logical drives represented as drive letters, whereas the physical disk (LUN) presented to the operating system can be composed of multiple physical disk drives in a disk array.
Use the following rule of thumb when measuring disk latency on the Hyper-V host operating system using the \LogicalDisk(*)\Avg. Disk sec/Read or \LogicalDisk(*)\Avg. Disk sec/Write performance monitor counters:
1ms to 15ms = Healthy
15ms to 25ms = Warning or Monitor
26ms or greater = Critical, performance will be adversely affected
Measure disk latency on guest operating systems. Response times of the disks used by the guest operating systems can be measured using the same performance monitor counters used to measure response times of the disks used by the Hyper-V host operating system.
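For example, the counters above can be sampled from PowerShell with Get-Counter; the interval and sample count are arbitrary choices:

Get-Counter -Counter '\LogicalDisk(*)\Avg. Disk sec/Read',
                     '\LogicalDisk(*)\Avg. Disk sec/Write' -SampleInterval 5 -MaxSamples 12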
Use the following performance monitor counters to measure the impact of available memory on the
performance of a guest operating system installed on a Hyper-V virtual machine:
Measure available memory on the Hyper-V host operating system. The amount of physical memory available to the Hyper-V host operating system can be determined by monitoring the \Memory\Available MBytes performance monitor counter on the physical computer. This counter
reports the amount of free physical memory available to the host operating system. Use the following rules of thumb when evaluating the physical memory available to the host operating system:
o \Memory\Available MBytes measures the amount of physical memory, in megabytes, available to processes running on the computer. The following guidelines, expressed as a percentage of the physical memory installed on the computer, apply when evaluating this performance monitor counter:
50% of memory free or more = Healthy
25% of memory free = Monitor
10% of memory free = Warning
Less than 5% of memory free = Critical, performance will be adversely affected
Hyper-V allows guest computers to share the same physical network adapter. While this helps to consolidate
hardware, take care not to saturate the physical adapter. Use the following methods to ensure the health of the
network used by the Hyper-V virtual machines:
Test network latency. Ping each virtual machine to ensure adequate network latency. On local area networks, expect response times of less than 1 ms.
Test for packet loss. Use the pathping.exe utility to test packet loss between virtual machines. Pathping.exe measures packet loss on the network and is available with all versions of Windows Server since Windows Server 2000. Pathping.exe sends out a burst of 100 ping requests to each network node and calculates how many pings are returned. On local area networks there should be no loss of ping requests from the pathping.exe utility.
Test network file transfers. Copy a 100 MB file between virtual machines and measure the length of time required to complete the copy. On a healthy 100 Mbit (megabit) network, a 100 MB (megabyte) file should copy in 10 to 20 seconds. On a healthy 1 Gbit network, a 100 MB file should copy in about 3 to 5 seconds. Copy times outside these parameters are indicative of a network problem. One common cause of poor network transfers is a network adapter that has auto-detected a 10 Mbit half-duplex network, which prevents the adapter from taking full advantage of available bandwidth.
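A quick sketch of these checks from PowerShell; the machine name and file path are placeholders:

ping vm02                                                               # round-trip latency; expect < 1 ms on a LAN
pathping vm02                                                           # per-hop packet loss
Measure-Command { Copy-Item '\\vm02\share\test100mb.bin' 'C:\Temp\' }   # time a 100-MB transfer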
Measure network utilization on the Hyper-V host operating system. Use performance monitor counters such as \Network Interface(*)\Bytes Total/sec to measure network utilization on the Hyper-V host operating system.
The following considerations apply when evaluating processor performance on a guest operating system
installed on a Hyper-V virtual machine:
Guest operating system processors do not have a set affinity to physical processors/cores. The hypervisor determines how physical resources are used. In the case of processor utilization, the hypervisor schedules guest processor time onto physical processors in the form of threads. This means the processor load of virtual machines will be spread across the processors of the physical computer. Furthermore, virtual machines cannot exceed the processor utilization of the configured number of logical processors. For example, if a single virtual machine is configured to run with 2 logical processors on a physical computer with 8 processors/cores, then the virtual machine cannot exceed the processor capacity of the number of configured logical processors (in this case, 2 processors).
Measure overall processor utilization of the Hyper-V environment using Hyper-V performance monitor counters. For purposes of measuring processor utilization, the host operating system is logically viewed as just another guest operating system. Therefore, the \Processor(*)\% Processor Time performance monitor counter measures the processor utilization of the host operating system only. To measure the total physical processor utilization of the host operating system and all guest operating systems, use the \Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time performance monitor counter. This counter measures the total percentage of time spent by the processors running both the host operating system and all guest operating systems. Use the following thresholds to evaluate overall processor utilization of the Hyper-V environment using the \Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time performance monitor counter:
Less than 60% consumed = Healthy
60% - 89% consumed = Monitor or Caution
90% - 100% consumed = Critical, performance will be adversely affected
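Both counters can be read on the host to compare host-only utilization with total utilization:

Get-Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time'   # host plus all guests
Get-Counter '\Processor(_Total)\% Processor Time'                              # management OS only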
Tools for Measuring Performance
The following tools can be used to measure the performance of the server:
1. Performance Analysis of Logs (PAL) tool
The PAL tool is used to generate an HTML-based report that graphically charts important performance
monitor counters and generates alerts when thresholds for these counters are exceeded. PAL is an excellent
tool for identifying bottlenecks in a BizTalk Server solution to facilitate the appropriate allocation of
resources when optimizing the performance of the solution.
2. SQLIO
The SQLIO tool was developed by Microsoft to evaluate the I/O capacity of a given configuration. As the
name of the tool implies, SQLIO is a valuable tool for measuring the impact of file system I/O on SQL Server
performance.
3. SQL Profiler
Microsoft SQL Server Profiler can be used to capture the Transact-SQL statements that are sent to SQL Server and the SQL Server result sets from these statements. Because BizTalk Server is tightly integrated with SQL Server, the analysis of a SQL Server Profiler trace can be a useful tool for analyzing problems that may occur in BizTalk Server when reading from and writing to SQL Server databases.
4. LoadGen
BizTalk LoadGen 2007 is a load generation tool used to run performance and stress tests against BizTalk
Server.
5. BizUnit
BizUnit is a framework designed for automated testing of BizTalk solutions. BizUnit is an excellent tool for
testing end-to-end BizTalk Server scenarios.
6. IOMeter
IOMeter is an open source tool used for measuring disk I/O performance.
Chapter 8
Google App & Microsoft Office 365
About two decades ago, if anyone had mentioned the term cloud computing, people would have considered it Greek or Latin. But today almost all businesses, irrespective of size, rely on it for reliable tools that help them grow and expand in a sustainable manner. Of all the providers out there in the market, Google Cloud Computing is considered the best not for one but many reasons. The primary reason for people's preference for Google's applications is the fact that many of them have been integrated together in a very seamless manner, making things really easy for getting the job done.
For instance, Gmail, Google Docs, AdWords, Maps and Picasa can all be accessed easily using one email id. This makes the entire operation easier and also highly affordable. If one had to get software coded to meet all these purposes, it would have cost millions of dollars. Moreover, the time spent on developing and maintaining it would also be substantial. Let us now take a detailed look at what services Google Cloud Computing has on offer.
1. Google Mail (Gmail) - The most commonly used service of Google. Though it does not come directly under the purview of cloud computing, one needs to have a Gmail account in order to use the other services. Gmail was one of the first mail servers to have an integrated chat applet. This applet was even able to store conversations automatically in the form of an email.
2. Google Docs - In simple words, Google Docs is Microsoft Office online. You can create all your documents, such as worksheets, documents and even presentations, online and store them on Google's servers. Moreover, since the files are stored on a remote server, you can access them from virtually any place in the world that has an internet connection, which makes them very handy. At the same time, you do not need to worry about the security of the files, as they are encrypted with advanced technology and only people whom you permit can access them.
3. Google Analytics - Using Google's Analytics tool you can monitor all the traffic coming to your website, with data being updated hourly. This way you make decisions after being well informed about the situation, and modify or update your sites accordingly.
4. Google AdWords and AdSense - Once you have got to know your target audience, you can use these revolutionary advertising tools to get the most value for every cent you spend on advertising. With AdWords, your message is sure to reach the precise target you intend to communicate with and thus get good returns. The best part is that data from Google's other cloud computing applications can be integrated, so you will not have to re-enter the same information.
5. Picasa - This is one of the best and most interactive applications in the Google Cloud Computing bouquet. For those who intend to sell physical products online, Picasa is a great way of uploading photographs and images of your products and exhibiting them to clients. This application again can be integrated with your site.
Google Apps for Business - Gmail
More than email
Gmail groups replies to a message so it's easier to follow the conversation.
You can start a text, voice, or video chat right from your inbox.
Stay in the loop when you're on the go from your mobile phone or tablet.
Email wherever you work
Gmail works on any computer or mobile device with a data connection, and offline support lets you keep working even when you're disconnected. Whether you're at your desk, in a meeting, or on a plane, your email is there.
Work fast, save time
Gmail is designed to make you more productive. 25 GB of storage means you never have to delete anything, powerful search lets you find everything, and labels and filters help you stay organized.
Connect with people
Your inbox isn't just about messages, it's about people too. Text, voice, and video chat lets you see who's online and connect instantly. See your contacts' profile photos, recent updates and shared docs next to each email.
Calendar - Stay in sync with your team
Shared calendars make it easy to see when the people you work with are free.
Creating a new event is as easy as typing a sentence.
When you're having trouble scheduling a meeting, Calendar can suggest a time that works for everyone.
Stay organized and on schedule
Organize your day with Calendar and get event reminders on your phone or in your inbox. Attach files or docs to your event so you have the right materials when your meeting starts.
Find time with your team
Calendar sharing makes it easy to find time with the people you work with, and the smart scheduling feature suggests meeting times that work for everyone.
Publish calendars to the web
Create an event calendar and embed it on your website, or set up appointment slots so customers can choose the best time for them.

Drive - Store everything, share anything
Apps users who wish to use Google Drive can now opt in at drive.google.com/start
All your files are accessible from any web browser.
Automatically sync files from your Mac or PC to your personal drive in Google's cloud.
Access your files anywhere
Google Drive on your Mac, PC or mobile device (or your browser) gives you a single place for up-to-date
versions of your files from anywhere. In addition to any file type you choose to upload, Google Docs are also
stored in Google Drive.
Bring your files to life
Share individual files or whole folders with individual people, your entire team or even customers, vendors
and partners. Create and reply to comments on files to get feedback or add ideas.
Buy what you need & grow flexibly
Start with 5 GB of included storage for each of your users. Need more? For as little as $4/month for 20 GB, administrators can centrally purchase and manage up to 16 TB (yes, that's 16,000 GB!) of additional storage for each user.
Docs - Work together better
Create rich documents and work together on the same doc at the same time.
Share lists, track projects and analyze data with powerful spreadsheets.
View and even edit your documents from your mobile phone or tablet.
Create or upload
Create awesome documents, spreadsheets and presentations with Google Docs. Or, you can upload your
existing work to view, share and edit online.
Share securely
Your docs are stored on the web, so sharing them can be as easy as sending a link. You can make your
documents as public or as private as you like and control who can view, comment on and edit each document
at any time.
Work together
Google Docs is designed for teams. Multiple people can edit a document at the same time and integrated chat
and commenting make it easy to work together.
Word processing
Create rich documents with images, tables, equations, drawings, links and more. Gather input and manage
feedback with social commenting.
Spreadsheets
Keep and share lists, track projects, analyze data and track results with our powerful spreadsheet editor. Use
tools like advanced formulas, embedded charts, filters and pivot tables to get new perspectives on your data.
Presentations
Create beautiful slides with our presentation editor, which supports things like embedded videos, animations
and dynamic slide transitions. Publish your presentations on the web so anyone can view them, or share them
privately.
Sites - Shared workspaces for your team
Build custom project sites that include videos, calendars, documents and more.
Building a project site is as easy as writing a document, no coding skills required.
Share your project sites with the right people, inside and outside your company.
Easy to build
Build project sites without writing a single line of code. It's as easy as writing a document. And, to save even
more time, you can choose from hundreds of pre-built templates.
Simple to organize
Use your team site to organize everything from calendars to documents to presentations to videos. Built-in
Google-powered search makes it easy to find exactly what you're looking for later.
Quick to share
Share your site with your team, your entire company or even a customer or partner with the click of a button.
You control who can view and who can edit your site and you can always adjust settings later.
Vault - Add archiving and e-discovery to Google Apps
Vault is optional and adds archiving, e-discovery and information governance capabilities for an additional $5/user/month.
Google Apps Vault helps protect your business from legal risks.
Search the archive for relevant email and chat messages.
Preserve messages beyond their standard retention periods.
Export messages for further review and analysis.
Retention policies
Define retention policies that are automatically applied to your email and chat messages.
Email and chat archiving
Your email and chat messages are archived and retained according to your policies, preventing inadvertent deletions.
E-discovery
Be prepared for litigation and compliance audits with powerful search tools that help you find and retrieve relevant email and chat messages.
Legal hold
Place legal holds on users as needed. Email and chat messages can't be deleted by users when they're placed on hold.
Export
Export specific email and chat messages to standard formats for additional processing and review.
Audits
Run reports on user activity and actions in the archive. Searches, message views, exports and more are shown.
More - Apps comes with even more handy tools
Chrome for Business*
Chrome was built from the ground up to deliver the best experience for Gmail, Docs, Calendar
and more, and supports the most advanced functionality such as offline support and desktop
notifications.
Groups for Business
Google Groups can be used as mailing lists and to share calendars, docs, sites and videos quickly
with coworkers. Groups are easy for users to set up and the group owner can manage
membership in each group.
Cloud Connect for Microsoft Office
Google Cloud Connect for Microsoft Office brings collaborative multi-person editing to the
Microsoft Office experience. You can share, backup and simultaneously edit Microsoft Word,
PowerPoint and Excel documents with coworkers.
Microsoft Office 365
Office is productivity software (including Word, PowerPoint, Excel, Outlook, and OneNote) that is installed on your desktop or laptop computer. Office 365 is an online subscription service that provides email, shared calendars, the ability to create and edit documents online, instant messaging, web conferencing, a public website for your business, and internal team sites, all accessible anywhere from nearly any device.
Customers with Office 2010 installed on their computer can quickly configure their software to work with Office 365. These users can easily retrieve, edit and save Office docs in the Office 365 cloud, co-author docs in real time with others, and quickly initiate PC-to-PC calls, instant messages and web conferences with others. Office 365 is also compatible with Office 2007 and newer editions of Office, and select Office 365 plans include Office Professional Plus.
Microsoft Office 365 is a set of cloud-based messaging and collaboration solutions, built on the most trusted name in office productivity. With Microsoft Office 365, you can help businesses of all sizes to be more productive. Extend your reach to new markets and deepen engagements by delivering solutions that enable workers to collaborate from virtually anywhere, using almost any device, including PCs, phones, and browsers.
Office 365 includes the following products:
Microsoft Exchange Online
Microsoft SharePoint Online
Microsoft Lync Online
Microsoft Office Web Apps
Microsoft Office Professional Plus
For a free trial, log on to: http://www.microsoft.com/en-in/office365/online-software.aspx
Chapter 9
Web Application Security
In earlier computing models, e.g. in client-server, the load for the application was shared between code on the
server and code installed on each client locally. In other words, an application had its own client program
which served as its user interface and had to be separately installed on each user's personal computer. An
upgrade to the server-side code of the application would typically also require an upgrade to the client-side
code installed on each user workstation, adding to the support cost and decreasing productivity.
In contrast, web applications use web documents written in standard formats such as HTML and JavaScript, which are supported by a variety of web browsers. Web applications can be considered a specific variant of client-server software where the client software is downloaded to the client machine when visiting the relevant web page, using standard procedures such as HTTP. The client web software may be updated each time the web page is visited. During the session, the web browser interprets and displays the pages, and acts as the universal client for any web application.
In the early days of the Web, each individual web page was delivered to the client as a static document, but the sequence of pages could still provide an interactive experience, as user input was returned through web form elements embedded in the page markup.
What is a Client?
The 'client' is used in a client-server environment to refer to the program the person uses to run the application. A client-server environment is one in which multiple computers share information, such as entering information into a database. The 'client' is the application used to enter the information, and the 'server' is the application used to store the information.
Benefits of a Web Application
A web application relieves the developer of the responsibility of building a client for a specific type of computer or a specific operating system. Since the client runs in a web browser, the user could be using an IBM-compatible PC or a Mac, running Windows XP or Windows Vista, and using Internet Explorer or Firefox, though some applications require a specific web browser.
Web applications commonly use a combination of server-side script (ASP, PHP, etc.) and client-side script (HTML, JavaScript, etc.). The client-side script deals with the presentation of the information, while the server-side script deals with the hard work of storing and retrieving the information.
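As a toy sketch of this split, the PowerShell script below plays the server-side role, serving a single page whose embedded JavaScript then runs client-side in the browser; the port is an arbitrary choice:

$listener = New-Object System.Net.HttpListener
$listener.Prefixes.Add('http://localhost:8080/')
$listener.Start()
$context = $listener.GetContext()    # wait for one request from a browser
$html = '<html><body><h1>Hello</h1><script>document.write(new Date());</script></body></html>'
$bytes = [System.Text.Encoding]::UTF8.GetBytes($html)
$context.Response.ContentType = 'text/html'
$context.Response.OutputStream.Write($bytes, 0, $bytes.Length)
$context.Response.Close()
$listener.Stop()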
How Long Have Web Applications Been Around?
Web Applications have been around since before the web gained mainstream popularity. For example, Larry
Wall developed Perl, a popular server-side scripting language, in 1987. That was seven years before the
Internet really started gaining popularity outside of academic and technology circles.
The first mainstream web applications were relatively simple, but the late 1990s saw a push toward more complex web applications. Nowadays, millions of Americans use a web application to file their income taxes on the web.
Future of Web Applications
Most web applications are based on the client-server architecture where the client enters information while the
server stores and retrieves information. Internet mail is an example of this, with companies like Yahoo and
MSN offering web-based email clients. The new push for web applications is crossing the line into those
applications that do not normally need a server to store the information. Your word processor, for example,
stores documents on your computer, and doesn't need a server.
Web applications can provide the same functionality and gain the benefit of working across multiple
platforms. For example, a web application can act as a word processor, storing information and allowing you
to 'download' the document onto your personal hard drive.
If you have seen the new Gmail or Yahoo mail clients, you have seen how sophisticated web applications have become in the past few years. Much of that sophistication is due to AJAX, a programming model for creating more responsive web applications. Google Apps, Microsoft Office Live, and WebEx Web Office are examples of the newest generation of web applications.
Web Application Security
Application security encompasses measures taken throughout the application's life-cycle to prevent exceptions
in the security policy of an application or the underlying system (vulnerabilities) through flaws in the design,
development, deployment, upgrade, or maintenance of the application.
Applications control only the use of resources granted to them, not which resources are granted to them. They, in turn, determine the use of these resources by users of the application through application security.
The Open Web Application Security Project (OWASP) and the Web Application Security Consortium (WASC) publish updates on the latest threats that impair web-based applications. This aids developers, security testers and architects in focusing on better design and mitigation strategies. The OWASP Top 10 has become an industry norm for assessing web applications.
Methodology
According to the patterns & practices Improving Web Application Security book, a principle-based approach
for application security includes:
Knowing your threats.
Securing the network, host and application.
Incorporating security into your software development process.
Note that this approach is technology / platform independent. It is focused on principles, patterns, and
practices.
Threats, Attacks, Vulnerabilities, and Countermeasures
According to the patterns & practices Improving Web Application Security book, the following terms are
relevant to application security:
Asset. A resource of value, such as the data in a database or on the file system, or a system resource.
Threat. A potential event or action that can harm an asset.
Vulnerability. A weakness that makes a threat possible.
Attack (or exploit). An action taken to harm an asset.
Countermeasure. A safeguard that addresses a threat and mitigates risk.
Application Threats / Attacks
According to the patterns & practices Improving Web Application Security book, the following are classes of common application security threats / attacks:
Category - Threats / Attacks
Input Validation - Buffer overflow; cross-site scripting; SQL injection; canonicalization
Authentication - Network eavesdropping; brute force attacks; dictionary attacks; cookie replay; credential theft
Authorization - Elevation of privilege; disclosure of confidential data; data tampering; luring attacks
Configuration management - Unauthorized access to administration interfaces; unauthorized access to configuration stores; retrieval of clear-text configuration data; lack of individual accountability; over-privileged process and service accounts
Sensitive information - Access to sensitive data in storage; network eavesdropping; data tampering
Session management - Session hijacking; session replay; man-in-the-middle attacks
Cryptography - Poor key generation or key management; weak or custom encryption
Parameter manipulation - Query string manipulation; form field manipulation; cookie manipulation; HTTP header manipulation
Exception management - Information disclosure; denial of service
Auditing and logging - User denies performing an operation; attacker exploits an application without trace; attacker covers his or her tracks
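For instance, the SQL injection entry above is typically countered with parameterized queries rather than string concatenation. A minimal sketch using the .NET SqlClient classes from PowerShell; the connection string, table and column names are hypothetical:

$untrusted = Read-Host 'User name'    # attacker-controlled input
$conn = New-Object System.Data.SqlClient.SqlConnection('Server=.;Database=AppDb;Integrated Security=True')
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = 'SELECT Id FROM Users WHERE UserName = @user'    # no string concatenation
[void]$cmd.Parameters.AddWithValue('@user', $untrusted)             # input travels as data, not as SQL
$reader = $cmd.ExecuteReader()
$reader.Close()
$conn.Close()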
Security in the cloud
When looking to move workloads to cloud environments, most Chief Information Officers will say that security is the number one concern. To address those concerns, IT organizations must consider several aspects of security to ensure they do not put their organizations at risk as they explore cloud computing. Some of these security concerns have to do with the cloud provider's service and operational procedures; others have to do with new processes that must be considered, processes that did not exist in the traditional IT model.
To provide effective security for a cloud environment, both the cloud provider and consumer must partner to
provide solutions to the following security concerns:
Governance and Enterprise Risk Management. The ability of an organization to govern and measure enterprise risk that is introduced by cloud computing. This concern includes items such as
legal precedence for agreement breaches, the ability of user organizations to adequately assess the risk of a cloud provider, and the responsibility to protect sensitive data when both user and provider may be at fault.
Compliance and Audit. Maintaining and proving compliance when using cloud computing. Issues involve evaluating how cloud computing affects compliance with internal security policies as well as various regulatory compliance requirements.
Application Security. Securing application software that is running on or being developed in the cloud. This concern includes items such as whether it is appropriate to migrate or design an application to run in the cloud.
Encryption and Key Management. Identifying proper encryption usage and scalable key management. This concern addresses access controls, both for access to resources and for protecting data.
Identity and Access Management. Managing identities and leveraging directory services to provide access control. The focus is on issues that are encountered when extending an organization's identity into the cloud.
Although Governance and Enterprise Risk Management are existing functions within most IT organizations, cloud computing introduces several unique challenges in this area. Part of the responsibility lies with the cloud provider, and part lies with the consumer, to ensure that the overall solution being leveraged meets the governance and Enterprise Risk Management standards of the organization. For example, in the IBM Smart Cloud Enterprise offering, IBM requires its customers to secure the application and operating system that is being used, although IBM does provide a base operating system image with basic security configurations.
In addition, most organizations are bound by some form of security compliance guidelines. These guidelines
and regulations do not change when moving a workload into the cloud environment. Therefore, consumers of
cloud must look at their existing compliance and audit guidelines to ensure that the workloads they move to
the cloud still comply with the guidelines by which their organizations are bound. Also, consumers must
ensure that any audit requirements can still be met even though the workload has been moved into a cloud
environment.
Securing application software that is running or being developed in the cloud is another consideration for security. Standard application security might need to be changed or enhanced based on a cloud provider's environment or customer requirements. Encryption and Key Management become critical when moving a workload to the cloud. Using encryption and a scalable key management solution must be considered when leveraging cloud solutions. For example, IBM Smart Cloud Enterprise provides a robust key management system for secure access to all Linux compute resources.
Finally, Identity and Access Management is critical to the success of cloud solutions. This ensures only
authenticated and authorized individuals get access to the correct components of the workloads that are hosted
in cloud solutions. Solutions such as Tivoli Access Manager, with its WebSEAL reverse proxy, can help with the authentication and authorization of individuals; solutions such as Tivoli Identity Manager can help with managing the identities of users.
Cloud Security Concerns
When addressing security in a cloud environment, consider five key areas to help ensure that your cloud provider and you, as a consumer of the cloud, are creating a cloud solution that meets the business needs. It is critical to consider the governance and enterprise risk aspects of cloud computing along with the compliance
and audit implications to the organization. In addition, application security concerns such as encryption, key
management, and identity and access management must be addressed to ensure security risks are mitigated in
a cloud environment. Although many of these disciplines exist in traditional IT, many of the disciplines must
be reviewed when you move workload to a cloud environment.
Applications in cloud environments - Security Recommendations
With the adoption of cloud computing continuing to accelerate, the need to adopt and maintain effective security and compliance practices is more important than ever. To help enterprises maintain compliance while using the cloud, and keep their networks, applications and data safe, Verizon is offering the following best practices and tips.
Cloud infrastructure security is at the heart of the matter:
Physical security: Facilities should be hardened with climate
control, fire prevention and suppression systems, and
uninterruptable power supplies, and have round-the-clock onsite
security personnel. Look for a provider that offers biometric
capabilities, such as fingerprints or facial recognition, for
physical access control, and video cameras for facility monitoring.
Network security and logical separation: Virtualized versions of firewalls and intrusion prevention systems
should be utilized. Portions of the cloud environment containing sensitive systems and data should be isolated.
Regularly scheduled audits using industry-recognized methods and standards, such as SAS 70 Type II, the
Payment Card Industry Data Security Standard, ISO 27001/27002 and the Cloud Security Alliance Cloud
Controls Matrix, should be conducted.
Inspection: Anti-virus and anti-malware applications, as well as content filtering, should be employed at gateways. Data loss prevention capabilities should be considered when dealing with sensitive information, such as financial and personal data, and proprietary intellectual property.
Administration: Special attention should be paid to cloud hypervisors, the servers that run multiple operating
systems, since they provide the ability to manage an entire cloud environment. Many security and compliance
requirements mandate different network and cloud administrators to provide a separation of duties and added
level of protection. Access to virtual environment management interfaces should be highly restricted, and
application programming interfaces, or APIs, should be locked down or disabled.
Comprehensive monitoring and logging: Nearly all security standards require the ability to monitor and control access to networking, systems, applications and data. A cloud environment, whether in-house or outsourced, must offer the same ability.
Cloud application security is a must:
System security. Virtual machines, or VMs, should be protected by cloud-specific firewalls, intrusion
prevention systems and anti-virus applications, as well as consistent and programmatic patch-management
processes.
Application and data security. Applications should utilize dedicated databases wherever possible, and
application access to databases should be limited. Many security compliance standards require monitoring and
logging of applications and related databases.
Authentication and authorization. Two-factor authentication should be used in addition to user names and passwords, and is a necessity for remote access and any type of privileged access. Roles of authorized users should be clearly defined and kept to the minimum necessary to complete their assigned tasks. Password encryption is advisable. In addition, authentication, authorization and accounting packages should not be highly customized, as this often leads to weakened security protection.
Vulnerability management. Applications should be designed to resist common exploits, such as those listed in the Open Web Application Security Project (OWASP) Top 10. Applications deployed in the cloud require regular patching, vulnerability scanning, independent security testing and continuous monitoring.
Data storage. Enterprises should know the types of data stored in a cloud environment and segregate data
types, as appropriate. Additionally, the physical and logical location of data should be known due to potential
security and privacy implications.
Change management. It is highly recommended that change-management policies for network, system,
application and data administrators be clearly documented and understood to avoid inadvertent issues and
potential data loss.
Encryption. Encryption of data in a cloud environment can be more complex and requires special attention.
Many standards contain requirements for both in-transit and at-rest data. For instance, financial data and
personal health information should always be encrypted.

Public, private and hybrid clouds:
Shared virtualized environment. Public clouds, many using a shared virtualized environment, offer basic
security features. This means proper segmentation and isolation of processing resources should be an area of
focus. To meet your enterprise's security and compliance requirements, be prudent in choosing the proper
cloud environment.
Public cloud providers. While the economics may be attractive, some public cloud providers may not
sufficiently support the types of controls required by enterprises to meet security and compliance
requirements. Be sure to ask a lot of questions.
Prudent security measures. No matter what the cloud model, enterprises should employ segmentation,
firewalls, intrusion protection systems, monitoring, logging, and access controls and data encryption.
Private clouds are a double-edged sword. Private clouds can be hosted on premises or at a service
provider's facility. As with traditional environments, security design and controls are critical. When using a
service provider's offering, it's important to select the right cloud that meets your requirements. Just because
it's a private cloud doesn't mean it's inherently secure.
Hybrid clouds. Hybrid clouds can offer enterprises the best of both worlds, enabling them to meet a wide
range of IT and business objectives. Hybrid clouds offer an enterprise the ability to house applications in the
most suitable environment while leveraging the benefits and features of shared and on-premises cloud
environments and the ability to move applications and data between the two.
Chapter 10
Cloud Interoperability and Solution
Interoperability
The concept of interoperability, as it applies to cloud computing, is at its simplest the requirement for the components of
a processing system to work together to achieve their intended result. Components should be replaceable by new or
different components from different providers and continue to work. Components that make up a system consist of the
hardware and software elements required to build and operate it. Typical components required of a cloud system
include:
• Hardware
• Operating systems
• Virtualization
• Networks
• Storage
• Software (application frameworks, middleware, libraries, applications)
• Data security
It should be apparent to most readers that these components make up almost any IT system. You might ask how these components, and therefore the interoperability concerns, differ from those of any other IT system, where all of the components would logically be expected to work together as well.
Organizations must approach the cloud with the understanding that they may have to change providers in the
future. Portability and interoperability must be considered up front as part of the risk management and
security assurance of any cloud program.
Large cloud providers can offer geographic redundancy in the cloud, hopefully enabling high availability with
a single provider. Nonetheless, it's advisable to do basic business continuity planning to help minimize the impact of a worst-case scenario. Companies may suddenly find themselves with an urgent need to switch cloud providers for various reasons, including:
• An unacceptable increase in cost at contract renewal time.
• A provider ceases business operations.
• A provider suddenly closes one or more services being used, without acceptable migration plans.
• An unacceptable decrease in service quality, such as a failure to meet key performance requirements or achieve service level agreements (SLAs).
• A business dispute between cloud customer and provider.
Some simple architectural considerations can help minimize the damage should these kinds of scenarios
occur. However, the means to address these issues depend on the type of cloud service. With Software as a
Service (SaaS), the cloud customer will by definition be substituting new software applications for old ones.
Therefore, the focus is not upon portability of applications, but on preserving or enhancing the security
functionality provided by the legacy application and achieving a successful data migration. With Platform as a
Service (PaaS), the expectation is that some degree of application modification will be necessary to achieve
portability. The focus is minimizing the amount of application rewriting while preserving or enhancing
security controls, along with achieving a successful data migration.
With Infrastructure as a Service (IaaS), the focus and expectation is that both the applications and data should
be able to migrate to and run at a new cloud provider.
Due to a general lack of interoperability standards, and the lack of sufficient market pressure for these
standards, transitioning between cloud providers may be a painful manual process. From a security
perspective, our primary concern is maintaining consistency of security controls while changing
environments.
Recommendations
For All Cloud Solutions:
• Substituting cloud providers is in virtually all cases a negative business transaction for at least one party, which can cause an unexpected negative reaction from the legacy cloud provider. This must be planned for in the contractual process as outlined in Domain 3, in your Business Continuity Program as outlined in Domain 7, and as part of your overall governance in Domain 2.
• Understand the size of the data sets hosted at a cloud provider. The sheer size of the data may cause an interruption of service during a transition, or a longer transition period than anticipated. Many customers have found that using a courier to ship hard drives is faster than electronic transmission for large data sets (see the sketch after this list).
• Document the security architecture and configuration of the individual component security controls so they can be used to support internal audits, as well as to facilitate migration to new providers.
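The courier point is simple arithmetic. A back-of-the-envelope sketch in Python (the data-set size, link speed, and shipping time are illustrative assumptions, not figures from any provider):

    # Rough comparison of electronic transfer vs. shipping drives.
    # All figures are illustrative assumptions.
    DATA_SET_TB = 50        # size of the hosted data set, in terabytes
    LINK_MBPS = 500         # sustained link throughput, in megabits per second
    SHIPPING_HOURS = 24     # overnight courier, door to door

    data_bits = DATA_SET_TB * 8 * 10**12              # terabytes -> bits
    transfer_hours = data_bits / (LINK_MBPS * 10**6) / 3600

    print(f"Electronic transfer: {transfer_hours:,.0f} hours")   # ~222 hours
    print(f"Courier shipping:    {SHIPPING_HOURS} hours")

With these assumptions the electronic transfer takes over nine days, so shipping the drives wins by a wide margin.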
For IaaS Cloud Solutions:
• Understand how virtual machine images can be captured and ported to new cloud providers, who may use different virtualization technologies.
• Identify and eliminate (or at least document) any provider-specific extensions to the virtual machine environment.
• Understand what practices are in place to make sure appropriate deprovisioning of VM images occurs after an application is ported from the cloud provider.
• Understand the practices used for decommissioning of disks and storage devices.
• Understand hardware/platform-based dependencies that need to be identified before migration of the application/data.
• Ask for access to system logs, traces, and access and billing records from the legacy cloud provider.
• Identify options to resume or extend service with the legacy cloud provider in part or in whole if the new service proves to be inferior.
• Determine if there are any management-level functions, interfaces, or APIs being used that are incompatible with or unimplemented by the new provider.
For PaaS Cloud Solutions:
• When possible, use platform components with a standard syntax, open APIs, and open standards.
• Understand what tools are available for secure data transfer, backup, and restore.
• Understand and document application components and modules specific to the PaaS provider, and develop application architecture with layers of abstraction to minimize direct access to proprietary modules.
• Understand how base services like monitoring, logging, and auditing would transfer over to a new vendor.
• Understand control functions provided by the legacy cloud provider and how they would translate to the new provider.
• When migrating to a new platform, understand the impacts on performance and availability of the application, and how these impacts will be measured.
• Understand how testing will be completed prior to and after migration, to verify that the services or applications are operating correctly. Ensure that both provider and user responsibilities for testing are well known and documented.
For SaaS Solutions:
• Perform regular data extractions and backups to a format that is usable without the SaaS provider.
• Understand whether metadata can be preserved and migrated.
• Understand that any custom tools being implemented will have to be redeveloped, or the new vendor must provide those tools.
• Assure consistency of control effectiveness across old and new providers.
• Assure the possibility of migration of backups and other copies of logs, access records, and any other pertinent information which may be required for legal and compliance reasons.
• Understand management, monitoring, and reporting interfaces and their integration between environments.
• Determine whether there is a provision for the new vendor to test and evaluate the applications before migration.
Choosing a Cloud Firewall
A "cloud firewall" is a network firewall appliance which is specifically designed to work with cloud-based
security solutions. The Edge2WAN cloud firewall includes built-in link bonding technology specifically
optimized for cloud-security solutions. When combined with our cloud-based security clients, Cloud5ecure,
our customers will receive complete protection against Internet threats like web-based malware, spyware, and
viruses. The Edge2WAN cloud firewall's purpose is to reduce security risks and ensure the highest possible uptime for these applications, which includes greater network redundancy as well as comprehensive
network/endpoint security.
Cloud firewalls can improve any organization's overall security and business continuity; we've done it
for these customers; we can do it for you.
Cloud Firewall Security
The EdgeXOS cloud firewall platform provides next-generation network security by incorporating both appliance-level and endpoint protection, while utilizing the cloud to ensure instant, real-time security updates in order to prevent zero-hour security threats.
What this means is that the EdgeXOS cloud firewall platform can react faster and ensure better defenses than
other non-cloud enabled solutions. XRoads Networks has utilized its partnership with Webroot to provide
centralized and global web security via the EdgeXOS appliances and enable endpoint protection through a
simple download and installation process managed through the EdgeXOS platform which can ensure that all
users are connecting securely to the network.
No other solution on the market surpasses the security capabilities offered via the EdgeXOS cloud
firewall platform.
Hardened Network Firewall
The EdgeXOS platform has a built-in network-layer firewall which can be used to create granular security
policies to allow, deny, and log network traffic based on various administrative criteria. The packet inspection
firewall can quickly determine if specific types of attacks are being made against it and adjust accordingly,
including DoS attacks, SYN floods, fragmented packets, redirects, source route attacks, and reverse path
filtering.

The firewall is easy to configure and is rules based which allows for simple grouping and modification as
needed. Rules can be grouped based on name and applied based on group order. User control is also built into
the firewall which automatically detects all physical addresses on the LAN network for easy administration
and management.
Security Policies
The EdgeXOS platform is a deep inspection firewall that works in conjunction with our packet classifier
technology to secure your network connections. In addition to the rules-based firewall the EdgeXOS can
manage individual users through network access control and authentication.
Network access control utilizes the end-user's MAC/physical address and IP address in order to identify the end-user, and then grants access through a web-based interface or when the administrator specifically allows that user access to the network. The NAC server can be used in conjunction with the Cloud5ecure client in order to enforce endpoint security for all users on the local area network.
The EdgeXOS cloud firewall platform delivers its #1 rated antispyware technology by leveraging partnerships
with leading cloud-security vendors like Webroot, the leader in antispyware detection and prevention. These
cloud-security vendors utilize research systems that scour the entire Web and discover spyware faster than any
other solution available today. These updates are immediately distributed through our solution to end-users,
thus preventing new outbreaks and reducing lost productivity and headaches.

In order to deliver enterprise-class antivirus detection, the EdgeXOS cloud firewall incorporates technology from Webroot, which is a best-of-breed virus scanner, and includes an integrated Host Intrusion Prevention System (HIPS) to detect potential malware before it creates problems on an end-user's system. Behavioral
analysis automatically finds potential threats and can quarantine them before they become a security issue.

Patent-pending technology from XRoads Networks allows these third-party vendors to operate at top speed
and ensure highly redundant connectivity to these critical security services.
Chapter 11
Backup and Recovery of cloud data
In information technology, a backup, or the process of backing up, is the making of copies of data which may be used to restore the original after a data loss event.
Backups have two distinct purposes. The primary purpose is to recover data after its loss, be it by data
deletion or corruption. Data loss is a common experience of computer users: a 2008 survey found that
66% of respondents had lost files on their home PC. The secondary purpose of backups is to recover data
from an earlier time, according to a user-defined data retention policy, typically configured within a backup
application for how long copies of data are required. Though backups popularly represent a simple form of
disaster recovery, and should be part of a disaster recovery plan, by themselves, backups should not alone be
considered disaster recovery. Not all backup systems or backup applications are able to reconstitute a
computer system, or in turn other complex configurations such as a computer cluster, Active Directory servers,
or a database server, by restoring only data from a backup.
Since a backup system contains at least one copy of all data worth saving, the data storage requirements are
considerable. Organizing this storage space and managing the backup process is a complicated undertaking. A
data repository model can be used to provide structure to the storage. In the modern era of computing there
are many different types of data storage devices that are useful for making backups. There are also many
different ways in which these devices can be arranged to provide geographic redundancy, data security, and
portability.
Before data is sent to its storage location, it is selected, extracted, and manipulated. Many different techniques
have been developed to optimize the backup procedure. These include optimizations for dealing with open
files and live data sources as well as compression, encryption, and de-duplication, among others. Many
organizations and individuals try to have confidence that the process is working as expected and work to
define measurements and validation techniques. It is also important to recognize the limitations and human
factors involved in any backup scheme.
Managing the backup process
It is important to understand that backing up is a process. As long as new data is being created and changes
are being made, backups will need to be updated. Individuals and organizations with anything from one
computer to thousands (or even millions) of computer systems all have requirements for protecting data.
While the scale is different, the objectives and limitations are essentially the same. Likewise, those who
perform backups need to know to what extent they were successful, regardless of scale.
Recovery point objective (RPO)
The point in time that the restarted infrastructure will reflect. Essentially, this is the roll-back that will be
experienced as a result of the recovery. The most desirable RPO would be the point just prior to the data loss
event. Making a more recent recovery point achievable requires increasing the frequency of synchronization
between the source data and the backup repository.
Recovery time objective (RTO)
The amount of time elapsed between disaster and restoration of business functions.
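A worked example helps make the two objectives concrete. The sketch below computes the RPO and RTO actually achieved for a hypothetical incident (all timestamps are assumptions):

    from datetime import datetime

    # Assumed timestamps for a hypothetical incident.
    last_backup = datetime(2024, 5, 1, 2, 0)    # nightly sync finished at 02:00
    disaster    = datetime(2024, 5, 1, 14, 30)  # data loss event at 14:30
    restored    = datetime(2024, 5, 1, 18, 30)  # service back online at 18:30

    rpo_achieved = disaster - last_backup   # changes in this window are lost
    rto_achieved = restored - disaster      # elapsed downtime

    print(f"Data loss window (RPO achieved): {rpo_achieved}")  # 12:30:00
    print(f"Downtime (RTO achieved):         {rto_achieved}")  # 4:00:00

Raising the synchronization frequency shrinks the first number; streamlining the restore process shrinks the second.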
Data security
In addition to preserving access to data for its owners, data must be restricted from unauthorized access.
Backups must be performed in a manner that does not compromise the original owner's undertaking. This can
be achieved with data encryption and proper media handling policies.
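As one hedged illustration of protecting backup data at rest, the sketch below encrypts a backup archive with a symmetric key using the third-party Python cryptography package; the file names are placeholders, and key management (keeping the key separate from the backup media) is the hard part that this sketch deliberately omits:

    from cryptography.fernet import Fernet  # pip install cryptography

    # Generate the key once and store it somewhere safer than the backup itself.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Encrypt the backup archive before it leaves the machine.
    with open("backup.tar", "rb") as f:           # placeholder file name
        ciphertext = cipher.encrypt(f.read())
    with open("backup.tar.enc", "wb") as f:
        f.write(ciphertext)

    # Restoring reverses the process with the same key.
    with open("backup.tar.enc", "rb") as f:
        plaintext = cipher.decrypt(f.read())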
Limitations
An effective backup scheme will take into consideration the limitations of the situation.
Backup window
The period of time when backups are permitted to run on a system is called the backup window. This is
typically the time when the system sees the least usage and the backup process will have the least amount of
interference with normal operations. The backup window is usually planned with users' convenience in mind.
If a backup extends past the defined backup window, a decision is made whether it is more beneficial to abort
the backup or to lengthen the backup window.
Performance impact
All backup schemes have some performance impact on the system being backed up. For example, for the
period of time that a computer system is being backed up, the hard drive is busy reading files for the purpose
of backing up, and its full bandwidth is no longer available for other tasks. Such impacts should be analyzed.
Costs of hardware, software, labor
All types of storage media have a finite capacity with a real cost. Matching
the correct amount of storage capacity (over time) with the backup needs is an important part of the design of
a backup scheme. Any backup scheme has some labor requirement, but complicated schemes have
considerably higher labor requirements. The cost of commercial backup software can also be considerable.
Network bandwidth
Distributed backup systems can be affected by limited network bandwidth.
Backup Implementation
Meeting the defined objectives in the face of the above limitations can be a difficult task. The tools and
concepts below can make that task more achievable.
Scheduling
Using a job scheduler can greatly improve the reliability and consistency of backups by removing part of the
human element. Many backup software packages include this functionality.
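Dedicated backup packages ship their own schedulers, but the idea reduces to "wake up at the appointed time and run the job." A minimal stand-alone sketch, assuming a nightly 02:00 run and a placeholder tar command:

    import subprocess
    import time
    from datetime import datetime, timedelta

    def next_run(hour=2):
        """Return the next occurrence of `hour`:00, tonight or tomorrow."""
        now = datetime.now()
        run = now.replace(hour=hour, minute=0, second=0, microsecond=0)
        return run if run > now else run + timedelta(days=1)

    while True:
        time.sleep((next_run() - datetime.now()).total_seconds())
        # Placeholder backup command; substitute your own tool and paths.
        subprocess.run(["tar", "-czf", "/backups/nightly.tar.gz", "/data"])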
Authentication
Over the course of regular operations, the user accounts and/or system agents that perform the backups need
to be authenticated at some level. The power to copy all data off of or onto a system requires unrestricted
access. Using an authentication mechanism is a good way to prevent the backup scheme from being used for
unauthorized activity.
Chain of trust
Removable storage media are physical items and must only be handled by trusted individuals. Establishing a
chain of trusted individuals (and vendors) is critical to defining the security of the data.
Measuring the process
To ensure that the backup scheme is working as expected, the process needs to include monitoring key factors
and maintaining historical data.
Backup validation
Also known as "backup success validation", this is the process by which owners of data can get information about how their data was backed up. The same process is also used to prove compliance to regulatory bodies outside of the organization; for example, an insurance company might be required under HIPAA to show "proof" that their patient data are meeting records retention requirements. Disaster, data complexity, data
value and increasing dependence upon ever-growing volumes of data all contribute to the anxiety around and
dependence upon successful backups to ensure business continuity. For that reason, many organizations rely
on third-party or "independent" solutions to test, validate, and optimize their backup operations (backup
reporting).
Reporting
In larger configurations, reports are useful for monitoring media usage, device status, errors, vault
coordination and other information about the backup process.
Logging
In addition to the history of computer generated reports, activity and change logs are useful for monitoring
backup system events.
Validation
Many backup programs make use of checksums or hashes to validate that the data was accurately copied.
These offer several advantages. First, they allow data integrity to be verified without reference to the original
file: if the file as stored on the backup medium has the same checksum as the saved value, then it is very
probably correct. Second, some backup programs can use checksums to avoid making redundant copies of
files, to improve backup speed. This is particularly useful for the de-duplication process.
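A minimal sketch of checksum-based validation, using only the Python standard library (the paths are placeholders):

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Stream a file through SHA-256 without loading it all into memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Compare the checksum recorded at backup time with the backup copy.
    original = sha256_of("/data/payroll.db")       # placeholder paths
    copy     = sha256_of("/backups/payroll.db")
    print("valid" if original == copy else "CORRUPT")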
Monitored backup
Backup processes are monitored by a third party monitoring center. This center alerts users to any errors that
occur during automated backups. Monitored backup requires software capable of pinging the monitoring
center's servers in the case of errors. Some monitoring services also allow collection of historical meta-data,
which can be used for Storage Resource Management purposes like projection of data growth, locating
redundant primary storage capacity and reclaimable backup capacity.
Selection and extraction of data
A successful backup job starts with selecting and extracting coherent units of data. Most data on modern
computer systems is stored in discrete units, known as files. These files are organized into file systems. Files
that are actively being updated can be thought of as "live" and present a challenge to back up. It is also useful
to save metadata that describes the computer or the file system being backed up.
Deciding what to back up at any given time is a harder process than it seems. By backing up too much
redundant data, the data repository will fill up too quickly. Backing up an insufficient amount of data can
eventually lead to the loss of critical information.
Copying files
Making copies of files is the simplest and most common way to perform a backup. A means to perform this
basic function is included in all backup software and all operating systems.
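In Python, for example, a copy that also preserves timestamps and permission bits is a single call (paths are placeholders; copytree's dirs_exist_ok flag requires Python 3.8 or later):

    import shutil

    # copy2 copies file contents plus metadata such as timestamps.
    shutil.copy2("/data/report.docx", "/backups/report.docx")

    # copytree does the same recursively for a whole directory tree.
    shutil.copytree("/data/projects", "/backups/projects", dirs_exist_ok=True)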
Partial file copying
Instead of copying whole files, one can limit the backup to only the blocks or bytes within a file that have
changed in a given period of time. This technique can use substantially less storage space on the backup
medium, but requires a high level of sophistication to reconstruct files in a restore situation. Some
implementations require integration with the source file system.
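A toy sketch of the block-level idea: hash the file in fixed-size blocks, then back up only the blocks whose hashes changed since the previous run. A real implementation must also handle block offsets, deletions, and reassembly on restore; the block size here is an arbitrary assumption:

    import hashlib

    BLOCK_SIZE = 4096  # bytes per block; an arbitrary assumption

    def block_hashes(path):
        """Return the per-block hash list for a file."""
        with open(path, "rb") as f:
            return [hashlib.sha256(b).hexdigest()
                    for b in iter(lambda: f.read(BLOCK_SIZE), b"")]

    def changed_blocks(path, previous_hashes):
        """Yield (index, data) for each block that differs from the last run."""
        with open(path, "rb") as f:
            for i, block in enumerate(iter(lambda: f.read(BLOCK_SIZE), b"")):
                if (i >= len(previous_hashes)
                        or hashlib.sha256(block).hexdigest() != previous_hashes[i]):
                    yield i, block

    # block_hashes(...) is saved after each run; at the next backup, only the
    # blocks yielded by changed_blocks(...) are written to the medium.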
File Systems
File system dump
Instead of copying files within a file system, a copy of the whole file system itself can be made. This is also known as a raw partition backup and is related to disk imaging. The process usually
involves unmounting the filesystem and running a program like dd (Unix). Because the disk is read
sequentially and with large buffers, this type of backup can be much faster than reading every file normally,
especially when the filesystem contains many small files, is highly fragmented, or is nearly full. But because
this method also reads the free disk blocks that contain no useful data, this method can also be slower than
conventional reading, especially when the filesystem is nearly empty. Some filesystems, such as XFS, provide
a "dump" utility that reads the disk sequentially for high performance while skipping unused sections. The
corresponding restore utility can selectively restore individual files or the entire volume at the operator's
choice.
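On real systems you would normally reach for dd or a filesystem-specific dump utility, but the sequential-read idea can be illustrated in a few lines (the device and image paths are placeholder assumptions, and the filesystem should be unmounted first):

    # Read a raw block device sequentially with large buffers.
    SRC = "/dev/sdb1"            # placeholder device; unmount it first
    DST = "/backups/sdb1.img"    # raw image file
    BUF = 4 * 1024 * 1024        # 4 MiB reads keep the disk streaming

    with open(SRC, "rb") as src, open(DST, "wb") as dst:
        while True:
            chunk = src.read(BUF)
            if not chunk:
                break
            dst.write(chunk)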
Identification of changes
Some filesystems have an archive bit for each file that says it was recently changed. Some backup software
looks at the date of the file and compares it with the last backup to determine whether the file was changed.
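A minimal modification-time check along those lines (the cutoff timestamp would come from the last successful backup; the path is a placeholder):

    import os

    LAST_BACKUP = 1714500000.0   # epoch seconds of the last backup; an assumption

    def files_changed_since(root, cutoff):
        """Walk a tree and yield files modified after the cutoff timestamp."""
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) > cutoff:
                    yield path

    for path in files_changed_since("/data", LAST_BACKUP):
        print("needs backup:", path)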
Versioning file system
A versioning file system keeps track of all changes to a file and makes those changes accessible to the user.
Generally this gives access to any previous version, all the way back to the file's creation time. An example of
this is the Wayback versioning file system for Linux.
Live data
If a computer system is in use while it is being backed up, the possibility of files being open for reading or
writing is real. If a file is open, the contents on disk may not correctly represent what the owner of the file
intends. This is especially true for database files of all kinds. The term fuzzy backup can be used to describe a
backup of live data that looks like it ran correctly, but does not represent the state of the data at any single
point in time. This is because the data being backed up changed in the period of time between when the
backup started and when it finished. For databases in particular, fuzzy backups are worthless.
Snapshot backup
A snapshot is an instantaneous function of some storage systems that presents a copy of the file system as if it
were frozen at a specific point in time, often by a copy-on-write mechanism. An effective way to back up live
data is to temporarily quiesce it (e.g. close all files), take a snapshot, and then resume live operations. At this
point the snapshot can be backed up through normal methods. While a snapshot is very handy for viewing a
filesystem as it was at a different point in time, it is hardly an effective backup mechanism by itself.
Open file backup
Many backup software packages feature the ability to handle open files in backup operations. Some simply
check for openness and try again later. File locking is useful for regulating access to open files.
When attempting to understand the logistics of backing up open files, one must consider that the backup
process could take several minutes to back up a large file such as a database. In order to back up a file that is
in use, it is vital that the entire backup represent a single-moment snapshot of the file, rather than a simple
copy of a read-through. This represents a challenge when backing up a file that is constantly changing. Either
the database file must be locked to prevent changes, or a method must be implemented to ensure that the
original snapshot is preserved long enough to be copied, all while changes are being preserved. Backing up a file while it is being changed, so that the first part of the backup represents data from before the changes and later parts represent data from after them, results in a corrupted file that is unusable, as most large files contain internal references between their various parts that must remain consistent throughout the file.
Cold database backup
During a cold backup, the database is closed or locked and not available to users. The datafiles do not change
during the backup process so the database is in a consistent state when it is returned to normal operation.
Hot database backup
Some database management systems offer a means to generate a backup image of the database while it is
online and usable ("hot"). This usually includes an inconsistent image of the data files plus a log of changes
made while the procedure is running. Upon a restore, the changes in the log files are reapplied to bring the
copy of the database up-to-date (the point in time at which the initial hot backup ended).
Metadata
Not all information stored on the computer is stored in files. Accurately recovering a complete system from
scratch requires keeping track of this non-file data too.
System description
System specifications are needed to procure an exact replacement after a disaster.
Boot sector
The boot sector can sometimes be recreated more easily than saving it. Still, it usually isn't a normal
file and the system won't boot without it.
Partition layout
The layout of the original disk, as well as partition tables and file system settings, is needed to
properly recreate the original system.
File metadata
Each file's permissions, owner, group, ACLs, and any other metadata need to be backed up for a restore to properly recreate the original environment (a capture sketch follows this list).
System metadata
Different operating systems have different ways of storing configuration information. Microsoft
Windows keeps a registry of system information that is more difficult to restore than a typical file.
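File metadata, at least, is straightforward to capture programmatically. A sketch that records ownership, permissions, and timestamps alongside the backup (ACLs and OS-specific attributes require platform tools and are omitted here):

    import json
    import os
    import stat

    def file_metadata(path):
        """Collect basic POSIX metadata for one file."""
        st = os.stat(path)
        return {
            "path": path,
            "mode": stat.filemode(st.st_mode),  # e.g. '-rw-r--r--'
            "uid": st.st_uid,
            "gid": st.st_gid,
            "size": st.st_size,
            "mtime": st.st_mtime,
        }

    with open("metadata.json", "w") as out:
        json.dump(file_metadata("/data/report.docx"), out, indent=2)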
Benefits of Cloud Backup and Disaster Recovery
There are key reasons why small and medium-sized businesses should implement a comprehensive data backup and disaster recovery process. With a best-practice, two-tier backup system, you have a local backup for immediate restores and data sharing, combined with a secondary backup in an offsite location, for which we recommend using a cloud-based solution.
Many companies are already embracing cloud data backup, and for good reason, as there are many technical,
economic and security benefits. The advent of cloud storage has created an affordable multi-tier BDR option
to backup critical data for retrieval after a data loss event or other disaster.
Here are a few of the key benefits companies can gain with backup disaster recovery in the cloud:
• Improved data protection: Cloud backup assures that your data is recoverable and protected. With industry-leading encryption and security practices, cloud-based data is highly secure. Don't buy into the hype about the cloud being less secure; it's just not true.
• Low Total Cost of Ownership (TCO): Zero capital expense and predictable subscription pricing keep total cost of ownership low.
• Ease of use: Cloud BDR solutions tend to have easy-to-use web-based management and user interfaces. At most, cloud solutions require a lightweight agent on devices used for data sync to the cloud, so maintenance is minimal.
• Fast implementation: Rapid deployment in a matter of minutes with set-up and configuration wizards.
• Leverage existing IT: Most cloud solutions interoperate with existing storage devices, applications and operating systems.
• Lower energy consumption: No need for larger server rooms, saving power consumption and energy bills. If you have a Green IT initiative, this is a great way to lower your carbon footprint by leveraging a proven cloud vendor.
• Agility and scalability: Because it's cloud-based, increasing or decreasing your storage capacity should be painless. Try to avoid tiered pricing as you increase data volumes.
Chapter 12
Server Performance & Monitoring
Server Performance and Activity Monitoring
The goal of monitoring databases is to assess how a server is performing. Effective monitoring involves
taking periodic snapshots of current performance to isolate processes that are causing problems, and gathering
data continuously over time to track performance trends. Microsoft SQL Server and the Microsoft Windows
operating system provide utilities that let you view the current condition of the database and to track
performance as conditions change.
Performance Monitoring Differs from Traditional Server Monitoring
Since we have various types of IT components in a cloud, traditional performance management, which focuses on specific components, will not work for the cloud: such tools are not well equipped to provide a holistic view of the cloud environment. Rather than independent management of physical and virtual infrastructure elements, the focus should be on how they perform together to deliver the business service to the user.

Service Level Agreements (SLAs) are very important in a cloud environment. Since the customer pays for the services and infrastructure he uses, he needs to be assured of a level of service at any time. Performance monitoring of a cloud should therefore monitor the capability of the cloud's components to deliver the expected service. To a large extent, clouds are based on virtualized resources. Most virtualization vendors provide management and monitoring solutions which collect a robust set of resource utilization statistics; even though these provide a good view of individual components, they fail to provide a complete picture of the performance of the entire cloud environment. For example, with VMware's vCenter/vSphere we can get basic resource utilization information for ESX/ESXi hosts and virtual machines, but not visibility into the performance of the business applications hosted on these platforms. Although it is possible to approximate the performance of physical infrastructure from how its resources are utilized, that is not the case with virtual components, due to the shared and dynamic nature of resource allocation. The monitoring solution should be capable of dynamically identifying the VMs in which an application is currently running and then collecting the parameters. Also, depending on the agreement between the service provider and the customer, resources will be dynamically allocated to applications. Thus a virtualized/cloud environment requires a specialized monitoring solution which can provide an effective, convenient and holistic view of the entire environment.

In a nutshell, we need a monitoring model for the cloud which can provide a view of the health of the entire cloud in delivering a service. It should help the provider assess whether customers' demands can be met with the current resources and performance, and it should also give a view of the individual applications hosted on the cloud.
Cloud Performance Monitoring
When we consider monitoring the performance of a cloud, we can broadly classify it into two categories: monitoring from the service provider's view, and monitoring from the cloud consumer's view.
Infrastructure performance - A cloud service provider is interested in this kind of report, which covers the performance of the various infrastructure components in the cloud, such as virtual machines, storage and network. Since individual components' performance figures fail to provide an accurate view of the overall cloud performance, a new approach called Infrastructure Response Time is being researched to get a more accurate picture of the performance of a virtualized/cloud environment. Infrastructure Response Time (IRT) is defined as the time it takes for any workload (application) to place a request for work on the virtual
environment and for the virtual environment to complete the request (from the guest to spindle and back
again). The request could be a simple data exchange between two VMs or a complex request which involves a database transaction and writes into a storage array.
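As a hedged illustration of measuring a response time of this kind, the sketch below times a request end to end and summarizes the samples; do_request is a placeholder for whatever transaction the workload actually issues:

    import statistics
    import time

    def do_request():
        """Placeholder for the real workload request (e.g. a database write)."""
        time.sleep(0.01)  # simulate 10 ms of work

    # Time each request end to end, the way an IRT probe would.
    samples = []
    for _ in range(100):
        start = time.perf_counter()
        do_request()
        samples.append(time.perf_counter() - start)

    print(f"mean IRT:        {statistics.mean(samples) * 1000:.1f} ms")
    print(f"95th percentile: {sorted(samples)[94] * 1000:.1f} ms")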
Parameters of Interest for the Cloud Service Provider
A cloud service provider needs to get an exhaustive view of the health of the entire cloud to assess the situation. A lot of decision making, and the setting of SLAs, is driven by cloud performance.
Cloud service providers would be interested in the details below to assess the actual performance of the cloud.
A. Resource utilization details - Just like in any other performance monitoring, the utilization parameters of the physical servers/infrastructure are an important factor in cloud monitoring, as these servers make up the cloud.
B. Infrastructure Response Time (IRT) - As already discussed, IRT gives a clear picture of the overall performance of the cloud, as it checks the time taken for each transaction to complete. IRT is crucial because it has an impact on application performance and availability, which in turn affect the SLAs.
C. Virtualization metrics - Just as for the physical machines, we need to collect resource utilization data from the virtual machines. This provides a picture of how much of each VM is being utilized, and the data helps in analysing the resource utilization by applications and in deciding on scaling requirements. Other important parameters related to virtual machines (the number of VMs used by an application, the time taken to create a new VM, the time taken to move an app from one VM to another, and the time taken to allocate additional resources to a VM) also matter, as they contribute to IRT and to the performance of the applications hosted in the cloud.
D. Transaction metrics - Transaction metrics can be considered a derivative of IRT. Metrics like the success percentage of transactions, the count of transactions, etc. for an application give a clearer picture of the performance of an application in the cloud at a particular instant. An ideal monitoring solution for the cloud should be capable of providing all of the above details.

Reporting and Collecting Performance Data
Reporting
The following reports would help the service provider to understand the cloud usage and its performance.
• Multi-dimensional reports - different levels of report for different users, such as overall infrastructure usage and usage reports of specific resources/datacenters.
• Application-level reports, such as reports showing the infrastructure usage by each application.
• Infrastructure reports, such as the performance of the resources in the cloud.
• Busy-hour/peak usage reports - help to get a clear view of the usage of the application for better planning of resources and SLAs.
• What-if analysis.
• Trend analysis.
Collecting
• The monitoring application should collect performance parameters like CPU utilization, memory utilization, etc. from physical as well as virtual hosts.
• The solution should be capable of collecting the response time per transaction and per application. This could be done either using an agent residing in the VM or an external monitoring agent. The agent needs to track and capture details of each transaction happening within the applications hosted in the VM.
• Collect virtualization metrics from the underlying virtualization platform, and ensure that the solution works well with different virtualization platforms.
• Derive transaction metrics/data from the collected response-time data.
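As a toy illustration of that last point, transaction metrics such as success percentage and average response time fall out of the collected records directly (the records here are made-up placeholders):

    # Each record: (application, response_time_seconds, succeeded)
    records = [
        ("billing", 0.42, True),
        ("billing", 0.51, True),
        ("billing", 2.90, False),   # timed out
        ("portal", 0.12, True),
    ]

    apps = {}
    for app, rt, ok in records:
        stats = apps.setdefault(app, {"count": 0, "ok": 0, "rt": 0.0})
        stats["count"] += 1
        stats["ok"] += int(ok)
        stats["rt"] += rt

    for app, s in apps.items():
        print(f"{app}: {s['count']} txns, "
              f"{100 * s['ok'] / s['count']:.0f}% success, "
              f"avg {s['rt'] / s['count']:.2f}s")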
To perform monitoring tasks with Windows tools
Use System Monitor to monitor the utilization of system resources. Collect and view real-time performance
data in the form of counters, for server resources such as processor and memory use, and for many Microsoft
SQL Server resources such as locks and transactions.
To start System Monitor in Windows
On the Start menu, point to Run, type perfmon in the Run dialog box, and then click OK.
Monitoring Server Performance and Activity
Monitoring a server isn't something you should do haphazardly. You need to have a clear plan: a set of goals that you hope to achieve. Let's take a look at the reasons you may want to monitor a server and at the tools you can use to do this.

Figure 3-10: Use the Open dialog box to open the saved event log in a new view.
Why Monitor Your Server
Troubleshooting server performance problems is a key reason for monitoring. For example, users may be
having problems connecting to the server and you may want to monitor the server to troubleshoot these
problems. Here, your goal would be to track down the problem using the available monitoring resources and
then to resolve it.
Another common reason for wanting to monitor a server is to improve server performance. You do this by
improving disk I/O, reducing CPU usage, and cutting down on the network traffic load on the server.
Unfortunately, there are often trade-offs to be made when it comes to resource usage. For example, as the
number of users accessing a server grows, you may not be able to reduce the network traffic load, but you
may be able to improve server performance through load balancing or by distributing key data files on
separate drives.
Getting Ready to Monitor
Before you start monitoring a server, you may want to establish baseline performance metrics for your server.
To do this, you measure server performance at various times and under different load conditions. You can
then compare the baseline performance with subsequent performance to determine how the server is
performing. Performance metrics that are well above the baseline measurements may indicate areas where the
server needs to be optimized or reconfigured.
After you establish the baseline metrics, you should formulate a monitoring plan. A comprehensive
monitoring plan includes the following steps:
1. Determining which server events should be monitored in order to help you accomplish your goal.
2. Setting filters to reduce the amount of information collected.
3. Configuring monitors and alerts to watch the events.
4. Logging the event data so that it can be analyzed.
5. Analyzing the event data in Performance Monitor.
These procedures are examined later in the chapter. While you should develop a monitoring plan in most
cases, there are times when you may not want to go through all these steps to monitor your server. For
example, you may want to monitor and analyze activity as it happens rather than logging and analyzing the
data later.
Using Performance Monitor
Performance Monitor graphically displays statistics for the set of performance parameters you've selected for
display. These performance parameters are referred to as counters. You can also update the available counters
when you install services and add-ons on the server. For example, when you configure DNS on a server,
Performance Monitor is updated with a set of objects and counters for tracking DNS performance.
Performance Monitor creates a graph depicting the various counters you're tracking. The update interval for
this graph is completely configurable but by default is set to one second. As you'll see when you work with
Performance Monitor, the tracking information is most valuable when you record the information in a log file
and when you configure alerts to send messages when certain events occur or when certain thresholds are
reached, such as when CPU processor time reaches 99 percent. The sections that follow examine key techniques you'll use to work with Performance Monitor.
Choosing Counters to Monitor
The Performance Monitor only displays information for counters you're tracking. Dozens of counters are
available, and as you add services, you'll find there are even more. These counters are organized into
groupings called performance objects. For example, all CPU-related counters are associated with the
Processor object.
To select which counters you want to monitor, complete the following steps:
1. Select the Performance option on the Administrative Tools menu. This displays the Performance
console.
2. Select the System Monitor entry in the left pane, shown in Figure 3-11.

Figure 3-11: Counters are listed in the lower portion of the Performance Monitor window.
3. Performance Monitor has several different viewing modes. Make sure you're in View Chart display
mode by selecting the View Chart button on the Performance Monitor toolbar.
4. To add counters, select the Add button on the Performance Monitor toolbar. This displays the Add
Counters dialog box shown in Figure 3-12. The key fields are
o Use Local Computer Counters Configure performance options for the local computer.
Select Counters From Computer Enter the Universal Naming Convention (UNC) name
of the server you want to work with, such as \\ZETA. Or use the selection list to select
the server from a list of computers you have access to over the network.
o Performance Object Select the type of object you want to work with, such as Processor.
Note: The easiest way to learn what you can track is to explore the objects and counters
available in the Add Counters dialog box. Select an object in the Performance Object field,
click the Explain button, and then scroll through the list of counters for this object.
o All Counters Select all counters for the current object.
Select Counters from List Select one or more counters for the current object. For
example, you could select % Processor Time and % User Time.
All Instances Select all counter instances for monitoring.
Select Instances From List Select one or more counter instances to monitor.

Figure 3-12: Select counters you want to monitor.
Tip: Don't try to chart too many counters or counter instances at once. You'll make the display difficult to read, and you'll use system resources (namely CPU time and memory) that may affect server responsiveness.
5. When you've selected all the necessary options, click Add to add the counters to the chart. Then repeat
this process, as necessary, to add other performance parameters.
6. Click Done when you're finished adding counters.
7. You can delete counters later by clicking on their entry in the lower portion of the Performance
window and then clicking Delete.
Using Performance Logs
You can use performance logs to track the performance of a server and you can replay them later. As you set
out to work with logs, keep in mind that parameters that you track in log files are recorded separately from
parameters that you chart in the Performance window. You can configure log files to update counter data
automatically or manually. With automatic logging, a snapshot of key parameters is recorded at specific time
intervals, such as every 10 seconds. With manual logging, you determine when snapshots are made. Two
types of performance logs are available:
Counter Logs These logs record performance data on the selected counters when a predetermined
update interval has elapsed.
Trace Logs These logs record performance data whenever their related events occur.
Creating and Managing Performance Logging
To create and manage performance logging, complete the following steps:
1. Access the Performance console by selecting the Performance option on the Administrative Tools
menu.
2. Expand the Performance Logs and Alerts node by clicking the plus sign (+) next to it. If you want to
configure a counter log, select Counter Logs. Otherwise, select Trace Logs.
3. As shown in Figure 3-13, you should see a list of current logs in the right pane (if any). A green log
symbol next to the log name indicates logging is active. A red log symbol indicates logging is stopped.
4. You can create a new log by right-clicking in the right pane and selecting New Log Settings from the
shortcut menu. A New Log Settings box appears, asking you to give a name to the new log settings.
Type a descriptive name here before continuing.
Figure 3-13: Current performance logs are listed with summary information
5. To manage an existing log, right-click its entry in the right pane and then select one of the following
options:
o Start To activate logging.
o Stop To halt logging.
o Delete To delete the log.
o Properties To display the log properties dialog box.
Creating Counter Logs
Counter logs record performance data on the selected counters at a specific sample interval. For example, you
could sample performance data for the CPU every 15 minutes. To create a counter log, complete the following
steps:
1. Select Counter Logs in the left pane of the Performance console and then right-click in the right pane
to display the shortcut menu. Choose New Log Settings.
2. In the New Log Settings dialog box, type a name for the log, such as System Performance Monitor or
Processor Status Monitor. Then click OK.
3. In the General tab, click Add to display the Select Counters dialog box. This dialog box is identical to
the Add Counters dialog box shown previously in Figure 3-12.
4. Use the Select Counters dialog box to add counters for logging. Click Close when you're finished.
5. In the Sample Data Every ... field, type in a sample interval and select a time unit in seconds, minutes,
hours, or days. The sample interval specifies when new data is collected. For example, if you sample
every 15 minutes, the log is updated every 15 minutes.
6. Click the Log Files tab, shown in Figure 3-14, and then specify how the log file should be created
using the following fields:
o Location Sets the folder location for the log file.
o File Name Sets the name of the log file.
o End File Names With Sets an automatic suffix for each new file created when you run the
counter log. Logs can have a numeric suffix or a suffix in a specific date format.
o Start Numbering At Sets the first serial number for a log that uses an automatic numeric
suffix.
o Log File Type Sets the type of log file to create. Use Text File - CSV for a log file with comma-separated entries. Use Text File - TSV for a log file with tab-separated entries. Use Binary File to create a binary file that can be read by Performance Monitor. Use Binary
Circular File to create a binary file that overwrites old data with new data when the file reaches
a specified size limit.

Figure 3-14: Configure the log file format and usage
7. Tip: If you plan to use Performance Monitor to analyze or view the log, use one of the binary file formats.
o Comment Sets an optional description of the log, which is displayed in the Comment column.
o Maximum Limit Sets no predefined limit on the size of the log file.
o Limit Of Sets a specific limit in KB on the size of the log file.
8. Click the Schedule tab, shown in Figure 3-15, and then specify when logging should start and stop.
9. You can configure the logging to start manually or automatically at a specific date. Select the
appropriate option and then specify a start date if necessary.
Tip Log files can grow in size very quickly. If you plan to log data for an extended period, be sure to
place the log file on a drive with lots of free space. Remember, the more frequently you update the log
file, the higher the drive space and CPU resource usage on the system.

Figure 3-15: Specify when logging starts and stops
10. The log file can be configured to stop
o Manually
o After a specified period of time, such as seven days
o At a specific date and time
o When the log file is full (if you've set a specific file size limit)
11. Click OK when you've finished setting the logging schedule. The log is then created, and you can
manage it as explained in the "Creating and Managing Performance Logging" section of this chapter.
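Outside the GUI, the counter-log idea (sample selected counters at a fixed interval and append them to a log file) can be sketched in a few lines of Python, assuming the third-party psutil package; this is an illustration of the concept, not a replacement for the Performance console:

    import csv
    import time
    from datetime import datetime

    import psutil  # pip install psutil; a third-party assumption

    INTERVAL_S = 15  # like the "Sample Data Every" setting

    with open("counterlog.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_percent", "memory_percent"])
        while True:
            writer.writerow([
                datetime.now().isoformat(timespec="seconds"),
                psutil.cpu_percent(interval=None),
                psutil.virtual_memory().percent,
            ])
            f.flush()
            time.sleep(INTERVAL_S)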

Creating Trace Logs
Trace logs record performance data whenever events for their source providers occur. A source provider is an
application or operating system service that has traceable events. On domain controllers you'll find two source
providers: the operating system itself and Active Directory:NetLogon. On other servers, the operating system
will probably be the only provider available.
To create a trace log, complete the following steps:
1. Select Trace Logs in the left pane of the Performance console and then right-click in the right pane to
display the shortcut menu. Choose New, and then select New Log Settings.
2. In the New Log Settings dialog box, type a name for the log, such as Logon Trace or Disk I/O Trace.
Then click OK. This opens the dialog box shown in Figure 3-16.
3. If you want to trace operating system events, select the Events Logged By System Provider option
button. As shown in Figure 3-16, you can now select system events to trace.
Caution: Collecting page faults and file detail events puts a heavy load on the server and causes the
log file to grow rapidly. Because of this, you should collect page faults and file details only for a
limited amount of time.
4. If you want to trace another provider, select the Nonsystem Providers option button and then click
Add. This displays the Add Nonsystem Providers dialog box, which you'll use to select the provider to
trace.
5. When you're finished selecting providers and events to trace, click the Log Files tab. You can now
configure the trace file as detailed in step 6 of the section of this chapter entitled "Creating Counter
Logs." The only change is that the log file types are different. With trace logs, you have two log types:
o Sequential Trace File Writes events to the trace log sequentially up to the maximum file size
(if any).
o Circular Trace File Overwrites old data with new data when the file reaches a specified size
limit.

Figure 3-16: Use the General tab to select the provider to use in the trace
6. Choose the Schedule tab and then specify when tracing starts and stops.
7. You can configure the logging to start manually or automatically at a specific date. Select the
appropriate option and then specify a start date, if necessary.
8. You can configure the log file to stop manually, after a specified period of time (such as seven days),
at a specific date and time, or when the log file is full (if you've set a specific file size limit).
9. When you've finished setting the logging schedule, click OK. The log is then created and can be
managed as explained in the section of this chapter entitled "Creating and Managing Performance
Logging."
Replaying Performance Logs
When you're troubleshooting problems, you'll often want to log performance data over an extended period of
time and analyze the data later. To do this, complete the following steps:
1. Configure automatic logging as described in the "Using Performance Logs" section of this chapter.
2. Load the log file in Performance Monitor when you're ready to analyze the data. To do this, select the
View Log File Data button on the Performance Monitor toolbar. This displays the Select Log File
dialog box.
3. Use the Look In selection list to access the log directory, and then select the log you want to view.
Click Open.
4. Counters you've logged are available for charting. Click the Add button on the toolbar and then select
the counters you want to display.
Configuring Alerts for Performance Counters
You can configure alerts to notify you when certain events occur or when certain performance thresholds are
reached. You can send these alerts as network messages and as events that are logged in the application event
log. You can also configure alerts to start applications and performance logs.
To add alerts in Performance Monitor, complete the following steps:
1. Select Alerts in the left pane of the Performance console, and then right-click in the right pane to
display the shortcut menu. Choose New Alert Settings.
2. In the New Alert Settings dialog box, type a name for the alert, such as Processor Alert or Disk I/O
Alert. Then click OK. This opens the dialog box shown in Figure 3-17.
3. In the General tab, type an optional description of the alert. Then click Add to display the Select
Counters To Log dialog box. This dialog box is identical to the Add Counters dialog box shown
previously in Figure 3-12.

Figure 3-17: Use the Alert dialog box to configure counters that trigger alerts.
4. Use the Select Counters to Log dialog box to add counters that trigger the alert. Click Close when
you're finished.
5. In the Counters panel, select the first counter and then use the Alert When the Value Is ... field to set
the occasion when an alert for this counter is triggered. Alerts can be triggered when the counter is
over or under a specific value. Select Over or under, and then set the trigger value. The unit of
measurement is whatever makes sense for the currently selected counter(s). For example, to alert if
processor time is over 98 percent, you would select over and then type 98 as the limit. Repeat this
process to configure other counters you've selected.
6. In the Sample Data Every ... field, type in a sample interval and select a time unit in seconds, minutes,
hours, or days. The sample interval specifies when new data is collected. For example, if you sample
every 10 minutes, the log is updated every 10 minutes.
Caution: Don't sample too frequently. You'll use system resources and may cause the server to seem
unresponsive to user requests.
7. Select the Action tab, shown in Figure 3-18. You can now specify any of the following actions to
happen when an alert is triggered:
o Log An Entry In The Application Event Log Creates log entries for alerts.
o Send A Network Message To Sends a network message to the computer specified.
o Run This Program Sets the complete file path of a program or script to run when the alert
occurs.
o Start Performance Data Log Sets a counter log to start when an alert occurs.
Tip You can run any type of executable file, including batch scripts with the .BAT or .CMD extension
and Windows scripts with the .VBS, .JS, .PL, or .WSC extension. To pass arguments to a script or
application, use the options of the Command Line Arguments panel. Normally, arguments are passed
as individual strings. However, if you select Single Argument String, the arguments are passed in a
comma-separated list within a single string. The Example Command Line Arguments list at the bottom
of the tab shows how the arguments would be passed.
8. Choose the Schedule tab and then specify when alerting starts and stops. For example, you could
configure the alerts to start on a Friday evening and stop on Monday morning. Then each time an alert
occurs during this period, the specified action(s) are executed.

Figure 3-18: Set actions that are executed when the alert occurs
9. You can configure alerts to start manually or automatically at a specific date. Select the appropriate
option and then specify a start date, if necessary.
10. You can configure alerts to stop manually, after a specified period of time, such as seven days, or at a
specific date and time.
11. When you've finished setting the alert schedule, click OK. The alert is then created, and you can
manage it in much the same way that you manage counter and trace logs.
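The alert logic itself is easy to picture in code. A hedged sketch of a threshold alert, again assuming the third-party psutil package, with the alert action reduced to a log entry:

    import logging
    import time

    import psutil  # pip install psutil; a third-party assumption

    logging.basicConfig(filename="alerts.log", level=logging.WARNING,
                        format="%(asctime)s %(message)s")

    THRESHOLD = 98.0    # alert when processor time is over 98 percent
    INTERVAL_S = 600    # sample every 10 minutes, as in the example above

    while True:
        cpu = psutil.cpu_percent(interval=1)
        if cpu > THRESHOLD:
            # Stand-in for "Log An Entry In The Application Event Log",
            # "Send A Network Message To", or "Run This Program".
            logging.warning("CPU alert: %.1f%% exceeds threshold", cpu)
        time.sleep(INTERVAL_S)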