
THE EVER CHANGING WORLD OF

High Performance Computing

Table of Contents

INTRODUCTION: The Ever Changing World of HPC

CHAPTER 1: The World of HPC Keeps Getting Faster and Faster

CHAPTER 2: Managing the Change

CHAPTER 3: Dealing with Big Data: Managing Hadoop and HPC Together

CHAPTER 4: HPC Will Ultimately Transform How Companies Do Business

CHAPTER 5: HPC to be Delivered as a Service

SUMMARY

INTRODUCTION

The Ever Changing World of HPC


High Performance Computing (HPC) clusters have been making supercomputing possible for organizations everywhere for many years now. But the world of HPC is anything but static. It is ever changing, prompting many organizations to upgrade their clusters regularly in order to take advantage of the performance increases made possible by the ever-increasing power of multi-core CPUs, coprocessors, blindingly fast GPUs, high-capacity servers, and higher-bandwidth, lower-latency networks. In this e-book, we'll take a look at some areas of the HPC world where interesting changes are currently underway.

CHAPTER 1

The World of HPC Keeps Getting Faster and Faster


We tend to take our technology for granted, but once in a while it's nice to look back in time to see how far we've come. Let's start with a peek at where the state of HPC was a little over a decade ago.

Back in 2002, at the US Department of Energy's Los Alamos National Laboratory, the IBM Roadrunner was introduced. It was planned to be completed in three stages, eventually reaching a processing speed of 1.7 petaflops (a petaflop is a quadrillion floating-point operations per second).

In 2006, the first phase of this pioneering HPC was completed. By 2008, the Roadrunner had crossed the petaflop threshold and had become the fastest supercomputer in the world. This designation, like the Roadrunner itself, was short-lived, however. In 2013, the supercomputer, by then only the 22nd fastest in the world, was shut down due to excess power consumption. By that point, the Oak Ridge National Laboratory's Titan was the fastest, at 17.6 petaflops.

Recently, a 5 petaflop supercomputer was introduced for the Norwegian oil and gas company Petroleum Geo-Services. It signaled that industry's biggest leap yet into high-speed computing. The company plans to use its 5 petaflop machine to create high-resolution seismic maps and 3D Earth models. Other oil and gas companies are likely to follow suit as analytics becomes a bigger factor for the energy industry.

Today, we're chasing the elusive exascale system: something capable of 1,000 petaflops. It's not clear when dreams of exascale computing first entered the world, but by 2013, serious discussions were taking place regarding when the first exascale supercomputers would be built. The US Department of Energy has its sights set on 2020 as the year in which it would build its first exascale computer. It may not happen

by 2020, but we'll certainly have an exascale computer before too long.

In recent years, enterprises have begun using machine learning (ML) to collect and analyze large amounts of data. Now we are seeing that many are using a subset of machine learning techniques called deep learning (DL) to get at the more esoteric properties hidden in the data.

The computer vision, speech recognition, natural language processing, and audio recognition applications being developed using DL techniques need large amounts of computational power to process large amounts of data. To perform these compute-intensive processes, we are seeing an increase in the use of GPU accelerators, including special-purpose GPU and field-programmable gate array (FPGA) software/hardware co-designs. To ensure that applications benefit from these accelerators wherever they are located in the infrastructure, designs require a flexible pool of resources and a high-speed, low-latency interconnect.

HOW THE UNIVERSE OF HPC IS BEING RESHAPED

High Performance Computing clusters are evolving to keep up with the changing demands being placed on them. The goal of exascale was confirmed earlier this year in research from Intersect360, which showed a drive towards exascale computing and data-centric processing, and predicted an increased focus on improving energy efficiency and I/O performance.

One consequence of these pressures on the HPC universe is greater adoption of performance-enhancing hardware. More than half of the systems installed this year are expected to include accelerators and coprocessors. Additionally, finding a machine with big data capabilities is becoming easier, and this year big data support is expected to become a standard offering with general-purpose servers and storage appliances. While cloud computing has become a go-to technology for most enterprises, Intersect360 predicts that it will continue to be used selectively in HPC environments.

CHAPTER 2

Managing the Change


FROM PALLET TO PRODUCTION WITH HIGH PERFORMANCE CLUSTERS

High performance computing clusters have much in common with each other, but each cluster is unique. The reason for this is simple: adding a cluster to your data center represents a significant investment, and it has to meet your specific requirements for capacity, performance, and ROI. An off-the-shelf solution isn't likely to be a perfect fit.

The bespoke nature of clusters can mean that you need to do a lot of work to make the various components play well together. This often means writing, testing, and debugging a lot of scripts and hand-editing many configuration files, which can be time consuming and error prone. Difficulty in making everything work well together (hardware, accelerators, drivers, operating systems, and other software up the stack) is a major contributor to implementation delays.

Rather than work with a collection of disparate tools that may not interoperate well with each other, consider using an integrated software system that is specifically designed to deploy and manage clusters. These have the advantage of providing consistent workflows, are tested to work together, and are designed to optimize the configuration, monitoring, and management tasks you do every day.

By choosing the right management solution, you will be able to go from pallet to production in less time than you imagined.

HPC CLUSTER MANAGEMENT: MONITOR YOUR CLUSTER TODAY IN ORDER TO PLAN FOR TOMORROW

Most HPC systems will need to be upgraded every two or three years, and most of those upgrades will be necessary due to the demands put on the system as your business grows. How do you know when it's time for an upgrade?

Daily monitoring and data reporting are the answers to that question. Which applications are run most often? How do they rank by CPU run time and memory usage? Who are the heaviest users, and which resources do their jobs consume? What is the throughput? What is the cost allocation of compute and storage by users and projects?
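
To make those questions concrete, here is a minimal reporting sketch in Python, assuming a Slurm-based cluster where the standard sacct accounting tool is available; the start date and field choices are illustrative. A management solution like Bright Cluster Manager gathers this kind of data for you continuously, but the sketch shows what is being measured.

import subprocess
from collections import defaultdict

# Ask Slurm's accounting database for the user and raw CPU time of
# every job since an illustrative start date ("|"-separated output).
out = subprocess.run(
    ["sacct", "--allusers", "--starttime", "2017-04-01",
     "--format=User,JobName,CPUTimeRAW", "--parsable2", "--noheader"],
    capture_output=True, text=True, check=True,
).stdout

cpu_seconds = defaultdict(int)
for line in out.splitlines():
    user, _job, raw = line.split("|")[:3]
    if user:  # skip job steps, which carry an empty user field
        cpu_seconds[user] += int(raw or 0)

# Report the heaviest users first.
for user, secs in sorted(cpu_seconds.items(), key=lambda kv: -kv[1]):
    print(f"{user}: {secs / 3600:.1f} CPU-hours")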

The data may vary by location, as clusters, end users, and data centers for larger organizations tend to be distributed around the world. Local issues such as energy costs can also impact system operating costs. Planning early for the investment of HPC system upgrades is definitely recommended, as procuring the upgrades may come with a long lead time.

Gathering all of this information will help you make better decisions and plan your cluster's growth. To make that task easier, we recommend using a comprehensive management solution like Bright Cluster Manager. It was designed from the ground up to monitor and manage clustered environments, and take the drudgery out of the job.

USING A SINGLE GUI FOR ALL YOUR CLUSTER MANAGEMENT NEEDS

A well-designed Graphical User Interface (GUI) can go a long way towards making it easier to maintain an HPC cluster. Traditional command line interfaces, while powerful, can have a steeper learning curve, which can be a problem when training new employees. They also make it harder for untrained staff to step in and replace staff members who have left the organization, or are simply taking a sick day. Even seasoned systems administrators often prefer using a GUI management interface, finding it an intuitive and time-saving way to get their job done.

Having said that, not all GUIs are created equal. A well-designed interface is intuitive yet powerful, aiding operators as they carry out routine tasks. A poorly designed GUI, on the other hand, can send sysadmins scrambling for the command line interface. When evaluating alternatives, see if you can discover a workflow that accomplishes what you need on your own. Is it easy to find and configure objects in the clusters? How quickly can you build a view that lets you see what you want to monitor day to day? While it's true that every GUI requires a little training and practice in order to use it effectively, make sure you're comfortable with exploring the environment, even before learning the right way to do things. You'll find that starting with something that makes sense to you intuitively will make it easier for you to learn advanced capabilities down the road.

Another factor to consider is how comprehensive the management interface is. There's a real advantage to having a single GUI that meets all of your cluster management needs, including monitoring, automation, and more. With a well-designed interface, you will be able to move effortlessly between the various systems in the cluster. You'll be switching from monitoring hardware, to setting HPC parameters, to adding users to a Hadoop cluster without missing a beat. What's more, the ease of use afforded by a well-designed cluster management GUI makes it easier to delegate some of the management tasks to staff members who aren't as intimately familiar with the ins and outs of clustered environments, which can result in reduced training costs.

DEALING WITH VIRTUALIZATION

Another area of focus in HPC has been to segregate the physical infrastructure from the applications by way of virtualization. More and more users want to combine HPC workloads with big data analytics workloads on the same infrastructure. They also want the flexibility of running workloads on bare metal, in virtual machines, or, most recently, in containerized environments. They are also looking for the flexibility to run on premises and in the public cloud.

For example, OpenStack cloud software provides a common, open source platform that can be used to deploy a private cloud. Many HPC users are looking to run an HPC workload atop or alongside an OpenStack cloud. In the past, users needed a dedicated HPC cluster, but now the ability to choose either option creates flexibility. Virtualizing the infrastructure using an OpenStack private cloud allows administrators to be far more flexible in allocating resources.

However, OpenStack can be difficult to configure and manage. In addition, virtualization comes with a performance penalty, typically less than 10 percent. Some users chalk that up to the price one must pay, while others consider it a barrier to adoption. Also, there is a perception that some hardware technologies are not fully supported by OpenStack, but the industry is definitely making huge progress. For example, InfiniBand and accelerators can now be used through virtualization.

One way to reap the advantages of virtualization is by using Bright OpenStack, which allows users to deploy an OpenStack environment from bare metal servers and manage that environment throughout its entire life cycle. Administrators can manage different OpenStack users and tenants, as well as multiple instances of OpenStack, their underlying server infrastructure, and even Hadoop and HPC clusters within the OpenStack clouds, all from a single-pane-of-glass user interface.
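
As a sketch of what that flexibility looks like in practice, the following Python snippet uses the standard openstacksdk library to boot a virtual compute node on a private cloud. The cloud name, image, flavor, and network names are illustrative assumptions, not values from any particular deployment.

import openstack

# Connect using credentials defined for a hypothetical "private-hpc"
# cloud entry in clouds.yaml.
conn = openstack.connect(cloud="private-hpc")

# Look up an illustrative image, flavor, and network by name.
image = conn.compute.find_image("centos-7-hpc")
flavor = conn.compute.find_flavor("m1.xlarge")
network = conn.network.find_network("cluster-net")

# Boot a virtual compute node on the private cloud.
server = conn.compute.create_server(
    name="vnode-001",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the node is ACTIVE, then print its addresses.
server = conn.compute.wait_for_server(server)
print(server.name, server.addresses)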

CHAPTER 3

Dealing with Big Data: Managing Hadoop and HPC Together


High Performance Computing is designed to deliver massive amounts of compute power at an affordable cost. Hadoop is designed to analyze huge amounts of data at an affordable cost. Together, HPC and Hadoop can bring capabilities once reserved for the largest, most affluent of enterprises to medium-sized businesses everywhere.

HPC clusters and Hadoop clusters share much in common. They are typically composed of rack-mounted servers interconnected through a high-speed communications fabric, and provisioned with software that lets the collection of servers act as one. A key difference is that HPC clusters tend to be connected with a high-speed, low-latency fabric like InfiniBand (IB), while Hadoop clusters typically use Ethernet.

Another difference is storage. Servers destined for use in Hadoop clusters tend to have large disk storage capacity, while HPC nodes tend to be less storage-centric.

Still, despite the differences, some economies can be achieved by building a cluster, or a collection of clusters, designed to do both HPC and Hadoop. If the two can also be deployed, monitored, and managed in a cohesive way, even more savings can be had. Fortunately, such management solutions are available today. If your organization has the need to manage both types of clusters, take a look at Bright Cluster Manager.

CHAPTER 4

HPC Will Ultimately Transform How Companies Do Business


Many industry sources are talking about how HPC is poised to make major changes in how enterprises conduct their business. Two examples of areas where this is playing out are the financial and life sciences sectors.

For example, banks and investment firms now use HPC to analyze data at the extremely high speed necessary to execute orders in high frequency trading (HFT). Manufacturers of automobiles, aircraft, and other industrial equipment use HPC with powerful computer-aided engineering (CAE) tools that produce complicated design simulations and prototyping evaluations.

Similarly, life science companies are using HPC to meet their specific data management and storage challenges. Many are looking for ways to blend hardware, software, and storage into a system that can handle the massive amounts of data generated by research teams.

CHAPTER 5

HPC to be Delivered as a Service


The growth in demand for analyzing large amounts of data is leading vendors to come up with more flexible solutions that provide additional compute and storage on demand. Typical use cases include quality assurance, support functions, conducting demonstrations, training environments, and research and development, as well as virtualized labs. Another interesting use case is Infrastructure as a Service/Platform as a Service (IaaS/PaaS) providers who would like to deliver clusters in an as-a-service manner.

Bright Cluster on Demand (COD) was developed in response to this growing trend. COD lets end users provision entire clustered environments in either Amazon Web Services (AWS) or Bright OpenStack clouds. It can create any kind of cluster, including HPC, Hadoop, Spark, or even a virtualized OpenStack cloud. Each virtual cluster is fully managed by its own instance of Bright Cluster Manager.

Another option in the offing is hybrid HPC infrastructure that uses software-defined data center assets for controlling on-premises HPC clusters and a software stack in the cloud that is accessed on demand. For example, Bright Cluster on Demand allows users to create a relatively small virtual cluster on premises, and then extend that cluster to a public cloud whenever they experience a peak in their workloads. Tools are embedded into each virtual cluster, enabling it to be dynamically scaled up and down in the public cloud depending on the number of jobs in the job queues. Users can also deploy and manage large-scale container orchestration systems with Kubernetes.
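
To illustrate the queue-driven scaling idea, here is a minimal Python sketch that polls a Slurm job queue and computes a target cloud node count; the scale_to() function is a hypothetical placeholder for whatever cloud-bursting API your cluster manager actually exposes, and the sizing constants are illustrative.

import subprocess
import time

NODES_MIN, NODES_MAX = 2, 32
JOBS_PER_NODE = 4  # illustrative packing assumption

def pending_jobs():
    # Count jobs waiting in the queue (one line per pending job).
    out = subprocess.run(
        ["squeue", "--noheader", "--states=PENDING", "--format=%i"],
        capture_output=True, text=True, check=True,
    ).stdout
    return len(out.splitlines())

def scale_to(n):
    # Hypothetical placeholder: call your cluster manager's
    # cloud-bursting API here to grow or shrink the node pool.
    print(f"target cloud node count: {n}")

while True:
    backlog = pending_jobs()
    # Ceiling-divide the backlog by jobs-per-node, clamped to limits.
    target = max(NODES_MIN, min(NODES_MAX, -(-backlog // JOBS_PER_NODE)))
    scale_to(target)
    time.sleep(60)  # re-evaluate the queue every minute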


CHAPTER 6

Summary

The world of HPC is ever changing. We've only been able to skim the surface in this e-book, but we're constantly on the lookout for trends and technologies that will help you get your job done. Follow the Bright Blog for updates, and if you're interested in seeing how Bright can help you manage your own clusters, we'd love to show you what we can do.

HPC-EB-V03-2017-04

INTERESTED? GET A FREE DEMO OF BRIGHT CLUSTER MANAGER AND BRIGHT OPENSTACK

Bright Computing provides comprehensive software solutions for deploying and managing HPC clusters, big data clusters, and OpenStack in the data center and in the cloud.

Bright Cluster Manager for HPC lets you deploy complete clusters over bare metal and manage them effectively. It provides single-pane-of-glass management for the hardware, the operating system, HPC software, and users.

Bright OpenStack makes it easy to deploy, provision, and manage your OpenStack-based private cloud infrastructure. It provides headache-free deployment on bare metal, advanced monitoring and management tools, and dynamic health-checking, all in one powerful, intuitive package.

We'd love to show you what Bright Cluster Manager and Bright OpenStack can do for you. Please contact us and we will set up a live demo with a Bright technical expert and answer any questions you may have.

FREE DEMO

www.brightcomputing.com