
Fuel for OpenStack v3.1

User Guide

Contents
Introducing Fuel for OpenStack
About Fuel
How Fuel Works
Deployment Configurations Provided By Fuel
Supported Software Components
Download Fuel
Release Notes
New Features in Fuel 3.1
Resolved Issues in Fuel 3.1
Known Issues in Fuel 3.1
Reference Architectures
Overview
Simple (non-HA) Deployment
Multi-node (HA) Deployment (Compact)
Multi-node (HA) Deployment (Full)
Details of HA Compact Deployment
Red Hat OpenStack Architectures
HA Logical Setup
Cluster Sizing
Network Architecture
Technical Considerations
Production Considerations
Sizing Hardware for Production Deployment
Redeploying An Environment
Large Scale Deployments
Create an OpenStack cluster using Fuel UI
Installing Fuel Master Node
Understanding and Configuring the Network
Fuel Deployment Schema
Network Issues
Red Hat OpenStack Deployment Notes

Overview
Post-Deployment Check
Deploy an OpenStack cluster using Fuel CLI
Understanding the CLI Deployment Workflow
Deploying OpenStack Cluster Using CLI
Configuring Nodes for Provisioning
Configuring Nodes for Deployment
Configure Deployment Scenario
Finally Triggering the Deployment
Testing OpenStack Cluster
FAQ (Frequently Asked Questions) and HowTos
Common Technical Issues
How HA with Pacemaker and Corosync Works
HowTo Notes
Other Questions
Fuel License
Index

Introducing Fuel for OpenStack


OpenStack is an extensible, versatile, and flexible cloud management platform. By exposing its portfolio of cloud infrastructure services (compute, storage, networking, and other core resources) through REST APIs, OpenStack enables a wide range of control over these services, both from the perspective of an integrated Infrastructure as a Service (IaaS) controlled by applications and through automated manipulation of the infrastructure itself.

This architectural flexibility doesn't set itself up magically. It asks you, the user and cloud administrator, to organize and manage an extensive array of configuration options. Consequently, getting the most out of your OpenStack cloud over time, in terms of flexibility, scalability, and manageability, requires a thoughtful combination of complex configuration choices. This can be very time consuming and requires that you become familiar with a lot of documentation from a number of different projects.

Mirantis Fuel for OpenStack was created to eliminate exactly these problems. This step-by-step guide takes you through the process of:

Configuring OpenStack and its supporting components into a robust cloud architecture
Deploying that architecture through an effective, well-integrated automation package that sets up and maintains the components and their configurations
Providing access to a well-integrated, up-to-date set of components known to work together

Fuel for OpenStack can be used to create virtually any OpenStack configuration. To make things easier, the installation includes several pre-defined architectures. For the sake of simplicity, this guide emphasizes a single, common reference architecture: the multi-node, high-availability configuration. We begin with an explanation of this architecture, then move on to the details of creating the configuration in a test environment using VirtualBox. Finally, we give you the information you need to create this and other OpenStack architectures in a production environment.

This guide assumes that you are familiar with general Linux commands and administration concepts, as well as general networking concepts. You should have some familiarity with grid or virtualization systems such as Amazon Web Services or VMware, as well as with OpenStack itself, but you don't need to be an expert.

The Fuel User Guide is organized as follows:

About Fuel gives you an overview of Fuel and a general idea of how it works.
Reference Architectures provides a general look at the components that make up OpenStack.
Create an OpenStack cluster using Fuel UI takes you step-by-step through the process of creating a high-availability OpenStack cluster using the Fuel UI.
Deploy an OpenStack cluster using Fuel CLI takes you step-by-step through the more advanced process of creating a high-availability OpenStack cluster using the command line and Puppet manifests.
Production Considerations looks at the real-world questions and problems involved in creating an OpenStack cluster for production use. We discuss issues such as network layout and hardware requirements, and provide tips and tricks for creating a cluster of up to 100 nodes.

With the current (3.1) release, Fuel UI (aka FuelWeb) and Fuel CLI (aka Fuel Library) are integrated. We encourage all users to use the Fuel UI for installation and configuration. However, the standard Fuel CLI installation process is still available for those who prefer a more detailed approach to deployment.
Even with a utility as powerful as Fuel, creating an OpenStack cluster can be complex, and the FAQ (Frequently Asked Questions) and HowTos section covers many of the issues that tend to arise during the process.

Let's start by taking a closer look at Fuel itself. We'll begin by explaining How Fuel Works and then move on to the installation process itself.

About Fuel
How Fuel Works
Deployment Configurations Provided By Fuel
Supported Software Components
Download Fuel

How Fuel Works


Fuel is a ready-to-install collection of the packages and scripts you need to create a robust, configurable, vendor-independent OpenStack cloud in your own environment. As of Fuel 3.1, Fuel Library and Fuel Web have been merged into a single toolbox with options to use the UI or CLI for management. A single OpenStack cloud consists of packages from many different open source projects, each with its own requirements, installation procedures, and configuration management. Fuel brings all of these projects together into a single open source distribution, with components that have been tested and are guaranteed to work together, and all wrapped up using scripts to help you work through a single installation. Simply put, Fuel is a way for you to easily configure and install an OpenStack-based infrastructure in your own environment.

Fuel works on a simple premise. Rather than installing each of the components that make up OpenStack directly, you instead use a configuration management system like Puppet to create scripts that provide a configurable, reproducible, sharable installation process.

In practice, Fuel works as follows:

1. First, set up the Fuel Master Node using the ISO. This process only needs to be completed once per installation.
2. Next, discover your virtual or physical nodes and configure your OpenStack cluster using the Fuel UI.
3. Finally, deploy your OpenStack cluster on the discovered nodes. Fuel will perform all of the deployment magic for you by applying pre-configured and pre-integrated Puppet manifests via the Astute orchestration engine.

Fuel is designed to enable you to maintain your cluster while giving you the flexibility to adapt it to your own configuration.

Fuel comes with several pre-defined deployment configurations, some of which include additional configuration options that allow you to adapt the OpenStack deployment to your environment. The Fuel UI integrates all of the deployment scripts into a unified, web-based Graphical User Interface that walks administrators through the process of installing and configuring a fully functional OpenStack environment.

Deployment Configurations Provided By Fuel


One of the advantages of Fuel is that it comes with a number of pre-built deployment configurations that you can use to quickly build your own OpenStack cloud infrastructure. These are well-specified configurations of OpenStack and its constituent components that are expertly tailored to one or more common cloud use cases. Fuel provides the ability to create the following cluster types without requiring extensive customization:

Simple (non-HA): The Simple (non-HA) installation provides an easy way to install an entire OpenStack cluster without requiring the degree of increased hardware involved in ensuring high availability.

Multi-node (HA): When you're ready to begin your move to production, the Multi-node (HA) configuration is a straightforward way to create an OpenStack cluster that provides high availability. With three controller nodes and the ability to individually specify services such as Cinder, Neutron (formerly Quantum), and Swift, Fuel provides the following variations of the Multi-node (HA) configuration:

  Compact HA: When you choose this option, Swift will be installed on your controllers, reducing your hardware requirements by eliminating the need for additional Swift servers while still addressing high availability requirements.

  Full HA: This option enables you to install independent Swift and Proxy nodes, so that you can separate their operation from your controller nodes.

In addition to these configurations, Fuel is designed to be completely customizable. For deeper customization options based on the included configurations, you can contact Mirantis for further assistance.

Supported Software Components


Fuel has been tested and is guaranteed to work with the following software components:

Operating Systems
  CentOS 6.4 (x86_64 architecture only)
  RHEL 6.4 (x86_64 architecture only)
Puppet (IT automation tool) 2.7.19
MCollective 2.3.1
Cobbler (bare-metal provisioning tool) 2.2.3
OpenStack Grizzly release 2013.1.2
Hypervisor
  KVM
  QEMU
Open vSwitch 1.10.0
HAProxy 1.4.19
Galera 23.2.2
RabbitMQ 2.8.7
Pacemaker 1.1.8
Corosync 1.4.3

Download Fuel
The first step in installing Fuel is to download the version appropriate to your environment. Fuel is available for Essex, Folsom and Grizzly OpenStack installations, and will be available for Havana shortly after Havana's release. The Fuel ISO and IMG, along with other Fuel releases, are available in the Downloads section of the Fuel portal.

Release Notes
New Features in Fuel 3.1
Fuel 3.1 with Integrated Graphical and Command Line controls
Option to deploy Red Hat Enterprise Linux OpenStack Platform
Mirantis OpenStack Health Check
Ability to deploy properly in networks that are not utilizing VLAN tagging
Improved High Availability resiliency
Horizon password entry can be hidden
Full support of Neutron (Quantum) networking engine

Fuel 3.1 with Integrated Graphical and Command Line controls


In earlier releases, Fuel was distributed as two packages: Fuel Web for the graphical workflow, and Fuel Library for command-line based manipulation. Starting with this 3.1 release, we've integrated these two capabilities into a single offering, referred to simply as Fuel. If you used Fuel Web, you'll find that capability, along with its latest improvements, in the Fuel User Interface (UI), which provides a streamlined, graphical console that enables a point-and-click experience for the most commonly deployed configurations. Advanced users with more complex environmental needs can still get command-line access to the underlying deployment engine (aka Fuel Library).

Option to deploy Red Hat Enterprise Linux OpenStack Platform


Mirantis Fuel now includes the ability to deploy the Red Hat Enterprise Linux OpenStack Platform (a solution that includes both Red Hat Enterprise Linux and the Red Hat OpenStack distribution). During the definition of a new environment, the user will be presented with the option of either installing the Mirantis-provided OpenStack distribution onto CentOS-powered nodes or installing the Red Hat-provided OpenStack distribution onto Red Hat Enterprise Linux-powered nodes.

Note
A Red Hat subscription is required to download and deploy Red Hat Enterprise Linux OpenStack Platform.

Mirantis OpenStack Health Check


New in this release is the Mirantis OpenStack Health Check, which can be accessed through a tab in the Fuel UI. The OpenStack Health Check is a battery of tests that can be run against an OpenStack deployment to ensure that it is installed properly and operating correctly. The suite of tests exercises not only the core components within OpenStack, but also the added packages included in the Mirantis OpenStack distribution. Tests can be run individually or in groups. A full list of available tests can be found in the documentation.

Ability to deploy properly in networks that are not utilizing VLAN tagging
In some environments, it may not be possible or desirable to use VLANs to segregate network traffic. In these networks, Fuel can now be configured through the Fuel UI to disable VLAN tagging. This configuration option is available through the Network Settings tab.

Improved High Availability resiliency


To improve the resiliency of the Mirantis OpenStack High Availability reference architecture, Fuel now deploys services, including the Neutron (Quantum) agents, HAProxy, and Galera or MySQL native Master/Slave replication, under Pacemaker from ClusterLabs. Neutron (Quantum) agents now support seamless failover with metadata proxy and agent support, allowing minimal downtime for Neutron (Quantum)-enabled cluster networking. The Galera/MySQL replication engine now supports automatic cluster reassembly after the entire cluster is rebooted.

Horizon password entry can be hidden


In the OpenStack settings tab, the input of the password used for Horizon access can now be hidden by clicking on the eye icon to the left of the field. This icon acts as a toggle between hidden and visible input modes.

Full support of Neutron (Quantum) networking engine


All the features of the Neutron (Quantum) OpenStack virtual networking implementation, including the network namespaces feature that allows virtual networks to overlap, are now supported by Fuel. This improvement also enables Neutron (Quantum) to work properly with Open vSwitch GRE tunnels. This capability is currently supported only with the Mirantis OpenStack distribution and the CentOS kernel version (2.6.32-358) included with Fuel.

Resolved Issues in Fuel 3.1


Disk Configuration now displays proper size and validates input
Improved behavior for allocating space
Eliminated the need to specify a bootable disk
Floating IP allocation speed increased
Ability to map logical networks to physical interfaces
Separate Logical Volume Manager (LVM) now used for Glance storage
Memory leaks in nailgun service
Network Verification failures
Installing Fuel Master node onto a system with em# network interfaces
Provisioning failure on large hard drives
Access to OpenStack API or VNC console in Horizon when running in VirtualBox
Other resolved issues

Disk Configuration now displays proper size and validates input

Previously, the Total Space displayed in the Disk Configuration screen was slightly larger than what was actually available. This has now been corrected to be accurate. In addition, user input validation has been improved when making changes to ensure that space is not incorrectly allocated. And finally, the unit of measure has been changed to MB from GB in the Disk Configuration screen.

Improved behavior for allocating space


In Fuel 3.0.x, users were forced to manually zero out fields in the Disk Configuration screen if the total allocated space exceeded the total disk size before the "USE ALL UNALLOCATED SPACE" option could be utilized. Now you can enter a value above the maximum available for a volume group (as long as it does not exceed total disk size), select "USE ALL UNALLOCATED SPACE" for a second volume group, and that group will be assigned the available space up to the maximum disk size. In addition, the current allocated sizes are reflected graphically in the bars above the volume group.

Eliminated the need to specify a bootable disk


Previously, the Disk Configuration screen had a special Make bootable option. This release of Fuel makes that option unnecessary because Fuel now installs a Master Boot Record (MBR) and boot partition on all hard drives. The BIOS can be configured to load from any disk and the node will boot the operating system. Because of this, the Make bootable option has been removed.

Floating IP allocation speed increased


In Fuel 3.0.x, the floating IP allocation step took significant time. During cluster provisioning, creating a pool of 250 floating IP addresses could take up to 8 minutes. This has now been reduced to seconds.

Ability to map logical networks to physical interfaces


With the introduction of the ability to deploy properly in networks that are not utilizing VLAN tagging, it is now possible to map logical OpenStack networks to physical interfaces without using VLANs.

Separate Logical Volume Manager (LVM) now used for Glance storage
Glance storage was previously configured to use a root partition on a controller node. Because of this, in HA mode, Swift was configured to use only 5 GB of storage. A user was unable to load large images into Glance in HA mode and could receive an out-of-space error message if a small root partition was used. This situation has been corrected by creating a separate LVM volume for Glance storage. You can modify the size of this partition in the Disk Configuration screen.

Memory leaks in nailgun service


Nailgun is the RESTful API backend service that is used in Fuel. In 3.0.1 an increase in memory consumption could occur over time. This has now been fixed.

Network Verification failures


In some cases, the "Verify Networks" option in the Network configuration tab reported a connectivity problem even though manual checks confirmed that the connection was fine. The problem was identified as a loss of packets when a particular Python library was used. That library has been replaced and verification now functions properly.

Installing Fuel Master node onto a system with em# network interfaces
In Fuel 3.0.1, a fix was included to recognize network interfaces that start with em (meaning "embedded") instead of eth. However, the fix only applied to the Slave nodes used to deploy OpenStack components; the Fuel Master node was still affected. This has now been corrected, and Fuel can be deployed on machines where the operating system uses the prefix em instead of eth.

Provisioning failure on large hard drives


In previous releases, when ext4 was used as a file system for a partition, provisioning would fail for large volumes (larger than 16 TB) in some cases. Ext4 has been replaced by the XFS file system, which works well on large volumes.

Access to OpenStack API or VNC console in Horizon when running in VirtualBox


Previously, it was impossible to access the OpenStack API or VNC console in Horizon when running an OpenStack environment created in VirtualBox by the Mirantis VirtualBox demo scripts. This was caused by an inability to create a route to the OpenStack public network from the host system due to a lack of VLAN tags. With the introduction of the ability to deploy properly in networks that are not utilizing VLAN tagging, it is now possible to create the route. Information on how to create this route is documented in the user guide.

Other resolved issues


If CPU speed could not be determined through an operating-system-level query on a slave node, that node would not register properly with the Fuel Master node. This has been corrected so that the node registers even if some information about it is unavailable.

Known Issues in Fuel 3.1


Limited Support for OpenStack Grizzly
Nagios deployment is disabled
Ability to deploy Swift and Neutron (Quantum) is limited to Fuel CLI
Ability to add new nodes without redeployment
Ability to deploy properly in networks that are not utilizing VLAN tagging
Time synchronization failures in a VirtualBox environment
If a controller's root partition runs out of space, the controller fails to operate
The "Create instance volume" test in the Mirantis OpenStack Healthcheck tab has a wrong result for attachment volumes
Other Limitations

Limited Support for OpenStack Grizzly


The following improvements in Grizzly are not currently supported directly by Fuel:

Nova
  Compute Cells
  Availability zones
  Host aggregates
Neutron (formerly Quantum)
  LBaaS (Load Balancer as a Service)
  Multiple L3 and DHCP agents per cloud
Keystone
  Multi-factor authentication
  PKI authentication
Swift
  Regions
  Adjustable replica count
  Cross-project ACLs
Cinder
  Support for FCoE
  Support for LIO as an iSCSI backend
  Support for multiple backends on the same manager
Ceilometer
Heat

It is expected that these capabilities will be supported in future releases of Fuel.

In addition, support for High Availability of Neutron (Quantum) on Red Hat Enterprise Linux (RHEL) is not available due to a limitation within the Red Hat kernel. It is expected that this issue will be addressed by a patch to RHEL in late 2013.

Nagios deployment is disabled


Due to instability of the PuppetDB and Nagios manifests, we decided to temporarily disable the Nagios deployment feature. It is planned to re-enable this feature in the next release with improved and much more stable manifests.

Ability to deploy Swift and Neutron (Quantum) is limited to Fuel CLI


At this time, customers wishing to deploy Swift or Neutron (Quantum) will need to do so through the Fuel CLI. An option to deploy these components as standalone nodes is not currently present in the Fuel UI. It is expected that a near-future release will enable this capability.

Ability to add new nodes without redeployment

It's possible to add new compute and Cinder nodes to an existing OpenStack environment. However, this capability cannot yet be used to deploy additional controller nodes in HA mode.

Ability to deploy properly in networks that are not utilizing VLAN tagging
While this feature is included in Fuel and fully supported, network environments can be complex and Mirantis has not exhaustively identified all of the configurations in which it works properly. Fuel does not prevent the user from creating an environment that may not work properly, although the Verify Networks function will confirm necessary connectivity. As Mirantis discovers environments where a lack of VLAN tagging causes issues, they will be further documented. Currently, a known limitation is that untagged networks should not be mapped to the physical network interface that is used for PXE provisioning. Another known issue occurs when the user separates the public and floating networks onto different physical interfaces without VLAN tagging, which will cause deployment to fail.

Time synchronization failures in a VirtualBox environment


If the ntpd service fails on the Fuel master node, desynchronization of nodes in the environment will occur. OpenStack identifies services as broken if the time synchronization is broken, which will cause the "Services list availability" test in the Mirantis OpenStack HealthCheck to fail. In addition, instances may fail to boot. This issue appears to be limited to VirtualBox environments as it could not be replicated on KVM and physical hardware deployments.

If a controller's root partition runs out of space, the controller fails to operate
Logging is configured to send most messages over rsyslog, and disk-space-consuming services (such as Cinder and Compute) use their own logical volumes. However, if processes write to the root partition and the root partition runs out of disk space, the controller will fail.

The "Create instance volume" test in the Mirantis OpenStack Healthcheck tab has a wrong result for attachment volumes
The "Create instance volume" test is designed to confirm that a volume can be created. However, even if OpenStack fails to attach the volume to the VM, the test still passes.

Other Limitations:
When using the Fuel UI, IP addresses for Slave nodes (but not the Master node) are assigned via DHCP during PXE booting from the Master node. Because of this, even after installation, the Fuel Master node must remain available and continue to act as a DHCP server.

When using the Fuel UI, the floating VLAN and public networks must use the same L2 network. In the UI, these two networks are locked together and can only run via the same physical interface on the server.

Deployments done through the Fuel UI create all networks on all servers, even if they are not required by a specific role (e.g., a Cinder node will have VLANs created and addresses obtained from the public network).

Some OpenStack services listen on all interfaces, which may be detected and reported by security audits or scans. Please discuss this issue with your security administrator if it is of concern in your organization.

The provided scripts that enable Fuel to be automatically installed on VirtualBox will create separate host interfaces. If a user associates logical networks with different physical interfaces on different nodes, it will lead to network connectivity issues between OpenStack components. Please check whether this has happened prior to deployment by clicking the Verify Networks button on the networking tab.

The Networks tab was redesigned to allow the user to provide IP ranges instead of CIDRs; however, not all user input is properly verified. Entering a wrong value may cause failures in deployment.

The Fuel UI may not reflect changes in NICs or disks after initial discovery, which can lead to failures in deployment. In other words, if a user powers on a node, it gets discovered, and then some disks are replaced or network cards are added or removed, the changed hardware may not be rediscovered correctly. For example, the Total Space displayed in the Disk Configuration screen may differ from the actual size of the disk.

Neutron (Quantum) Metadata API agents in High Availability mode are only supported for Compact and Full scenarios if network namespaces (netns) are not used. The Neutron (Quantum) namespace metadata proxy is not supported unless netns is used.

Neutron (Quantum) multi-node balancing conflicts with Pacemaker, so the two should not be used together in the same environment.

When deploying Neutron (Quantum) with the Fuel CLI, and when virtual machines need access to the Internet and/or external networks, you need to set the floating network prefix and public_address so that they do not intersect with the network external interface to which it belongs. This is due to specifics of how Neutron (Quantum) sets Network Address Translation (NAT) rules and a lack of namespaces support in CentOS 6.4.

In environments with a large number of tenant networks (e.g., over 300), network verification may stop responding. In these cases, the networks themselves are unaffected and it is only the test that ceases to function correctly.

Reference Architectures
Overview
Simple (non-HA) Deployment
Multi-node (HA) Deployment (Compact)
Multi-node (HA) Deployment (Full)
Details of HA Compact Deployment
Red Hat OpenStack Architectures
  Simple (non-HA) Red Hat OpenStack Deployment
  Multi-node (HA) Red Hat OpenStack Deployment (Compact)
HA Logical Setup
  Controller Nodes
  Compute Nodes
  Storage Nodes
Cluster Sizing
Network Architecture
  Public Network
  Internal (Management) Network
  Private Network
Technical Considerations
  Neutron vs. nova-network
  Cinder vs. nova-volume
  Object Storage Deployment

Overview
Before you install any hardware or software, you must know what you're trying to achieve. This section looks at the basic components of an OpenStack infrastructure and organizes them into one of the more common reference architectures. You'll then use that architecture as a basis for installing OpenStack in the next section.

As you know, OpenStack provides the following basic services:

Compute: Compute servers are the workhorses of your installation; they're the servers on which your users' virtual machines are created. nova-scheduler controls the life-cycle of these VMs.

Networking: Because an OpenStack cluster (virtually) always includes multiple servers, the ability for them to communicate with each other and with the outside world is crucial. Networking was originally handled by the nova-network service, but it has given way to the newer Neutron (formerly Quantum) networking service. Authentication and authorization for these transactions are handled by keystone.

Storage: OpenStack provides for two different types of storage: block storage and object storage. Block storage is traditional data storage, with small, fixed-size blocks that are mapped to locations on storage media. At its simplest level, OpenStack provides block storage using nova-volume, but it is common to use cinder. Object storage, on the other hand, consists of single variable-size objects that are described by system-level metadata, and you can access this capability using swift. OpenStack storage is used for your users' objects, but it is also used for storing the images used to create new VMs. This capability is handled by glance.

These services can be combined in many different ways. Out of the box, Fuel supports the following deployment configurations:

Non-HA Simple
HA Compact
HA Full
RHOS Non-HA Simple
RHOS HA Compact

Simple (non-HA) Deployment


In a production environment, you will never have a Simple non-HA deployment of OpenStack, partly because it forces you to make a number of compromises as to the number and types of services that you can deploy. It is, however, extremely useful if you just want to see how OpenStack works from a user's point of view.

More commonly, your OpenStack installation will consist of multiple servers. Exactly how many is up to you, of course, but the main idea is that your controller(s) are separate from your compute servers, on which your users' VMs will actually run. One arrangement that will enable you to achieve this separation while still keeping your hardware investment relatively modest is to house your storage on your controller nodes.

Multi-node (HA) Deployment (Compact)


Production environments typically require high availability, which involves several architectural requirements. Specifically, you will need at least three controllers, and certain components will be deployed in multiple locations to prevent single points of failure. That's not to say, however, that you can't reduce hardware requirements by combining your storage, network, and controller nodes:

We'll take a closer look at the details of this deployment configuration in the Details of HA Compact Deployment section.

Multi-node (HA) Deployment (Full)


For large production deployments, it's more common to provide dedicated hardware for storage. This architecture gives you the advantages of high availability, and the clean separation of storage and controller functionality makes your cluster more maintainable:

Where Fuel really shines is in the creation of more complex architectures, so in this document you'll learn how to use Fuel to easily create a multi-node HA OpenStack cluster. To reduce the amount of hardware you'll need to follow the installation, however, the guide focuses on the Multi-node HA Compact architecture.

Details of HA Compact Deployment


In this section, you'll learn more about the Multi-node (HA) Compact deployment configuration and how it achieves high availability. As you may recall, this configuration looks something like this:

OpenStack services are interconnected by RESTful HTTP-based APIs and AMQP-based RPC messages. So redundancy for stateless OpenStack API services is implemented through the combination of Virtual IP (VIP) management using Pacemaker and load balancing using HAProxy. Stateful OpenStack components, such as the state database and messaging server, rely on their respective active/active and active/passive modes for high availability. For example, RabbitMQ uses built-in clustering capabilities, while the database uses MySQL/Galera replication.

Let's take a closer look at what an OpenStack deployment looks like, and what it takes to achieve high availability for an OpenStack deployment.

Red Hat OpenStack Architectures


Red Hat has partnered with Mirantis to offer an end-to-end supported distribution of OpenStack powered by Fuel. Because Red Hat offers support for a subset of all available open source packages, the reference architecture has been slightly modified to meet Red Hat's support requirements and still provide a highly available OpenStack cluster. Below is the list of modifications:

Database backend: MySQL with Galera has been replaced with native replication in a Master/Slave configuration. The MySQL master is elected via Corosync, and master and slave status is managed via Pacemaker.

Messaging backend: RabbitMQ has been replaced with QPID. QPID is an AMQP provider that Red Hat offers, but it cannot be clustered in Red Hat's offering. As a result, Fuel configures three non-clustered, independent QPID brokers. Fuel still offers HA for the messaging backend via virtual IP management provided by Corosync.

Nova networking: Quantum is not available for Red Hat OpenStack because the Red Hat kernel lacks GRE tunneling support for Open vSwitch. This issue should be fixed in a future release. As a result, Fuel for the Red Hat OpenStack Platform will only support Nova networking.

Simple (non-HA) Red Hat OpenStack Deployment


In a production environment, you will never have a Simple non-HA deployment of OpenStack, partly because it forces you to make a number of compromises as to the number and types of services that you can deploy. It is, however, extremely useful if you just want to see how OpenStack works from a user's point of view.

More commonly, your OpenStack installation will consist of multiple servers. Exactly how many is up to you, of course, but the main idea is that your controller(s) are separate from your compute servers, on which your users' VMs will actually run. One arrangement that will enable you to achieve this separation while still keeping your hardware investment relatively modest is to house your storage on your controller nodes.

Multi-node (HA) Red Hat OpenStack Deployment (Compact)


Production environments typically require high availability, which involves several architectural requirements. Specifically, you will need at least three controllers, and certain components will be deployed in multiple locations to prevent single points of failure. That's not to say, however, that you can't reduce hardware requirements by combining your storage, network, and controller nodes:

OpenStack services are interconnected by RESTful HTTP-based APIs and AMQP-based RPC messages. So redundancy for stateless OpenStack API services is implemented through the combination of Virtual IP (VIP) management using Corosync and load balancing using HAProxy. Stateful OpenStack components, such as the state database and messaging server, rely on their respective active/passive modes for high availability. For example, MySQL uses built-in replication capabilities (plus the help of Pacemaker), while QPID is offered in three independent brokers with virtual IP management to provide high availability.

HA Logical Setup
An OpenStack HA cluster involves, at a minimum, three types of nodes: controller nodes, compute nodes, and storage nodes.

Controller Nodes
The first order of business in achieving high availability (HA) is redundancy, so the first step is to provide multiple controller nodes. You must keep in mind, however, that the database uses Galera to achieve HA, and Galera is a quorum-based system. That means that you must provide at least 3 controller nodes.

Every OpenStack controller runs HAProxy, which manages a single External Virtual IP (VIP) for all controller nodes and provides HTTP and TCP load balancing of requests going to OpenStack API services, RabbitMQ, and MySQL.

When an end user accesses the OpenStack cloud using Horizon or makes a request to the REST API for services such as nova-api, glance-api, keystone-api, quantum-api, nova-scheduler, MySQL or RabbitMQ, the request goes to the live controller node currently holding the External VIP, and the connection gets terminated by HAProxy. When the next request comes in, HAProxy handles it, and may send it to the original controller or another in the cluster, depending on load conditions.

Each of the services housed on the controller nodes has its own mechanism for achieving HA:

nova-api, glance-api, keystone-api, quantum-api and nova-scheduler are stateless services that do not require any special attention besides load balancing.
Horizon, as a typical web application, requires sticky sessions to be enabled at the load balancer.
RabbitMQ provides active/active high availability using mirrored queues.
MySQL high availability is achieved through Galera active/active multi-master deployment and Pacemaker.
Quantum agents are managed by Pacemaker.

Compute Nodes
OpenStack compute nodes are, in many ways, the foundation of your cluster; they are the servers on which your users will create their Virtual Machines (VMs) and host their applications. Compute nodes need to talk to controller nodes and reach out to essential services such as RabbitMQ and MySQL. They use the same approach that provides redundancy to the end-users of Horizon and REST APIs, reaching out to controller nodes using the VIP and going through HAProxy.

Storage Nodes
In this OpenStack cluster reference architecture, shared storage acts as a backend for Glance, so that multiple Glance instances running on controller nodes can store images and retrieve images from it. To achieve this, you are going to deploy Swift. This enables you to use it not only for storing VM images, but also for any other objects such as user files.

Cluster Sizing
This reference architecture is well suited for production-grade OpenStack deployments on a medium and large scale, when you can afford to allocate several servers for your OpenStack controller nodes in order to build a fully redundant and highly available environment. The absolute minimum requirement for a highly-available OpenStack deployment is to allocate 4 nodes:

3 controller nodes, combined with storage
1 compute node

If you want to run storage separately from the controllers, you can do that as well by raising the bar to 9 nodes:

3 Controller nodes
3 Storage nodes
2 Swift Proxy nodes
1 Compute node

Of course, you are free to choose how to deploy OpenStack based on the amount of available hardware and on your goals (such as whether you want a compute-oriented or storage-oriented cluster). For a typical OpenStack compute deployment, you can use this table as high-level guidance to determine the number of controllers, compute, and storage nodes you should have:

# of Nodes   Controllers   Computes   Storages
4-10         3             1-7        3 (on controllers)
11-40        3             3-32       3+ (Swift) + 2 (proxy)
41-100       4             29-88      6+ (Swift) + 2 (proxy)
>100         5             >84        9+ (Swift) + 2 (proxy)
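For readers who prefer the guidance spelled out programmatically, here is a minimal sketch that encodes the same table as a lookup; the node-count bands and the recommended counts come straight from the table above and remain high-level guidance rather than hard limits.

    # A small lookup mirroring the sizing table above; the bands and counts come
    # directly from the table and are only rough guidance, not hard limits.
    def sizing_guidance(total_nodes):
        if total_nodes <= 10:
            return {"controllers": 3, "computes": "1-7", "storage": "3 (on controllers)"}
        if total_nodes <= 40:
            return {"controllers": 3, "computes": "3-32", "storage": "3+ (Swift) + 2 (proxy)"}
        if total_nodes <= 100:
            return {"controllers": 4, "computes": "29-88", "storage": "6+ (Swift) + 2 (proxy)"}
        return {"controllers": 5, "computes": ">84", "storage": "9+ (Swift) + 2 (proxy)"}

    print(sizing_guidance(25))   # -> 3 controllers, '3-32' computes, '3+ (Swift) + 2 (proxy)'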

Network Architecture
The current architecture assumes the presence of 3 NICs, but it can be customized for two or 4+ network interfaces. Most servers are built with at least two network interfaces. In this case, let's consider a typical example of three NIC cards. They're utilized as follows:

eth0: The internal management network, used for communication with Puppet & Cobbler
eth1: The public network, and floating IPs assigned to VMs
eth2: The private network, for communication between OpenStack VMs, and the bridge interface (VLANs)

In the multi-host networking mode, you can choose between the FlatDHCPManager and VlanManager network managers in OpenStack. The figure below illustrates the relevant nodes and networks.

Let's take a closer look at each network and how it's used within the cluster.


Public Network
This network allows inbound connections to VMs from the outside world (allowing users to connect to VMs from the Internet). It also allows outbound connections from VMs to the outside world. For security reasons, the public network is usually isolated from the private network and internal (management) network. Typically, it's a single C class network from your globally routed or private network range. To enable Internet access to VMs, the public network provides the address space for the floating IPs assigned to individual VM instances by the project administrator. Nova-network or Neutron (formerly Quantum) services can then configure this address on the public network interface of the Network controller node. Clusters based on nova-network use iptables to create a Destination NAT from this address to the fixed IP of the corresponding VM instance through the appropriate virtual bridge interface on the Network controller node. In the other direction, the public network provides connectivity to the globally routed address space for VMs. The IP address from the public network that has been assigned to a compute node is used as the source for the Source NAT performed for traffic going from VM instances on the compute node to Internet. The public network also provides VIPs for Endpoint nodes, which are used to connect to OpenStack services APIs.

Internal (Management) Network


The internal network connects all OpenStack nodes in the cluster. All components of an OpenStack cluster communicate with each other using this network. This network must be isolated from both the private and public networks for security reasons. The internal network can also be used for serving iSCSI protocol exchanges between Compute and Storage nodes. This network usually is a single C class network from your private, non-globally routed IP address range.

Private Network
The private network facilitates communication between each tenant's VMs. Private network address spaces are part of the enterprise network address space. Fixed IPs of virtual instances are directly accessible from the rest of Enterprise network. The private network can be segmented into separate isolated VLANs, which are managed by nova-network or Neutron (formerly Quantum) services.

Technical Considerations
Before performing any installations, you'll need to make a number of decisions about which services to deploy, but from a general architectural perspective, it's important to think about how you want to handle both networking and block storage.

Neutron vs. nova-network


Neutron (formerly Quantum) is a service which provides Networking-as-a-Service functionality in OpenStack. It has a rich tenant-facing API for defining network connectivity and addressing in the cloud, and gives operators the ability to leverage different networking technologies to power their cloud networking. There are various deployment use cases for Neutron. Fuel supports the most common of them, called Provider Router with Private Networks. It provides each tenant with one or more private networks, which can communicate with the outside world via a Neutron router. Neutron is not, however, required in order to run an OpenStack cluster. If you don't need (or want) this added functionality, it's perfectly acceptable to continue using nova-network. In order to deploy Neutron, you need to enable it in the Fuel configuration. Fuel will then set up an additional node in the OpenStack installation to act as an L3 router, or, depending on the configuration options you've chosen, install Neutron on the controllers.

Cinder vs. nova-volume


Cinder is a persistent storage management service, also known as block-storage-as-a-service. It was created to replace nova-volume, and provides persistent storage for VMs. If you want to use Cinder for persistent storage, you will need to both enable Cinder and create the block devices on which it will store data. You will then provide information about those block devices during the Fuel install. Cinder block devices can be:

created by Cobbler during the initial node installation, or
attached manually (e.g., as additional virtual disks if you are using VirtualBox, or as additional physical RAID or SAN volumes)

Object Storage Deployment


Fuel currently supports several scenarios for deploying object storage:

Glance + filesystem: By default, Glance uses the file system backend to store virtual machine images. In this case, you can use any of the shared file systems supported by Glance.

Swift on controllers: In this mode the roles of swift-storage and swift-proxy are combined with a nova-controller. Use it only for testing in order to save nodes; it is not suitable for production environments.

Swift on dedicated nodes: In this case the Proxy service and the Storage (account/container/object) services reside on separate nodes, with two proxy nodes and a minimum of three storage nodes.

Production Considerations
Fuel simplifies the setup of an OpenStack cluster, affording you the ability to dig in and fully understand how OpenStack works. You can deploy on test hardware or in a virtualized environment and root around all you like, but when it comes time to deploy to production there are a few things to take into consideration. In this section we discuss how to size your hardware and how to handle large-scale deployments.

Sizing Hardware for Production Deployment
  Processing
  Memory
  Storage Space
  Networking
  Summary
Redeploying An Environment
  Environments
  Deployment pipeline
Large Scale Deployments
  Certificate signing requests and Puppet Master/Cobbler capacity
  Downloading of operating systems and other software

Sizing Hardware for Production Deployment


One of the first questions people ask when planning an OpenStack deployment is "what kind of hardware do I need?" There is no such thing as a one-size-fits-all answer, but there are straightforward rules to selecting appropriate hardware that will suit your needs. The Golden Rule, however, is to always accommodate for growth. With the potential for growth accounted for, you can move on to the actual hardware needs.

Many factors contribute to selecting hardware for an OpenStack cluster -- contact Mirantis for information on your specific requirements -- but in general, you will want to consider the following factors:

Processing
Memory
Storage
Networking

Your needs in each of these areas are going to determine your overall hardware requirements.

Processing
In order to calculate how much processing power you need to acquire, you will need to determine the number of VMs your cloud will support. You must also consider the average and maximum processor resources you will allocate to each VM. In the vast majority of deployments, the allocated resources will be the same for all of your VMs. However, if you are planning to create groups of VMs that have different requirements, you will need to calculate for all of them in aggregate.

Consider this example:

100 VMs
2 EC2 compute units (2 GHz) average
16 EC2 compute units (16 GHz) max

To make it possible to provide the maximum CPU in this example, you will need at least 5 CPU cores (16 GHz / (2.4 GHz per core * 1.3 to adjust for hyper-threading)) per machine, and at least 84 CPU cores ((100 VMs * 2 GHz per VM) / 2.4 GHz per core) in total. If you were to select the Intel E5 2650-70 8 core CPU, that means you need 11 sockets (84 cores / 8 cores per socket). This breaks down to six dual-socket servers (12 sockets / 2 sockets per server), for a "packing density" of 17 VMs per server (102 VMs / 6 servers). This process also accommodates growth, since you now know what a single server using this CPU configuration can support. You can add new servers accounting for 17 VMs each as needed without having to re-calculate; the sketch at the end of this subsection walks through the same arithmetic.

You will also need to take into account the following:

This model assumes you are not oversubscribing your CPU.
If you are considering Hyper-threading, count each core as 1.3, not 2.
Choose a good value CPU that supports the technologies you require.
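The sketch below, in Python, reproduces the arithmetic of this example; the VM count, clock speeds, hyper-threading factor, and the 8-core, 2-socket server are simply the example's assumptions and should be replaced with your own numbers.

    import math

    # Sizing arithmetic from the example above; all inputs are assumptions.
    vms = 100
    avg_ghz_per_vm = 2.0
    peak_ghz_per_vm = 16.0
    core_ghz = 2.4
    ht_factor = 1.3               # count each hyper-threaded core as 1.3, not 2
    cores_per_socket = 8
    sockets_per_server = 2

    # Cores a single host needs to satisfy one VM's 16 GHz burst
    cores_for_peak = peak_ghz_per_vm / (core_ghz * ht_factor)         # ~5 cores

    # Total cores to cover the average load of all VMs
    total_cores = math.ceil(vms * avg_ghz_per_vm / core_ghz)          # 84 cores

    # Sockets and dual-socket servers required
    sockets = math.ceil(total_cores / cores_per_socket)               # 11 sockets
    servers = math.ceil(sockets / sockets_per_server)                 # 6 servers

    # Packing density: VMs per server once the cluster is sized
    density = math.ceil(vms / servers)                                # 17 VMs per server

    print(cores_for_peak, total_cores, sockets, servers, density)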

Memory
Continuing to use the example from the previous section, we need to determine how much RAM will be required to support 17 VMs per server. Let's assume that you need an average of 4 GB of RAM per VM, with dynamic allocation of up to 12 GB for each VM. Calculating that all VMs will be using 12 GB of RAM requires that each server have 204 GB of available RAM.

You must also consider that the node itself needs sufficient RAM to accommodate core OS operations as well as RAM for each VM container (not the RAM allocated to each VM, but the memory the core OS uses to run the VM). The node's OS must run its own operations, schedule processes, allocate dynamic resources, and handle network operations, so giving the node itself at least 16 GB or more of RAM is not unreasonable. Considering that the RAM we would consider for servers comes in 4 GB, 8 GB, 16 GB and 32 GB sticks, we would need a total of 256 GB of RAM installed per server. For an average 2-CPU-socket server board you get 16-24 RAM slots. To have 256 GB installed you would need sixteen 16 GB sticks of RAM to satisfy your RAM needs for up to 17 VMs requiring dynamic allocation up to 12 GB and to support all core OS requirements. You can adjust this calculation based on your needs.
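Written out as arithmetic, the memory estimate looks like this; the per-VM ceiling, host OS headroom, and stick size are the example's assumptions.

    import math

    # Memory estimate from the example above; all inputs are assumptions.
    vms_per_server = 17
    max_gb_per_vm = 12            # dynamic allocation ceiling per VM
    host_os_gb = 16               # headroom for the node's own OS and VM overhead
    stick_gb = 16                 # 16 GB sticks on a 16-24 slot, 2-socket board

    needed_gb = vms_per_server * max_gb_per_vm + host_os_gb           # 204 + 16 = 220 GB
    sticks = math.ceil(needed_gb / stick_gb)                          # 14 sticks = 224 GB

    # The text rounds this up further to sixteen 16 GB sticks (256 GB) for
    # extra headroom; adjust the stick size and count to your own hardware.
    print(needed_gb, sticks)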

Storage Space
When it comes to disk space there are several types that you need to consider:

Ephemeral (the local drive space for a VM)
Persistent (the remote volumes that can be attached to a VM)
Object Storage (such as images or other objects)

As far as local drive space that must reside on the compute nodes, in our example of 100 VMs we make the following assumptions:

150 GB local storage per VM
15 TB total of local storage (100 VMs * 150 GB per VM)
500 GB of persistent volume storage per VM
50 TB total persistent storage

Returning to our already established example, we need to figure out how much storage to install per server. This storage will service the 17 VMs per server. If we are assuming 150 GB of storage for each VM's drive container, then we would need to install 2.5 TB of storage on the server. Since most servers have anywhere from 4 to 32 2.5" drive slots or 2 to 12 3.5" drive slots, depending on server form factor (i.e., 2U vs. 4U), you will need to consider how the storage will be impacted by the intended use.

If storage impact is not expected to be significant, then you may consider using unified storage. For this example a single 3 TB drive would provide more than enough storage for seventeen 150 GB VMs. If speed is really not an issue, you might even consider installing two or three 3 TB drives and configuring a RAID-1 or RAID-5 for redundancy. If speed is critical, however, you will likely want to have a single hardware drive for each VM. In this case you would likely look at a 3U form factor with 24 slots.

Don't forget that you will also need drive space for the node itself, and don't forget to order the correct backplane that supports the drive configuration that meets your needs. Using our example specifications and assuming that speed is critical, a single server would need 18 drives, most likely 2.5" 15,000 RPM 146 GB SAS drives.

Throughput

As far as throughput, that's going to depend on what kind of storage you choose. In general, you calculate IOPS based on the packing density (drive IOPS * drives in the server / VMs per server), but the actual drive IOPS will depend on the drive technology you choose. For example:

3.5" slow and cheap (100 IOPS per drive, with 2 mirrored drives):
  100 IOPS * 2 drives / 17 VMs per server = 12 Read IOPS, 6 Write IOPS
2.5" 15K (200 IOPS, four 600 GB drives, RAID-10):
  200 IOPS * 4 drives / 17 VMs per server = 48 Read IOPS, 24 Write IOPS
SSD (40K IOPS, eight 300 GB drives, RAID-10):
  40K IOPS * 8 drives / 17 VMs per server = 19K Read IOPS, 9.5K Write IOPS

Clearly, SSD gives you the best performance, but the difference in cost between SSDs and the less costly platter-based solutions is going to be significant, to say the least. The acceptable cost burden is determined by the balance between your budget and your performance and redundancy needs. It is also important to note that the rules for redundancy in a cloud environment are different than in a traditional server installation, in that entire servers provide redundancy as opposed to making a single server instance redundant. In other words, the weight for redundant components shifts from the individual OS installation to server redundancy. It is far more critical to have redundant power supplies and hot-swappable CPUs and RAM than to have redundant compute node storage. If, for example, you have 18 drives installed on a server, with 17 drives directly allocated to the VMs, and one fails, you simply replace the drive and push a new node copy. The remaining VMs carry whatever additional load is present due to the temporary loss of one node.

Remote storage

Remote storage IOPS will also be a factor in determining how you plan to handle persistent storage. For example, consider these options for laying out your 50 TB of remote volume space:

12-drive storage frame using 3 TB 3.5" drives, mirrored:
  36 TB raw, or 18 TB usable space per 2U frame
  3 frames (50 TB / 18 TB per frame)
  12 slots x 100 IOPS per drive = 1200 Read IOPS, 600 Write IOPS per frame
  3 frames x 1200 IOPS per frame / 100 VMs = 36 Read IOPS, 18 Write IOPS per VM
24-drive storage frame using 1 TB 7200 RPM 2.5" drives:
  24 TB raw, or 12 TB usable space per 2U frame
  5 frames (50 TB / 12 TB per frame)
  24 slots x 100 IOPS per drive = 2400 Read IOPS, 1200 Write IOPS per frame
  5 frames x 2400 IOPS per frame / 100 VMs = 120 Read IOPS, 60 Write IOPS per VM

You can accomplish the same thing with a single 36-drive frame using 3 TB drives, but this becomes a single point of failure in your cluster.

Object storage

When it comes to object storage, you will find that you need more space than you think. For example, this example specifies 50 TB of object storage. Easy, right? Not really.


Object storage uses a default of 3 times the required space for replication, which means you will need 150 TB. However, to accommodate two hand-off zones, you will need 5 times the required space, which actually means 250 TB. The calculations don't end there. You don't ever want to run out of space, so "full" should really be more like 75% of capacity, which means you will need a total of 333 TB, or a multiplication factor of 6.66. Of course, that might be a bit much to start with; you might want to start with a happy medium of a multiplier of 4, then acquire more hardware as your drives begin to fill up. That calculates to 200 TB in our example. So how do you put that together? If you were to use 3 TB 3.5" drives, you could use a 12 drive storage frame, with 6 servers hosting 36 TB each (for a total of 216 TB). You could also use a 36 drive storage frame, with just 2 servers hosting 108 TB each, but this is not recommended: the failure of a single server would have a severe impact on replication and available capacity.
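The multipliers above reduce to a simple formula: raw capacity = usable capacity * replica count / target fill ratio. Below is a minimal shell sketch of that calculation using the example's numbers; the variable names are purely illustrative and not part of any Fuel tooling.

#!/bin/bash
# Object storage sizing sketch: raw = usable * replicas / fill ratio.
# Assumed inputs taken from the example above: 50 TB usable, 3 replicas, 75% fill.
USABLE_TB=50
REPLICAS=3
FILL_PCT=75
RAW_TB=$(( USABLE_TB * REPLICAS * 100 / FILL_PCT ))
echo "Provision roughly ${RAW_TB} TB of raw object storage"   # prints 200 with these inputs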

Networking
Perhaps the most complex part of designing an OpenStack cluster is the networking. An OpenStack cluster can involve multiple networks even beyond the Public, Private, and Internal networks. Your cluster may involve tenant networks, storage networks, multiple tenant private networks, and so on. Many of these will be VLANs, and all of them will need to be planned out in advance to avoid configuration issues.

In terms of the example network, consider these assumptions:

100 Mbits/second per VM
HA architecture
Network Storage is not latency sensitive

In order to achieve this, you can use two 1 Gb links per server (2 x 1000 Mbits/second / 17 VMs = 118 Mbits/second). Using two links also helps with HA. You can also increase throughput and decrease latency by using two 10 Gb links, bringing the bandwidth per VM to 1 Gb/second, but if you're going to do that, you've got one more factor to consider.

Scalability and oversubscription

It is one of the ironies of networking that 1 Gb Ethernet generally scales better than 10 Gb Ethernet -- at least until 100 Gb switches are more commonly available. It's possible to aggregate the 1 Gb links in a 48 port switch, so that you have 48 x 1 Gb links down, but 4 x 10 Gb links up. Do the same thing with a 10 Gb switch, however, and you have 48 x 10 Gb links down and only 4 x 40 Gb links up, resulting in oversubscription.

Like many other issues in OpenStack, you can avoid this problem to a great extent with careful planning. Problems only arise when you are moving between racks, so plan to create "pods", each of which includes both storage and compute nodes. Generally, a pod is the size of a non-oversubscribed L2 domain.

Hardware for this example

In this example, you are looking at:

2 data switches (for HA), each with a minimum of 12 ports for data (2 x 1 Gb links per server x 6 servers)
1 x 1 Gb switch for IPMI (1 port per server x 6 servers)
Optional Cluster Management switch, plus a second for HA


Because your network will in all likelihood grow, it's best to choose 48 port switches. Also, as your network grows, you will need to consider uplinks and aggregation switches.

Summary
In general, your best bet is to choose a 2 socket server with a balance in I/O, CPU, Memory, and Disk that meets your project requirements. Look for 1U R-class or 2U high-density C-class servers. Some good options from Dell for compute nodes include:

Dell PowerEdge R620
Dell PowerEdge C6220 Rack Server
Dell PowerEdge R720XD (for high disk or IOPS requirements)

You may also want to consider systems from HP (http://www.hp.com/servers) or from a smaller systems builder like Aberdeen, a manufacturer that specializes in powerful, low-cost systems and storage servers (http://www.aberdeeninc.com).


Redeploying An Environment
Because Puppet is additive only, there is no ability to revert changes as you would in a typical application deployment. If a change needs to be backed out, you must explicitly add a configuration to reverse it, check the configuration in, and promote it to production using the pipeline. This means that if a breaking change does get deployed into production, typically a manual fix is applied, with the proper fix subsequently checked into version control. Fuel offers the ability to isolate code changes while developing a deployment and minimizes the headaches associated with maintaining multiple configurations through a single Puppet Master by creating what are called environments.

Environments
Puppet supports assigning nodes 'environments'. These environments can be mapped directly to your development, QA, and production life cycles, so it's a way to distribute code to nodes that are assigned to those environments.

On the Master node:

The Puppet Master tries to find modules using its modulepath setting, which by default is /etc/puppet/modules. It is common practice to set this value once in your /etc/puppet/puppet.conf. Environments expand on this idea and give you the ability to use different settings for different configurations. For example, you can specify several search paths. The following example dynamically sets the modulepath so Puppet will check a per-environment folder for a module before serving it from the main set:

[master]
modulepath = $confdir/$environment/modules:$confdir/modules

[production]
manifest = $confdir/manifests/site.pp

[development]
manifest = $confdir/$environment/manifests/site.pp

On the Slave node:

Once the slave node makes a request, the Puppet Master gets informed of its environment. If you don't specify an environment, the agent uses the default production environment. To set a slave-side environment, just specify the environment setting in the [agent] block of puppet.conf:

[agent]
environment = development
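With an environment-aware modulepath in place, a node can be pointed at a given environment either via puppet.conf (as shown above) or for a one-off run from the command line. A quick sketch, assuming the development environment shown above actually exists on the Puppet Master:

# one-off agent run against the development environment
puppet agent --test --environment development

# or pin it permanently on the slave node in /etc/puppet/puppet.conf:
#   [agent]
#   environment = development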

Deployment pipeline


1. Deploy

In order to deploy multiple environments that don't interfere with each other, you should specify the deployment_id option in the YAML file. It should be an even integer value in the range of 2-254. This value is used in dynamic environment-based tag generation. Fuel applies that tag globally to all resources and some services on each node.

2. Clean/Revert

At this stage you just need to make sure the environment has the original/virgin state.

3. Puppet node deactivate

This will ensure that any resources exported by that node will stop appearing in the catalogs served to the slave nodes:

puppet node deactivate <node>

where <node> is the fully qualified domain name as seen in puppet cert list --all.

You can deactivate nodes manually one by one, or execute the following command to automatically deactivate all nodes:

puppet cert list --all | awk '! /DNS:puppet/ { gsub(/"/, "", $2); print $2}' | xargs puppet node deactivate

4. Redeploy

Start the puppet agent again to apply the desired node configuration.

See also
http://puppetlabs.com/blog/a-deployment-pipeline-for-infrastructure/ http://docs.puppetlabs.com/guides/environment.html


Large Scale Deployments


When deploying large clusters (of 100 nodes or more) there are two basic bottlenecks: handling certificate signing requests (Puppet Master/Cobbler capacity) and downloading operating systems and other software, as described in the two sections below. Careful planning is key to eliminating these potential problem areas, but there's another way. Fuel takes care of these problems through caching and orchestration. We feel, however, that it's always good to have a sense of how to solve these problems should they appear.

Certificate signing requests and Puppet Master/Cobbler capacity


When deploying a large cluster, you may find that Puppet Master begins to have difficulty when you start exceeding 20 or more simultaneous requests. Part of this problem is because the initial process of requesting and signing certificates involves *.tmp files that can create conflicts. To solve this problem, you have two options: reduce the number of simultaneous requests, or increase the number of Puppet Master/Cobbler servers. The number of simultaneous certificate requests that are active can be controlled by staggering the Puppet agent run schedule. This can be accomplished through orchestration. You don't need extreme staggering (1 to 5 seconds will do) but if this method isn't practical, you can increase the number of Puppet Master/Cobbler servers. If you're simply overwhelming the Puppet Master process and not running into file conflicts, one way to get around this problem is to use Puppet Master with Thin as the backend component and nginx as a frontend component. This configuration dynamically scales the number of Puppet Master processes to better accommodate changing load. You can also increase the number of servers by creating a cluster that utilizes a round robin DNS configuration through a service like HAProxy. You will need to ensure that these nodes are kept in sync. For Cobbler, that means a combination of the --replicate switch, XMLRPC for metadata, rsync for profiles and distributions. Similarly, Puppet Master can be kept in sync with a combination of rsync (for modules, manifests, and SSL data) and database replication.
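One low-effort way to stagger agent runs without a full orchestration layer is Puppet's built-in splay option, which delays each agent by a random amount before it contacts the Master. A sketch of the relevant puppet.conf settings on the slave nodes; the values are illustrative, not recommendations:

# /etc/puppet/puppet.conf on each slave node
[agent]
splay = true          # add a random delay before each run
splaylimit = 120      # cap the random delay at 120 seconds
runinterval = 1800    # check in every 30 minutes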

Downloading of operating systems and other software


Large deployments can also suffer from a bottleneck in terms of the additional traffic created by downloading software from external sources. One way to avoid this problem is by increasing LAN bandwidth through bonding multiple gigabit interfaces. You might also want to consider 10G Ethernet trunking between infrastructure switches using CAT-6a or fiber cables to improve backend speeds to reduce latency and provide more overall pipe.

See also
Sizing Hardware for Production Deployment for more information on choosing networking equipment.


Create an OpenStack cluster using Fuel UI


Now let's look at performing an actual OpenStack deployment using Fuel.

Installing Fuel Master Node
  On Bare-Metal Environment
  On VirtualBox
  Changing Network Parameters Before the Installation
  Changing Network Parameters After Installation
  Name Resolution (DNS)
  PXE Booting Settings
  When Master Node Installation is Done
Understanding and Configuring the Network
  FlatDHCPManager (multi-host scheme)
  FlatDHCPManager (single-interface scheme)
  VLANManager
Fuel Deployment Schema
  Configuring the network
Network Issues
  On VirtualBox
  Timeout In Connection to OpenStack API From Client Applications
Red Hat OpenStack Deployment Notes
  Overview
  Deployment Requirements
  Red Hat Subscription Management (RHSM)
  Red Hat RHN Satellite
  Troubleshooting Red Hat OpenStack Deployment
Post-Deployment Check
  Benefits
  Running Post-Deployment Checks
  What To Do When a Test Fails
  Sanity Tests Description
  Smoke Tests Description


Installing Fuel Master Node


Fuel is distributed as both ISO and IMG images, each of which contains an installer for the Fuel Master node. The ISO image is used for CD media devices, iLO or similar remote access systems. The IMG file is used for USB memory sticks. Once installed, Fuel can be used to deploy and manage OpenStack clusters. It will assign IP addresses to the nodes, perform PXE boot and initial configuration, and provision OpenStack nodes according to their roles in the cluster.

On Bare-Metal Environment
To install Fuel on bare-metal hardware, you need to burn the provided ISO to a CD/DVD or create a bootable USB stick. You would then begin the installation process by booting from that media, very much like any other OS.

Burning an ISO to optical media is a deeply supported function on all OSes. For Linux there are several interfaces available such as Brasero or Xfburn, two of the more commonly pre-installed desktop applications. There are also a number for Windows such as ImgBurn and the open source InfraRecorder. Burning an ISO in Mac OS X is deceptively simple. Open Disk Utility from Applications > Utilities, drag the ISO into the disk list on the left side of the window and select it, insert blank media with enough room, and click Burn. If you prefer a utility, check out the open source Burn.

Writing the installation image to a bootable USB stick, however, is an entirely different matter. Canonical suggests PenDriveLinux, which is a GUI tool for Windows. On Windows, you can write the installation image with a number of different utilities. The following list links to some of the more popular ones, and they are all available at no cost:

Win32 Disk Imager
ISOtoUSB

On Linux, dd can be used instead (see the sketch at the end of this section).

After the installation is complete, you will need to allocate bare-metal nodes for your OpenStack cluster, put them on the same L2 network as the Master node, and PXE boot them. The UI will discover them and make them available for installing OpenStack.
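For reference, the sketch below writes the Fuel IMG file to a USB stick on Linux with dd. The device name is an assumption; double-check it first (for example with lsblk), because dd overwrites the target device without asking.

# Replace /dev/sdX with your USB stick's device, not a partition (e.g. /dev/sdb, not /dev/sdb1)
sudo dd if=fuel.img of=/dev/sdX bs=4M
sync   # flush buffers before removing the stick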

On VirtualBox
If you are going to evaluate Fuel on VirtualBox, you should know that we provide a set of scripts that create and configure all of the required VMs for you, including the Master node and Slave nodes for OpenStack itself. It's a very simple, single-click installation.

Note
These scripts are not supported on Windows, but you can still test on VirtualBox by creating the VMs by yourself. See Manual Mode for more details.

The requirements for running Fuel on VirtualBox are:

A host machine with Linux or Mac OS.


The scripts have been tested on Mac OS 10.7.5, Mac OS 10.8.3, Ubuntu 12.04 and Ubuntu 12.10.
VirtualBox 4.2.12 (or later) must be installed with the extension pack. Both can be downloaded from http://www.virtualbox.org/.
8 GB+ of RAM to handle 4 VMs for a non-HA OpenStack installation (1 Master node, 1 Controller node, 1 Compute node, 1 Cinder node) or 5 VMs for an HA OpenStack installation (1 Master node, 3 Controller nodes, 1 Compute node)

Automatic Mode

When you unpack the scripts, you will see the following important files and folders:

iso
  This folder needs to contain a single ISO image for Fuel. Once you have downloaded the ISO from the portal, copy or move it into this directory.

config.sh
  This file contains the configuration, which can be fine-tuned. For example, you can select how many virtual nodes to launch, as well as how much memory to give them.

launch.sh
  Once executed, this script will pick up an image from the iso directory, create a VM, mount the image to this VM, and automatically install the Fuel Master node. After installation of the Master node, the script will create Slave nodes for OpenStack and boot them via PXE from the Master node. Finally, the script will give you the link to access the Web-based UI for the Master node so you can start installation of an OpenStack cluster.

Manual Mode

Note
However, these manual steps will allow you to set up the evaluation environment for vanilla OpenStack release only. RHOS installation is not possible. To download and deploy RedHat OpenStack you need to use automated VirtualBox helper scripts or install Fuel On Bare-Metal Environment.

If you cannot or would rather not run our helper scripts, you can still run Fuel on VirtualBox by following these steps.

Master Node Deployment

First, create the Master node VM.

1. Configure the host-only interface vboxnet0 in VirtualBox.

   IP address: 10.20.0.1


   Interface mask: 255.255.255.0
   DHCP disabled

2. Create a VM for the Master node with the following parameters:

   OS Type: Linux, Version: Red Hat (64bit)
   RAM: 1024 MB
   HDD: 20 GB, with dynamic disk expansion
   CDROM: mount Fuel ISO
   Network 1: host-only interface vboxnet0

3. Power on the VM in order to start the installation.

4. Wait for the Welcome message with all the information needed to log in to the UI of Fuel.

Adding Slave Nodes

Next, create the Slave nodes where OpenStack needs to be installed.

1. Create 3 or 4 additional VMs, depending on your needs, with the following parameters:

   OS Type: Linux, Version: Red Hat (64bit)
   RAM: 1024 MB
   HDD: 30 GB, with dynamic disk expansion
   Network 1: host-only interface vboxnet0, PCnet-FAST III device

2. Set the priority for the network boot:


3. Configure the network adapter on each VM:


Changing Network Parameters Before the Installation


You can change the network settings for the Fuel (PXE booting) network, which is 10.20.0.2/24 with gateway 10.20.0.1 by default.

In order to do so, press the <TAB> key on the very first installation screen which says "Welcome to Fuel Installer!" and update the kernel options. For example, to use 192.168.1.10/24 IP address for the Master node and 192.168.1.1 as the gateway and DNS server you should change the parameters to those shown in the image below:


When you're finished making changes, press the <ENTER> key and wait for the installation to complete.

Changing Network Parameters After Installation


It is still possible to configure other interfaces, or add 802.1Q sub-interfaces, on the Master node to be able to access it from your network if required. It is easy to do via the standard network configuration scripts for CentOS. When the installation is complete, you can modify the /etc/sysconfig/network-scripts/ifcfg-eth\* scripts. For example, if the eth1 interface is on the L2 network which is planned for PXE booting, eth2 is the interface connected to your office network switch, and eth0 is not in use, then the settings can be the following:

/etc/sysconfig/network-scripts/ifcfg-eth0:

DEVICE=eth0
ONBOOT=no

/etc/sysconfig/network-scripts/ifcfg-eth1:

DEVICE=eth1
ONBOOT=yes
HWADDR=<your MAC>


..... (other settings in your config) .....
PEERDNS=no
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0

/etc/sysconfig/network-scripts/ifcfg-eth2:

DEVICE=eth2
ONBOOT=yes
HWADDR=<your MAC>
..... (other settings in your config) .....
PEERDNS=no
IPADDR=172.18.0.5
NETMASK=255.255.255.0

Warning
Once IP settings are set at the boot time for Fuel Master node, they should not be changed during the whole lifecycle of Fuel.

After modification of the network configuration files, it is required to apply the new configuration:

service network restart

Now you should be able to connect to the Fuel UI from your network at http://172.18.0.5:8000/

Name Resolution (DNS)


During Master node installation, it is assumed that there is a recursive DNS service on 10.20.0.1. If you want to make it possible for Slave nodes to resolve public names, you need to change this default value to point to an actual DNS service. To make the change, run the following command on the Fuel Master node (replace the IP with your actual DNS server):

echo "nameserver 172.0.0.1" > /etc/dnsmasq.upstream
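After changing /etc/dnsmasq.upstream you can quickly confirm that public names now resolve from the Master node. A minimal sketch, assuming the host utility (bind-utils) is installed and using an example domain:

# confirm the upstream resolver that dnsmasq will forward to
cat /etc/dnsmasq.upstream

# check that a public name resolves through the node's configured resolver
host mirantis.com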

PXE Booting Settings


By default, eth0 on Fuel Master node serves PXE requests. If you are planning to use another interface, then it is required to modify dnsmasq settings (which acts as DHCP server). Edit the file /etc/cobbler/dnsmasq.template, find the line interface=eth0 and replace the interface name with the one you want to use.

Launch the following command to synchronize the cobbler service afterwards:

cobbler sync

During synchronization, cobbler builds the actual dnsmasq configuration file /etc/dnsmasq.conf from the template /etc/cobbler/dnsmasq.template. That is why you should not edit /etc/dnsmasq.conf directly; Cobbler rewrites it each time it is synchronized. If you want to use virtual machines to launch Fuel, then you have to be sure that dnsmasq on the Master node is configured to support the PXE client you use on your virtual machines. We enabled the dhcp-no-override option because without it dnsmasq tries to move the PXE filename and PXE servername special fields into DHCP options. Not all PXE implementations can recognize those options and therefore they would not be able to boot. For example, CentOS 6.4 uses the gPXE implementation instead of the more advanced iPXE by default.

When Master Node Installation is Done


Once the Master node is installed, power on all other nodes and log in to the Fuel UI. Slave nodes will be booted in bootstrap mode (CentOS-based Linux in memory) via PXE and you will see notifications in the user interface about discovered nodes. This is the point when you can create an environment, add nodes into it, and start configuration. Networking configuration is the most complicated part, so please read the networking section of the documentation carefully.


Understanding and Configuring the Network


OpenStack clusters use several types of network managers: FlatDHCPManager, VLANManager and Neutron (formerly Quantum). The current version of Fuel UI supports only two (FlatDHCP and VLANManager), but Fuel CLI supports all three. For more information about how the first two network managers work, you can read these two resources: OpenStack Networking FlatManager and FlatDHCPManager Openstack Networking for Scalability and Multi-tenancy with VLANManager

FlatDHCPManager (multi-host scheme)


The main idea behind the flat network manager is to configure a bridge (i.e. br100) on every Compute node and have one of the machine's host interfaces connect to it. Once the virtual machine is launched its virtual interface will connect to that bridge as well. The same L2 segment is used for all OpenStack projects, which means that there is no L2 isolation between virtual hosts, even if they are owned by separate projects, and there is only one flat IP pool defined for the cluster. For this reason it is called the Flat manager. The simplest case here is as shown on the following diagram. Here the eth1 interface is used to give network access to virtual machines, while eth0 interface is the management network interface.

Fuel deploys OpenStack in FlatDHCP mode with the so-called multi-host feature enabled. Without this feature enabled, network traffic from each VM would go through the single gateway host, which basically becomes a single point of failure. With the feature enabled, each Compute node becomes a gateway for all the VMs running on that host, providing a balanced networking solution. In this case, if one of the Compute nodes goes down, the rest of the environment remains operational. The current version of Fuel uses VLANs, even for the FlatDHCP network manager. On the Linux host, it is implemented in such a way that it is not the physical network interfaces that are connected to the bridge, but the VLAN interface (i.e. eth0.102).
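On a deployed Compute node you can inspect this layout for yourself; the interface and bridge names below (eth0.102, br100) are illustrative and depend on the VLAN IDs and network settings chosen in the UI:

# VLAN sub-interface that carries the fixed (VM) network on this host
ip -d link show eth0.102

# Linux bridge that the VMs' virtual interfaces are plugged into
brctl show br100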


FlatDHCPManager (single-interface scheme)

Therefore all switch ports where Compute nodes are connected must be configured as tagged (trunk) ports with required VLANs allowed (enabled, tagged). Virtual machines will communicate with each other on L2 even if they are on different Compute nodes. If the virtual machine sends IP packets to a different network, they will be routed on the host machine according to the routing table. The default route will point to the gateway specified on the networks tab in the UI as the gateway for the Public network.

VLANManager
VLANManager mode is more suitable for large scale clouds. The idea behind this mode is to separate groups of virtual machines, owned by different projects, on different L2 layers. In VLANManager this is done by tagging IP frames, or simply speaking, by VLANs. It allows virtual machines inside the given project to communicate with each other and not to see any traffic from VMs of other projects. Switch ports must be configured as tagged (trunk) ports to allow this scheme to work.



Fuel Deployment Schema


One of the physical interfaces on each host has to be chosen to carry VM-to-VM traffic (the fixed network), and switch ports must be configured to allow tagged traffic to pass through. OpenStack Compute nodes will untag the IP packets and send them to the appropriate VMs. Apart from simplifying the configuration of the VLAN Manager, Fuel does not add any known limitations in this particular networking mode.

Configuring the network


Once you choose a networking mode (FlatDHCP/VLAN), you must configure equipment accordingly. The diagram below shows an example configuration.

Fuel operates with the following logical networks:

Fuel network
  Used for internal Fuel communications only and PXE booting (untagged on the scheme);
Public network
  Used to get access from virtual machines to the outside world, Internet or office network (VLAN 101 on the scheme);
Floating network
  Used to get access to virtual machines from outside (shared L2-interface with the Public network; in this case it's VLAN 101);
Management network
  Used for internal OpenStack communications (VLAN 102 on the scheme);
Storage network
  Used for storage traffic (VLAN 103 on the scheme);
Fixed network


  One (for flat mode) or more (for VLAN mode) virtual machine networks (VLAN 104 on the scheme).

Mapping logical networks to physical interfaces on servers

Fuel allows you to use different physical interfaces to handle different types of traffic. When a node is added to the environment, click at the bottom line of the node icon. In the detailed information window, click the "Network Configuration" button to open the physical interfaces configuration screen.

On this screen you can drag-and-drop logical networks to physical interfaces according to your network setup. All networks are presented on the screen, except Fuel. It runs on the physical interface from which node was initially PXE booted, and in the current version it is not possible to map it on any other physical interface. Also,


once the network is configured and OpenStack is deployed, you may not modify network settings, even to move a logical network to another physical interface or VLAN number.

Switch

Fuel can configure hosts, but switch configuration is still a manual task. Unfortunately the set of configuration steps, and even the terminology used, is different for different vendors, so we will try to provide vendor-agnostic information on how traffic should flow and leave the vendor-specific details to you. We will provide an example for a Cisco switch.

First of all, you should configure access ports to allow non-tagged PXE booting connections from all Slave nodes to the Fuel node. We refer to this network as the Fuel network. By default, the Fuel Master node uses the eth0 interface to serve PXE requests on this network. So if that's left unchanged, you have to set the switch port for eth0 of the Fuel Master node to access mode. We recommend that you use the eth0 interfaces of all other nodes for PXE booting as well. Corresponding ports must also be in access mode.

Taking into account that this is the network for PXE booting, do not mix this L2 segment with any other network segments. Fuel runs a DHCP server, and if there is another DHCP server on the same L2 network segment, both the company's infrastructure and Fuel's will be unable to function properly. You also need to configure each of the switch's ports connected to nodes as an "STP Edge port" (or a "spanning-tree port fast trunk", according to Cisco terminology). If you don't do that, DHCP timeout issues may occur.

As long as the Fuel network is configured, Fuel can operate. Other networks are required for OpenStack environments, and currently all of these networks live in VLANs over one or multiple physical interfaces on a node. This means that the switch should pass tagged traffic, and untagging is done on the Linux hosts.

Note
For the sake of simplicity, all the VLANs specified on the networks tab of the Fuel UI should be configured on switch ports, pointing to Slave nodes, as tagged.

Of course, it is possible to specify as tagged only certain ports for certain nodes. However, in the current version, all existing networks are automatically allocated to each node, with any role, and the network check will also verify that tagged traffic passes, even if some nodes do not require this check (for example, Cinder nodes do not need fixed network traffic).

This is enough to deploy the OpenStack environment. However, from a practical standpoint, it's still not really usable because there is no connection to other corporate networks yet. To make that possible, you must configure uplink port(s).

One of the VLANs may carry the office network. To provide access to the Fuel Master node from your network, any other free physical network interface on the Fuel Master node can be used and configured according to your network rules (static IP or DHCP). The same network segment can be used for Public and Floating ranges. In this case, you must provide the corresponding VLAN ID and IP ranges in the UI. One Public IP per node will be used to SNAT traffic out of the VMs network, and one or more floating addresses per VM instance will be used to get access to the VM from your network, or even the global Internet. To have a VM visible from the Internet is similar to having it visible from the corporate network - corresponding IP ranges and VLAN IDs must be specified for the Floating and Public networks. One current limitation of Fuel is that the user must use the same L2 segment for both Public and Floating networks.

Example configuration for one of the ports on a Cisco switch:

interface GigabitEthernet0/6
  description s0_eth0 jv                     # switch port description
  switchport trunk encapsulation dot1q       # enables VLANs
  switchport trunk native vlan 262           # access port, untags VLAN 262
  switchport trunk allowed vlan 100,102,104  # 100,102,104 VLANs are passed with tags
  switchport mode trunk                      # To allow more than 1 VLAN on the port
  spanning-tree portfast trunk               # STP Edge port to skip network loop checks
                                             # (to prevent DHCP timeout issues)
vlan 262,100,102,104                         # Might be needed for enabling VLANs

Router

To make it possible for VMs to access the outside world, you must have an IP address set on a router in the Public network. In the examples provided, that IP is 12.0.0.1 in VLAN 101. The Fuel UI has a special field on the networking tab for the gateway address. As soon as deployment of OpenStack is started, the network on the nodes is reconfigured to use this gateway IP as the default gateway. If Floating addresses are from another L3 network, then you have to configure the IP address (or even multiple IPs if Floating addresses are from more than one L3 network) for them on the router as well. Otherwise, Floating IPs on nodes will be inaccessible.

Deployment configuration to access OpenStack API and VMs from host machine

Helper scripts for VirtualBox create the network adapters eth0, eth1, eth2, which are represented on the host machine as vboxnet0, vboxnet1, vboxnet2 correspondingly, and assign IP addresses to the adapters: vboxnet0 - 10.20.0.1/24, vboxnet1 - 172.16.1.1/24, vboxnet2 - 172.16.0.1/24.

For the demo environment on VirtualBox, the first network adapter is used to run Fuel network traffic, including PXE discovery. To access the Horizon and OpenStack RESTful API via the Public network from the host machine, it is required to have a route from your host to the Public IP address on the OpenStack Controller. Also, if access to the Floating IP of a VM is required, you must have a route to the Floating IP on the Compute host, which is bound to the Public interface there. To make this configuration possible on the VirtualBox demo environment, the user has to run the Public network untagged. On the image below you can see the configuration of Public and Floating networks which will allow this to happen.


By default, the Public and Floating networks run on the first network interface. You need to change this, as you can see in the image below. Make sure you change it on every node.


If you use the default configuration in the VirtualBox scripts, and follow the exact same settings on the images above, you should be able to access OpenStack Horizon via the Public network after the installation. If you want to enable Internet access on the VMs provisioned by OpenStack, you have to configure NAT on the host machine. When packets reach the vboxnet1 interface, according to the OpenStack settings tab, they have to know the way out of the host. For Ubuntu, the following command, executed on the host, can make this happen:
sudo iptables -t nat -A POSTROUTING -s 172.16.1.0/24 \! -d 172.16.1.0/24 -j MASQUERADE

To access the VMs managed by OpenStack, you need to provide IP addresses from the Floating IP range. When the OpenStack cluster is deployed and a VM is provisioned there, you have to associate one of the Floating IP addresses from the pool with this VM, whether in Horizon or via the Nova CLI. By default, OpenStack blocks all traffic to the VM. To allow connectivity to the VM, you need to configure security groups. This can be done in Horizon, or from the OpenStack Controller using the following commands:

. /root/openrc
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
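Associating a Floating IP from the Nova CLI follows the same pattern. A short sketch, run from the Controller after sourcing the credentials; the instance name and the allocated address are placeholders:

. /root/openrc
nova floating-ip-create                            # allocate an address from the Floating pool
nova add-floating-ip <instance-name-or-id> <allocated-floating-ip>
nova list                                          # the instance should now show the Floating IP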


IP ranges for the Public and Management networks (172.16.*.*) are defined in the config.sh script. If the default values don't fit your needs, you are free to change them, but do so before the installation of the Fuel Master node.


Network Issues
Fuel has a built-in capability to run a network check before or after OpenStack deployment. Currently it can check connectivity between nodes within the configured VLANs on the configured server interfaces. The image below shows a sample result of such a check. Using this simple table it is easy to see which interfaces do not receive certain VLAN IDs. Usually this means that a switch (or multiple switches) is not configured correctly and does not allow certain tagged traffic to pass through.

On VirtualBox
The scripts provided for quick Fuel setup create 3 host-only interface adapters. Networking effectively works as 3 bridges, and each VM interface is connected to only one of them. This means there is only L2 connectivity between VM interfaces with the same name. If you try to move, for example, the management network to eth1 on the Controller node, and the same network to eth2 on the Compute node, then there will be no connectivity between OpenStack services, in spite of them being configured to live on the same VLAN. It is very easy to validate network settings before deployment by clicking the "Verify Networks" button. If you need to access the OpenStack REST API over the Public network, the VNC console of VMs, Horizon in HA mode, or the VMs themselves, refer to this section: Deployment configuration to access OpenStack API and VMs from host machine.

Timeout In Connection to OpenStack API From Client Applications


If you use Java, Python or any other code to work with OpenStack API, all connections should be done over OpenStack Public network. To explain why we can not use Fuel network, let's try to run nova client with debug option enabled:
[root@controller-6 ~]# nova --debug list REQ: curl -i http://192.168.0.2:5000/v2.0/tokens -X POST -H "Content-Type: appli cation/json" -H "Accept: application/json" -H "User-Agent: python-novaclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "admin"}}}' INFO (connectionpool:191) Starting new HTTP connection (1): 192.168.0.2 DEBUG (connectionpool:283) "POST /v2.0/tokens HTTP/1.1" 200 2702 RESP: [200] {'date': 'Tue, 06 Aug 2013 13:01:05 GMT', 'content-type': 'applicati on/json', 'content-length': '2702', 'vary': 'X-Auth-Token'}


RESP BODY: {"access": {"token": {"issued_at": "2013-08-06T13:01:05.616481", "exp ires": "2013-08-07T13:01:05Z", "id": "c321cd823c8a4852aea4b870a03c8f72", "tenant ": {"description": "admin tenant", "enabled": true, "id": "8eee400f7a8a4f35b7a92 bc6cb54de42", "name": "admin"}}, "serviceCatalog": [{"endpoints": [{"adminURL": "http://192.168.0.2:8774/v2/8eee400f7a8a4f35b7a92bc6cb54de42", "region": "Region One", "internalURL": "http://192.168.0.2:8774/v2/8eee400f7a8a4f35b7a92bc6cb54de4 2", "id": "6b9563c1e37542519e4fc601b994f980", "publicURL": "http://172.16.1.2:87 74/v2/8eee400f7a8a4f35b7a92bc6cb54de42"}], "endpoints_links": [], "type": "compu te", "name": "nova"}, {"endpoints": [{"adminURL": "http://192.168.0.2:8080", "re gion": "RegionOne", "internalURL": "http://192.168.0.2:8080", "id": "4db0e11de35 74c889179f499f1e53c7e", "publicURL": "http://172.16.1.2:8080"}], "endpoints_link s": [], "type": "s3", "name": "swift_s3"}, {"endpoints": [{"adminURL": "http://1 92.168.0.2:9292", "region": "RegionOne", "internalURL": "http://192.168.0.2:9292 ", "id": "960a3ad83e4043bbbc708733571d433b", "publicURL": "http://172.16.1.2:929 2"}], "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [ {"adminURL": "http://192.168.0.2:8776/v1/8eee400f7a8a4f35b7a92bc6cb54de42", "reg ion": "RegionOne", "internalURL": "http://192.168.0.2:8776/v1/8eee400f7a8a4f35b7 a92bc6cb54de42", "id": "055edb2aface49c28576347a8c2a5e35", "publicURL": "http:// 172.16.1.2:8776/v1/8eee400f7a8a4f35b7a92bc6cb54de42"}], "endpoints_links": [], " type": "volume", "name": "cinder"}, {"endpoints": [{"adminURL": "http://192.168. 0.2:8773/services/Admin", "region": "RegionOne", "internalURL": "http://192.168. 0.2:8773/services/Cloud", "id": "1e5e51a640f94e60aed0a5296eebdb51", "publicURL": "http://172.16.1.2:8773/services/Cloud"}], "endpoints_links": [], "type": "ec2" , "name": "nova_ec2"}, {"endpoints": [{"adminURL": "http://192.168.0.2:8080/", "region": "RegionOne", "internalURL": "http://192.168.0.2:8080/v1/AUTH_8eee400f 7a8a4f35b7a92bc6cb54de42", "id": "081a50a3c9fa49719673a52420a87557", "publicURL ": "http://172.16.1.2:8080/v1/AUTH_8eee400f7a8a4f35b7a92bc6cb54de42"}], "endpoi nts_links": [], "type": "object-store", "name": "swift"}, {"endpoints": [{"admi nURL": "http://192.168.0.2:35357/v2.0", "region": "RegionOne", "internalURL": " http://192.168.0.2:5000/v2.0", "id": "057a7f8e9a9f4defb1966825de957f5b", "publi cURL": "http://172.16.1.2:5000/v2.0"}], "endpoints_links": [], "type": "identit y", "name": "keystone"}], "user": {"username": "admin", "roles_links": [], "id" : "717701504566411794a9cfcea1a85c1f", "roles": [{"name": "admin"}], "name": "ad min"}, "metadata": {"is_admin": 0, "roles": ["90a1f4f29aef48d7bce3ada631a54261" ]}}} REQ: curl -i http://172.16.1.2:8774/v2/8eee400f7a8a4f35b7a92bc6cb54de42/servers/ detail -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" H "Accept: application/json" -H "X-Auth-Token: c321cd823c8a4852aea4b870a03c8f72" INFO (connectionpool:191) Starting new HTTP connection (1): 172.16.1.2

Even though the initial connection was to 192.168.0.2, the client then tries to access the Public network for the Nova API. The reason is that Keystone returns the list of OpenStack service URLs, and for production-grade deployments it is required to access services over the public network.
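You can confirm which URLs Keystone hands out by inspecting the service catalog from the Controller. A quick check with the keystone client shipped in this era, sourcing the admin credentials first:

. /root/openrc
keystone endpoint-list    # the publicurl entries are what external clients will be redirected to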


See also
Deployment configuration to access OpenStack API and VMs from host machine, if you want to configure the installation on VirtualBox so that these issues are resolved.


Red Hat OpenStack Deployment Notes


Overview
Fuel can deploy OpenStack using Red Hat OpenStack packages and Red Hat Enterprise Linux Server as a base operating system. Because Red Hat has exclusive distribution rights for its products, Fuel cannot be bundled with Red Hat OpenStack directly. To work around this issue, you can enter your Red Hat account credentials in order to download Red Hat OpenStack Platform. The necessary components will be prepared and loaded into Cobbler. There are two methods Fuel supports for obtaining Red Hat OpenStack packages: Red Hat Subscription Management (RHSM) (default) Red Hat RHN Satellite

Deployment Requirements
Minimal requirements:

Red Hat account (https://access.redhat.com)
Red Hat OpenStack entitlement (one per node)
Internet access for the Fuel Master node

Optional requirements:

Red Hat Satellite Server
Configured Satellite activation key

Red Hat Subscription Management (RHSM)


Benefits

No need to handle large ISOs or physical media.
Register all your clients with just a single username and password.
Automatically register the necessary products required for installation and download a full cache.
Download only the latest packages.
Download only necessary packages.

Considerations

Must observe Red Hat licensing requirements after deployment.
Package download time is dependent on network speed (20-60 minutes).


See also
Overview of Subscription Management - Red Hat Customer Portal

Red Hat RHN Satellite


Benefits

Faster download of Red Hat OpenStack packages
Register all your clients with an activation key
More granular control of the package set for your installation
Registered OpenStack hosts don't need external network access
Easier to consume for large enterprise customers

Considerations

Red Hat RHN Satellite is a separate offering from Red Hat and requires dedicated hardware
Still requires Red Hat Subscription Manager and Internet access to download registration packages (just for the Fuel Master host)

What you need

Red Hat account (https://access.redhat.com)
Red Hat OpenStack entitlement (one per host)
Internet access for the Fuel Master host
Red Hat Satellite Server
Configured Satellite activation key

Your RHN Satellite activation key must be configured with the following channels:

RHEL Server High Availability
RHEL Server Load Balancer
RHEL Server Optional
RHEL Server Resilient Storage
RHN Tools for RHEL
Red Hat OpenStack 3.0


See also
Red Hat | Red Hat Network Satellite

Fuel looks for the following RHN Satellite channels:

rhel-x86_64-server-6
rhel-x86_64-server-6-ost-3
rhel-x86_64-server-ha-6
rhel-x86_64-server-lb-6
rhel-x86_64-server-rs-6

Note
If you create cloned channels, leave these channel strings intact.

Troubleshooting Red Hat OpenStack Deployment


Issues downloading from Red Hat Subscription Manager

If you receive an error from the Fuel UI regarding Red Hat OpenStack download issues, ensure that you have a valid subscription to the Red Hat OpenStack 3.0 product. This product is separate from standard Red Hat Enterprise Linux. You can check by going to https://access.redhat.com and checking Active Subscriptions. Contact your Red Hat sales representative to get the proper subscriptions associated with your account. If you are still encountering issues, contact Mirantis Support.

Issues downloading from Red Hat RHN Satellite

If you receive an error from the Fuel UI regarding Red Hat OpenStack download issues, ensure that you have all the necessary channels available on your RHN Satellite Server. The correct list is here. If you are missing these channels, please contact your Red Hat sales representative to get the proper subscriptions associated with your account.

RHN Satellite error: "rhel-x86_64-server-rs-6 not found"

This means your Red Hat Satellite Server has run out of available entitlements or your licenses have expired. Check your RHN Satellite to ensure there is at least one available entitlement for each of the required channels. If any of these channels are missing or you need to make changes to your account, please contact your Red Hat sales representative to get the proper subscriptions associated with your account.

Yum error: Cannot retrieve repository metadata (repomd.xml) for repository: rhel-x86_64-server-6

This can be caused by many problems. It can happen if your SSL certificate does not match the hostname of your RHN Satellite Server, or if you configured Fuel to use an IP address during deployment. Using an IP address is not recommended; you should use a fully qualified domain name for your RHN Satellite Server. You may find solutions to your issues with repomd.xml at the Red Hat Knowledgebase, or contact Red Hat Support.

GPG Key download failed. Looking for URL your-satellite-server/pub/RHN-ORG-TRUSTED-SSL-CERT

This issue has two known causes. If you are using VirtualBox, DNS may not be properly configured. Ensure that your upstream DNS resolver is correct in /etc/dnsmasq.upstream. This setting is configured during the bootstrap process, but it is not possible to validate resolution of internal DNS names at that time. This may also be caused by other DNS issues, the local network, or an incorrect spelling of the RHN Satellite Server name. Check your local network and settings and try again.
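A quick way to narrow down the DNS part of this failure is to test resolution of the Satellite hostname both through the Master node's own resolver and directly against a known-good resolver. The hostname below is a placeholder, and the public resolver address is just an example:

cat /etc/dnsmasq.upstream                        # upstream resolver Fuel will forward to
host your-satellite-server.example.com           # resolve via the node's configured DNS
host your-satellite-server.example.com 8.8.8.8   # compare against a known-good public resolver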


Post-Deployment Check
On occasion, even a successful deployment may result in some OpenStack components not working correctly. If this happens, Fuel offers the ability to perform post-deployment checks to verify operations. Part of Fuel's goal is to provide easily accessible status information about the most commonly used components and the most recently performed actions. To perform these checks you will use Sanity and Smoke checks, as described below:

Sanity Checks
  Reveal whether the overall system is functional. If they fail, you will most likely need to restart some services to operate OpenStack.

Smoke Checks
  Dive in a little deeper and reveal networking, system-requirement, and functionality issues.

Sanity Checks will likely be the point on which the success of your deployment pivots, but it is critical to pay close attention to all information collected from these tests. Another way to look at these tests is by their names. Sanity Checks are intended to assist in maintaining your sanity. Smoke Checks tell you where the fires are so you can put them out strategically instead of firehosing the entire installation.

Benefits
Using post-deployment checks helps you identify potential issues which may impact the health of a deployed system. All post-deployment checks provide detailed descriptions about failed operations and tell you which component or components are not working properly. Previously, performing these checks manually would have consumed a great deal of time. Now, with these checks the process will take only a few minutes. Aside from verifying that everything is working correctly, the process will also determine how quickly your system works. Post-deployment checks continue to be useful, for example after sizable changes are made in the system you can use the checks to determine if any new failure points have been introduced.

Running Post-Deployment Checks


Now, let's take a closer look at what should be done to execute the tests and to understand if something is wrong with your OpenStack cluster.

As you can see on the image above, the Fuel UI now contains a Healthcheck tab, indicated by the Heart icon. All of the post-deployment checks are displayed on this tab. If your deployment was successful, you will see a list of tests that show a green Thumbs Up in the last column. The Thumb indicates the status of the component. If you see a detailed message and a Thumbs Down, that component has failed in some manner, and the details will indicate where the failure was detected. All tests can be run on different environments, which you select on the main page of the Fuel UI. You can run checks in parallel on different environments.

Each test contains information on its estimated and actual duration. We have included information about test processing time from our own tests and indicate this in each test. Note that we show average times from the slowest to the fastest systems we have tested, so your results will vary.


Once a test is complete the results will appear in the Status column. If there was an error during the test the UI will display the error message below the test name. To assist in the troubleshooting process, the test scenario is displayed under the failure message and the failed step is highlighted. You will find more detailed information on these tests later in this section. An actual test run looks like this:

What To Do When a Test Fails


If a test fails, there are several ways to investigate the problem. You may prefer to start in the Fuel UI, since its feedback is directly related to the health of the deployment. To do so, start by checking the following:

Under the Healthcheck tab
In the OpenStack Dashboard
In the test execution logs (/var/log/ostf-stdout.log)
In the individual OpenStack components' logs

Of course, there are many different conditions that can lead to system breakdowns, but there are some simple things that can be examined before you dig deep. The most common issues are:

Not all OpenStack services are running
Some defined quota has been exceeded
Something has been broken in the network configuration
There is a general lack of resources (memory/disk space)

The first thing to be done is to ensure all OpenStack services are up and running. To do this, you can run the sanity test set, or execute the following command on your Controller node:

nova-manage service list

If any service is off (has XXX status), you can restart it using this command:

service openstack-<service name> restart

If all services are on but you're still experiencing some issues, you can gather information from the OpenStack Dashboard (exceeded number of instances, fixed IPs, etc.). You may also read the logs generated by the tests, which are stored at /var/log/ostf-stdout.log, or go to /var/log/<component> and check whether any operation has ERROR status. If it looks like the last item, you may have underprovisioned your environment and should check your math and your project requirements.
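When you get to the log-reading step, a couple of commands on the Controller node go a long way. The paths below are the defaults referenced in this guide and may differ in your environment:

# most recent output from the post-deployment (OSTF) test run
tail -n 100 /var/log/ostf-stdout.log

# recent errors across the core OpenStack component logs
grep -R "ERROR" /var/log/nova /var/log/glance /var/log/keystone | tail -n 50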


Sanity Tests Description


Sanity checks work by sending a query to all OpenStack components to get a response back from them. Many of these tests are simple in that they ask each service for a list of its associated objects and wait for a response. The response can be something, nothing, an error, or a timeout, so there are several ways to determine if a service is up. The following list shows what test is used for each service:

Instances list availability


Test checks that Nova component can return list of instances. Test scenario: 1. Request list of instances. 2. Check returned list is not empty.

Images list availability


Test checks that Glance component can return list of images. Test scenario: 1. Request list of images. 2. Check returned list is not empty.

Volumes list availability


Test checks that the Cinder component can return a list of volumes. Test scenario: 1. Request list of volumes. 2. Check returned list is not empty.

Snapshots list availability


Test checks that Glance component can return list of snapshots. Test scenario: 1. Request list of snapshots. 2. Check returned list is not empty.

Flavors list availability


Test checks that Nova component can return list of flavors. Test scenario: 1. Request list of flavors. 2. Check returned list is not empty.


Limits list availability


Test checks that Nova component can return list of absolute limits. Test scenario: 1. Request list of limits. 2. Check response.

Services list availability


Test checks that Nova component can return list of services. Test scenario: 1. Request list of services. 2. Check returned list is not empty.

User list availability


Test checks that the Keystone component can return a list of users. Test scenario: 1. Request list of users. 2. Check returned list is not empty.

Services execution monitoring


Test checks that all of the expected services are on, meaning the test will fail if any of the listed services is in XXX status. Test scenario: 1. Connect to a controller via SSH. 2. Execute nova-manage service list command. 3. Check there are no failed services.

DNS availability
Test checks that DNS is available. Test scenario: 1. Connect to a Controller node via SSH. 2. Execute host command for the controller IP. 3. Check DNS name can be successfully resolved.

Networks availability
Test checks that Nova component can return list of available networks.


Test scenario: 1. Request list of networks. 2. Check returned list is not empty.

Ports availability
Test checks that Nova component can return list of available ports. Test scenario: 1. Request list of ports. 2. Check returned list is not empty. For more information refer to nova cli reference.
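Most of the sanity checks above map directly to single CLI calls, so when one fails you can reproduce it by hand from the Controller node. A sketch, sourcing the credentials first; the exact clients shipped with this release may format their output differently:

. /root/openrc
nova list                   # instances
glance image-list           # images
cinder list                 # volumes
nova flavor-list            # flavors
keystone user-list          # users
nova-manage service list    # per-service state, as used by the monitoring check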

Smoke Tests Description


Smoke tests verify how your system handles basic OpenStack operations under normal circumstances. The Smoke test series uses timeout tests for operations that have a known completion time to determine if there is any smoke, and thusly fire. An additional benefit of the Smoke Test series is that you get to see how fast your environment is the first time you run them. All tests use basic OpenStack services (Nova, Glance, Keystone, Cinder, etc.), therefore if any of them is off, the test using it will fail. It is recommended to run all sanity checks prior to your smoke checks to verify that all services are alive. This helps ensure that you don't get any false negatives. The following is a description of each smoke test available:

Flavor creation
Test checks that low requirements flavor can be created. Target component: Nova Scenario: 1. Create small-size flavor. 2. Check created flavor has expected name. 3. Check flavor disk has expected size. For more information refer to nova cli reference.

Volume creation
Test checks that a small-sized volume can be created. Target component: Compute Scenario: 1. Create a new small-size volume. 2. Wait for "available" volume status. 3. Check response contains "display_name" section.


4. Create instance and wait for "Active" status 5. Attach volume to instance. 6. Check volume status is "in use". 7. Get created volume information by its id. 8. Detach volume from instance. 9. Check volume has "available" status. 10. Delete volume. If you see that the created volume is in ERROR status, it can mean that you've exceeded the maximum number of volumes that can be created. You can check this in the OpenStack dashboard. For more information refer to volume management instructions.

Instance booting and snapshotting


Test creates a keypair, checks that an instance can be booted from the default image, then that a snapshot can be created from it and a new instance can be booted from the snapshot. The test also verifies that instances and images reach ACTIVE state upon their creation. Target component: Glance Scenario: 1. Create new keypair to boot an instance. 2. Boot default image. 3. Make snapshot of created server. 4. Boot another instance from created snapshot. If you see that the created instance is in ERROR status, it can mean that you've exceeded a system requirements limit. The test uses a nano-flavor with these parameters: 64 MB RAM, 1 GB disk space, 1 virtual CPU. For more information refer to nova cli reference, image management instructions.

Keypair creation
Target component: Nova
Scenario:
1. Create a new keypair, check if it was created successfully (check name is expected, response status is 200).
For more information refer to nova cli reference.

Security group creation


Target component: Nova
Scenario:
1. Create security group, check if it was created correctly (check name is expected, response status is 200).
For more information refer to nova cli reference.


Network parameters check


Target component: Nova
Scenario:
1. Get list of networks.
2. Check seen network labels equal to expected ones.
3. Check seen network ids equal to expected ones.
For more information refer to nova cli reference.

Instance creation
Target component: Nova
Scenario:
1. Create new keypair (if it does not exist yet).
2. Create new security group (if it does not exist yet).
3. Create instance using the created security group and keypair.
For more information refer to nova cli reference, instance management instructions.

Floating IP assignment
Target component: Nova
Scenario:
1. Create new keypair (if it does not exist yet).
2. Create new security group (if it does not exist yet).
3. Create instance using the created security group and keypair.
4. Create new floating IP.
5. Assign floating IP to created instance.
For more information refer to nova cli reference, floating ips management instructions.
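Manually, the floating IP steps look roughly like this with the legacy nova CLI; the instance name is a placeholder and the allocated address must be substituted.

# Allocate a floating IP from the default pool and attach it to an instance.
nova floating-ip-create
nova add-floating-ip test-vm <allocated-floating-ip>
nova list    # the floating IP should now be listed next to the instance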

Network connectivity check through floating IP


Target component: Nova
Scenario:
1. Create new keypair (if it does not exist yet).
2. Create new security group (if it does not exist yet).
3. Create instance using the created security group and keypair.
4. Check connectivity for all floating IPs using the ping command.


If this test fails, it is better to run a network check and verify that all connections are correct. For more information refer to the Nova CLI reference's floating IPs management instructions.

User creation and authentication in Horizon


Test creates a new user, tenant and user role with admin privileges, and logs in to the dashboard.
Target components: Nova, Keystone
Scenario:
1. Create a new tenant.
2. Check tenant was created successfully.
3. Create a new user.
4. Check user was created successfully.
5. Create a new user role.
6. Check user role was created successfully.
7. Perform token authentication.
8. Check authentication was successful.
9. Send authentication request to Horizon.
10. Verify response status is 200.
If this test fails on the authentication step, you should first try opening the dashboard - it may be unreachable for some reason - and then check your network configuration.
For more information refer to nova cli reference.
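The Keystone part of this flow can be reproduced with the legacy keystone CLI roughly as follows; all names, the password and the IDs are placeholders.

# Create a tenant, a user in it, and grant that user the admin role.
keystone tenant-create --name test-tenant
keystone user-create --name test-user --pass secret --tenant-id <tenant-id>
keystone user-role-add --user-id <user-id> --role-id <admin-role-id> --tenant-id <tenant-id>
keystone token-get    # run with OS_USERNAME/OS_PASSWORD set to the new user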


Deploy an OpenStack cluster using Fuel CLI


Understanding the CLI Deployment Workflow
  Discover
  Provision
  Deploy
Deploying OpenStack Cluster Using CLI
  YAML High Level Structure
  Collecting Identities
  Calculating Partitioning of the Nodes
Configuring Nodes for Provisioning
Configuring Nodes for Deployment
  Node Configuration
  General Parameters
Configure Deployment Scenario
  Enabling Quantum
  Enabling Cinder
  Configuring Syslog Parameters
  Setting Verbosity
  Enabling Horizon HTTPS/SSL mode
  Dealing With Multicast Issues
Finally Triggering the Deployment
Testing OpenStack Cluster


Understanding the CLI Deployment Workflow


To deploy OpenStack using the CLI successfully, nodes need to pass through the "Prepare -> Discover -> Provision -> Deploy" workflow. The following sections describe how to do this from the beginning to the end of the deployment. During the Prepare stage, nodes should be connected correctly to the Master node for network booting. Then power the nodes on so they boot over PXE provided by the Fuel Master node.

Discover
Nodes booted into bootstrap mode run all the services required for the node to be managed by the Fuel Master node. When booted into the bootstrap phase, the node contains the SSH authorized keys of the Master node, which allows the Cobbler server installed on the Master node to reboot the node during the provision phase. Also, bootstrap mode configures MCollective on the node and specifies the ID used by the Astute orchestrator to check the status of the node.
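As a quick convenience check (not something the workflow requires), you can verify from the Master node that the booted nodes respond over MCollective:

# List MCollective identities of all nodes currently reachable from the Master node.
mco ping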

Provision
Provisioning is done using Cobbler. The Astute orchestrator parses the nodes section of the YAML configuration file and creates corresponding Cobbler systems using parameters specified in the engine section of the YAML file. After the systems are created, it connects to the Cobbler engine and reboots nodes according to the power management parameters of each node.

Deploy
Deployment is done using the Astute orchestrator, which parses the nodes and attributes sections and recalculates the parameters needed for deployment. Calculated parameters are passed to the nodes being deployed by the nailyfact MCollective agent, which uploads these attributes to the /etc/naily.facts file on the node. Puppet then parses this file using a Facter plugin and uploads these facts into Puppet. These facts are used during the catalog compilation phase by the Puppet master. Finally, the catalog is executed and the Astute orchestrator moves on to the next node in the deployment sequence.


Deploying OpenStack Cluster Using CLI


Now that you understand how the deployment workflow is traversed, you can finally start. Connect the nodes to the Master node and power them on. You should also plan your cluster configuration, meaning that you should know which node should host which role in the cluster. As soon as the nodes boot into bootstrap mode and populate their data to MCollective, you need to fill in the configuration YAML file and then trigger the Provisioning and Deployment phases.

YAML High Level Structure


The high level structure of the deployment configuration file is:

nodes:        # Array of nodes
  - name:     # Definition of node
    role:
    .....
attributes:   # OpenStack cluster attributes used during deployment
engine:       # Cobbler engine parameters

nodes Section
In this section you define nodes, their IP/MAC addresses, disk partitioning, their roles in the cluster and so on.

attributes Section
In this section you define OpenStack cluster attributes such as which networking engine (Quantum or Nova Network) to use, whether to use Cinder block storage, which usernames and passwords to use for internal and public services of OpenStack, and so on.

engine Section
This section specifies parameters used to connect to the Cobbler engine during the provisioning phase.

Collecting Identities
After the nodes boot into bootstrap mode, you need to collect their MCollective identities. You can do this in two ways:
Login to the node, open /etc/mcollective/server.cfg and find the node ID in the identity field:
identity = 7
Get the discovered nodes JSON file by issuing a GET HTTP request to http://<master_ip>:8000/api/nodes/
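For the second option, a simple way to pull the identities from the Master node is sketched below; the Master node IP is a placeholder and the exact JSON layout may vary between Fuel versions.

# Query the nailgun API for discovered nodes and print their IDs and MAC addresses.
curl -s http://10.20.0.2:8000/api/nodes/ | python -mjson.tool | grep -E '"(id|mac)"'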

Calculating Partitioning of the Nodes


In order to provision nodes, you need to calculate the partitioning for each particular node. Currently, the smallest partitioning scheme includes two partitions: root and swap. These reside on the os LVM volume group. If you want to have a separate partition for Glance and Swift (which we strongly suggest you do), then you need to create a partition with mount point /var/lib/glance.


If you want the node to work as a Cinder LVM storage node, you will also need to create a cinder LVM volume group.

Warning
Do not use '_' and '-' symbols in cinder volume names due to an Anaconda limitation.

Partitioning is done by parsing the ks_spaces section of the node's ks_meta hash. An example ks_spaces is pasted below. Also be aware that the sizes are provided in MiB (1 MiB = 1024 KiB = 1048576 bytes) and Anaconda uses 32 MiB physical extents for LVM. Thus your LVM PV sizes MUST be a multiple of 32.

# == ks_spaces
# Kickstart data for disk partitioning
# The simplest way to calculate is to use REST call to nailgun api,
# recalculate disk size into MiB and dump the following config.
# Workflow is as follows:
#   GET request to http://<fuel-master-node>:8000/api/nodes
#   Parse JSON and derive disk data from meta['disks'].
#   Set explicitly which disk is system and which is for cinder.
#   $system_disk_size=floor($system_disk_meta['disks']['size']/1048576)
#   $system_disk_path=$system_disk_meta['disks']['disk']
#   $cinder_disk_size=floor($cinder_disk_meta['disks']['size']/1048576)
#   $cinder_disk_path=$cinder_disk_meta['disks']['disk']
#
#   All further calculations are made in MiB
#
#   Calculation of system partitions
#
#   For each node:
#     calculate size of physical volume for operating system:
#       $pv_size = $system_disk_size - 200 - 1
#     declare $swap_size
#     calculate size of root partition:
#       $free_vg_size = $pv_size - $swap_size
#       $free_extents = floor($free_vg_size/32)
#       $system_disk_size = 32 * $free_extents
#
# ks_spaces: '"[
#   { \"type\": \"disk\", \"id\": \"$system_disk_path\",
#     \"volumes\": [
#       { \"mount\": \"/boot\", \"type\": \"partition\", \"size\": 200 },
#       { \"type\": \"mbr\" },
#       { \"size\": $pv_size, \"type\": \"pv\", \"vg\": \"os\" } ],
#     \"size\": $system_disk_size },
#   { \"type\": \"vg\", \"id\": \"os\",
#     \"volumes\": [
#       { \"mount\": \"/\", \"type\": \"lv\", \"name\": \"root\", \"size\": $system_disk_size },
#       { \"mount\": \"swap\", \"type\": \"lv\", \"name\": \"swap\", \"size\": $swap_size } ] },
#   { \"type\": \"disk\", \"id\": \"$path_to_cinder_disk\",
#     \"volumes\": [
#       { \"type\": \"mbr\" },
#       { \"size\": $cinder_disk_size, \"type\": \"pv\", \"vg\": \"cinder\" } ],
#     \"size\": $cinder_disk_size } ]"'

ks_spaces: '"[
  { \"type\": \"disk\", \"id\": \"disk/by-path/pci-0000:00:06.0-virtio-pci-virtio3\",
    \"volumes\": [
      { \"mount\": \"/boot\", \"type\": \"partition\", \"size\": 200 },
      { \"type\": \"mbr\" },
      { \"size\": 20000, \"type\": \"pv\", \"vg\": \"os\" } ],
    \"size\": 20480 },
  { \"type\": \"vg\", \"id\": \"os\",
    \"volumes\": [
      { \"mount\": \"/\", \"type\": \"lv\", \"name\": \"root\", \"size\": 10240 },
      { \"mount\": \"swap\", \"type\": \"lv\", \"name\": \"swap\", \"size\": 2048 } ] } ]"'


Configuring Nodes for Provisioning


In order to provision nodes, you need to configure the nodes section of the YAML file for each node. A sample YAML configuration for provisioning is listed below:

nodes:
  # == id
  # MCollective node id in mcollective server.cfg.
  - id: 1
    # == uid
    # UID of the node for deployment engine. Should be equal to `id`
    uid: 1
    # == mac
    # MAC address of the interface being used for network boot.
    mac: 64:43:7B:CA:56:DD
    # == name
    # name of the system in cobbler
    name: controller-01
    # == ip
    # IP issued by cobbler DHCP server to this node during network boot.
    ip: 10.20.0.94
    # == profile
    # Cobbler profile for the node.
    # Default: centos-x86_64
    # [centos-x86_64|rhel-x86_64]
    # CAUTION:
    # rhel-x86_64 is created only after rpmcache class is run on master node
    # and currently not supported in CLI mode
    profile: centos-x86_64
    # == fqdn
    # Fully-qualified domain name of the node
    fqdn: controller-01.domain.tld
    # == power_type
    # Cobbler power-type. Consult cobbler documentation for available options.
    # Default: ssh
    power_type: ssh
    # == power_user
    # Username for cobbler to manage power of this machine
    # Default: unset
    power_user: root
    # == power_pass
    # Password/credentials for cobbler to manage power of this machine
    # Default: unset
    power_pass: /root/.ssh/bootstrap.rsa
    # == power_address
    # IP address of the device managing the node power state.
    # Default: unset
    power_address: 10.20.0.94
    # == netboot_enabled
    # Disable/enable netboot for this node.
    netboot_enabled: '1'
    # == name_servers
    # DNS name servers for this node during provisioning phase.
    name_servers: ! '"10.20.0.2"'
    # == puppet_master
    # Hostname or IP address of puppet master node
    puppet_master: fuel.domain.tld
    # == ks_meta
    # Kickstart metadata used during provisioning
    ks_meta:
      # == ks_spaces
      # Kickstart data for disk partitioning
      # The simplest way to calculate is to use REST call to nailgun api,
      # recalculate disk size into MiB and dump the following config.
      # The calculation workflow and the commented ks_spaces template are the
      # same as shown in the previous section.
      ks_spaces: '"[
        { \"type\": \"disk\", \"id\": \"disk/by-path/pci-0000:00:06.0-virtio-pci-virtio3\",
          \"volumes\": [
            { \"mount\": \"/boot\", \"type\": \"partition\", \"size\": 200 },
            { \"type\": \"mbr\" },
            { \"size\": 20000, \"type\": \"pv\", \"vg\": \"os\" } ],
          \"size\": 20480 },
        { \"type\": \"vg\", \"id\": \"os\",
          \"volumes\": [
            { \"mount\": \"/\", \"type\": \"lv\", \"name\": \"root\", \"size\": 10240 },
            { \"mount\": \"swap\", \"type\": \"lv\", \"name\": \"swap\", \"size\": 2048 } ] } ]"'
      # == mco_enable
      # If mcollective should be installed and enabled on the node
      mco_enable: 1
      # == mco_vhost
      # Mcollective AMQP virtual host
      mco_vhost: mcollective
      # == mco_pskey
      # **NOT USED**
      mco_pskey: unset
      # == mco_user
      # Mcollective AMQP user
      mco_user: mcollective
      # == puppet_enable
      # should puppet agent start on boot
      # Default: 0
      puppet_enable: 0
      # == install_log_2_syslog
      # Enable/disable on boot remote logging
      # Default: 1
      install_log_2_syslog: 1
      # == mco_password
      # Mcollective AMQP password
      mco_password: marionette
      # == puppet_auto_setup
      # Whether to install puppet during provisioning
      # Default: 1
      puppet_auto_setup: 1
      # == puppet_master
      # hostname or IP of puppet master server
      puppet_master: fuel.domain.tld
      # == mco_auto_setup
      # Whether to install mcollective during provisioning
      # Default: 1
      mco_auto_setup: 1
      # == auth_key
      # Public RSA key to be added to cobbler authorized keys
      auth_key: ! '""'
      # == puppet_version
      # Which puppet version to install on the node
      puppet_version: 2.7.19
      # == mco_connector
      # Mcollective AMQP driver.
      # Default: rabbitmq
      mco_connector: rabbitmq
      # == mco_host
      # AMQP host to which Mcollective agent should connect
      mco_host: 10.20.0.2
    # == interfaces
    # Hash of interfaces configured during provision state
    interfaces:
      eth0:
        ip_address: 10.20.0.94
        netmask: 255.255.255.0
        dns_name: controller-01.domain.tld
        static: '1'
        mac_address: 64:43:7B:CA:56:DD
    # == interfaces_extra
    # extra interfaces information
    interfaces_extra:
      eth2:
        onboot: 'no'
        peerdns: 'no'
      eth1:
        onboot: 'no'
        peerdns: 'no'
      eth0:
        onboot: 'yes'
        peerdns: 'no'
    # == meta
    # Metadata needed for log parsing during deployment jobs.
    meta:
      # == Array of hashes of interfaces
      interfaces:
        - mac: 64:D8:E1:F6:66:43
          max_speed: 100
          name: <iface name>
          ip: <IP>
          netmask: <Netmask>
          current_speed: <Integer>
        - mac: 64:C8:E2:3B:FD:6E
          max_speed: 100
          name: eth1
          ip: 10.21.0.94
          netmask: 255.255.255.0
          current_speed: 100
      disks:
        - model: VBOX HARDDISK
          disk: disk/by-path/pci-0000:00:0d.0-scsi-2:0:0:0
          name: sdc
          size: 2411724800000
        - model: VBOX HARDDISK
          disk: disk/by-path/pci-0000:00:0d.0-scsi-1:0:0:0
          name: sdb
          size: 536870912000
        - model: VBOX HARDDISK
          disk: disk/by-path/pci-0000:00:0d.0-scsi-0:0:0:0
          name: sda
          size: 17179869184
      system:
        serial: '0'
        version: '1.2'
        fqdn: bootstrap
        family: Virtual Machine
        manufacturer: VirtualBox
    error_type:

After you populate the YAML file with all the required data, run the Astute orchestrator and point it to the corresponding YAML file:

[root@fuel ~]# astute -f simple.yaml -c provision

Wait for the command to finish. Now you can start configuring the OpenStack cluster parameters.


Configuring Nodes for Deployment


Node Configuration
In order to deploy an OpenStack cluster, you need to populate each node's entry in the nodes section of the file with data related to deployment.

nodes:
  .....
  # == role
  # Specifies role of the node
  # [primary-controller|controller|storage|swift-proxy|primary-swift-proxy]
  # Default: unspecified
  role: primary-controller
  # == network_data
  # Array of network interfaces hashes
  # === name: scalar or array of one or more of
  #     [management|fixed|public|storage]
  # ==== 'management' is used for internal communication
  # ==== 'public' is used for public endpoints
  # ==== 'storage' is used for cinder and swift storage networks
  # ==== 'fixed' is used for traffic passing between VMs in Quantum 'vlan'
  #      segmentation mode or with Nova Network enabled
  # === ip: IP address to be configured by puppet on this interface
  # === dev: interface device name
  # === netmask: network mask for the interface
  # === vlan: vlan ID for the interface
  # === gateway: IP address of gateway (**not used**)
  network_data:
    - name: public
      ip: 10.20.0.94
      dev: eth0
      netmask: 255.255.255.0
      gateway: 10.20.0.1
    - name:
        - management
        - storage
      ip: 10.20.1.94
      netmask: 255.255.255.0
      dev: eth1
    - name: fixed
      dev: eth2
  # == public_br
  # Name of the public bridge for Quantum-enabled configuration
  public_br: br-ex
  # == internal_br
  # Name of the internal bridge for Quantum-enabled configuration
  internal_br: br-mgmt


General Parameters
Once nodes are populated with role and networking information, it is time to set some general parameters for deployment.

attributes:
  ....
  # == master_ip
  # IP of puppet master.
  master_ip: 10.20.0.2
  # == deployment_id
  # Id of deployment, used to differentiate environments
  deployment_id: 1
  # == deployment_source
  # [web|cli] - should be set to cli for CLI installation
  deployment_source: cli
  # == management_vip
  # Virtual IP address for internal services
  # (MySQL, AMQP, internal OpenStack endpoints)
  management_vip: 10.20.1.200
  # == public_vip
  # Virtual IP address for public services
  # (Horizon, public OpenStack endpoints)
  public_vip: 10.20.0.200
  # == auto_assign_floating_ip
  # Whether to assign floating IPs automatically
  auto_assign_floating_ip: true
  # == start_guests_on_host_boot
  # Default: true
  start_guests_on_host_boot: true
  # == create_networks
  # whether to create fixed or floating networks
  create_networks: true
  # == compute_scheduler_driver
  # Nova scheduler driver class
  compute_scheduler_driver: nova.scheduler.multi.MultiScheduler
  # == use_cow_images
  # Whether to use cow images
  use_cow_images: true
  # == libvirt_type
  # Nova libvirt hypervisor type
  # Values: qemu|kvm
  # Default: kvm
  libvirt_type: qemu
  # == dns_nameservers
  # array of DNS servers configured during deployment phase.
  dns_nameservers:
    - 10.20.0.1


# Below go credentials and access parameters for main OpenStack components
mysql:
  root_password: root
glance:
  db_password: glance
  user_password: glance
swift:
  user_password: swift_pass
nova:
  db_password: nova
  user_password: nova
access:
  password: admin
  user: admin
  tenant: admin
  email: admin@example.org
keystone:
  db_password: keystone
  admin_token: nova
quantum_access:
  user_password: quantum
  db_password: quantum
rabbit:
  password: nova
  user: nova
cinder:
  password: cinder
  user: cinder
# == floating_network_range
# CIDR (for quantum == true) or array of IPs (for quantum == false)
# Used for creation of floating networks/IPs during deployment
floating_network_range: 10.20.0.150/26
# == fixed_network_range
# CIDR for fixed network created during deployment.
fixed_network_range: 10.20.2.0/24
# == ntp_servers
# List of ntp servers
ntp_servers:
  - pool.ntp.org


Configure Deployment Scenario


Choose the deployment scenario you want to use. Currently supported scenarios are:
HA Compact
HA Full
Non-HA Multinode Simple

attributes:
  ....
  # == deployment_mode
  # [ha|ha_full|multinode]
  deployment_mode: ha

Enabling Quantum
In order to deploy OpenStack with Quantum, you need to enable Quantum in your YAML file:

attributes:
  .....
  quantum: true

You also need to configure some Quantum-related parameters:

attributes:
  .....
  # Quantum part, used only if quantum='true'
  quantum_parameters:
    # == tenant_network_type
    # Which type of network segmentation to use.
    # Values: gre|vlan
    tenant_network_type: gre
    # == segment_range
    # Range of IDs for network segmentation. Consult Quantum documentation.
    # Values: gre|vlan
    segment_range: ! '300:500'
    # == metadata_proxy_shared_secret
    # Shared secret for metadata proxy services
    # Values: String
    metadata_proxy_shared_secret: quantum

Enabling Cinder
Our example uses Cinder, with some very specific variations from the default. Specifically, as mentioned before, while the Cinder scheduler continues to run on the controllers, the actual storage nodes can be specified by setting the cinder_nodes array.


attributes:
  .....
  # == cinder_nodes
  # Which nodes to use as cinder-volume backends
  # Array of values
  # 'all'|<hostname>|<internal IP address of node>|'controller'|<node_role>
  cinder_nodes:
    - controller

Configuring Syslog Parameters


To configure syslog servers to use, specify several parameters:

# == base_syslog
# Main syslog server configuration.
base_syslog:
  syslog_port: '514'
  syslog_server: 10.20.0.2
# == syslog
# Additional syslog servers configuration.
syslog:
  syslog_port: '514'
  syslog_transport: udp
  syslog_server: ''

Setting Verbosity
You also have the option to determine how much information OpenStack provides when performing configuration:

attributes:
  ....
  verbose: true
  debug: false

Enabling Horizon HTTPS/SSL mode


Using the horizon_use_ssl variable, you have the option to decide whether the OpenStack dashboard (Horizon) uses HTTP or HTTPS:

attributes:
  ....
  horizon_use_ssl: false

This variable accepts the following values:

false:
  In this mode, the dashboard uses HTTP with no encryption.
default:
  In this mode, the dashboard uses keys supplied with the standard Apache SSL module package.
exist:
  In this case, the dashboard assumes that the domain name-based certificate, or keys, are provisioned in advance. This can be a certificate signed by any authorized provider, such as Symantec/Verisign, Comodo, GoDaddy, and so on. The system looks for the keys in these locations:
  public: /etc/pki/tls/certs/domain-name.crt
  private: /etc/pki/tls/private/domain-name.key
custom:
  This mode requires a static mount point on the fileserver for [ssl_certs] and certificate pre-existence. To enable this mode, configure the puppet fileserver by editing /etc/puppet/fileserver.conf to add:

  [ssl_certs]
    path /etc/puppet/templates/ssl
    allow *

  From there, create the appropriate directory:

  mkdir -p /etc/puppet/templates/ssl

  Add the certificates to this directory. Then reload the puppetmaster service for these changes to take effect.
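For the custom mode, a self-signed certificate is often sufficient for testing; the sketch below generates one and places it into the fileserver directory. The file names and the CN are placeholders, and your manifests may expect different ones.

# Generate a self-signed certificate and place it where the puppet fileserver expects it.
mkdir -p /etc/puppet/templates/ssl
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/puppet/templates/ssl/domain-name.key \
    -out /etc/puppet/templates/ssl/domain-name.crt \
    -subj "/CN=horizon.domain.tld"
service puppetmaster restart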

Dealing With Multicast Issues


Fuel uses the Corosync and Pacemaker cluster engines for HA scenarios, thus requiring consistent multicast networking. Sometimes it is not possible to configure multicast in your network. In this case, you can tweak Corosync to use unicast addressing by setting the use_unicast_corosync variable to true.

# == use_unicast_corosync
# which communication protocol to use for corosync
use_unicast_corosync: false


Finally Triggering the Deployment


After the YAML file is updated with all the required parameters, you can finally trigger deployment by issuing the deploy command to the Astute orchestrator:

[root@fuel ~]# astute -f simple.yaml -c deploy

Then wait for the command to finish.


Testing OpenStack Cluster


Now that you've installed OpenStack, it's time to take your new OpenStack cloud for a drive around the block. Follow these steps:
1. On the host machine, open your browser to http://192.168.0.10/ (change the IP address value to your own public_virtual_ip) and login as nova/nova (unless you changed these credentials in the YAML file).
2. Click the Project tab in the left-hand column.
3. Under Manage Compute, choose Access & Security to set security settings:
   Click Create Keypair and enter a name for the new keypair. The private key should download automatically; make sure to keep it safe.
   Click Access & Security again and click Edit Rules for the default Security Group. Add a new rule allowing TCP connections from port 22 to port 22 for all IP addresses using a CIDR of 0.0.0.0/0. (You can also customize this setting as necessary.) Click Add Rule to save the new rule.
   Add a second new rule allowing ICMP connections with a type and code of -1 to the default Security Group and click Add Rule to save.
4. Click Allocate IP To Project and add two new floating IPs. Notice that they come from the pool specified in config.yaml and site.pp.
5. Click Images & Snapshots, then Create Image. Enter a name and specify the Image Location as https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img with a Format of QCOW2. Check the Public checkbox.
6. The next step is to upload an image to use for creating VMs, but an OpenStack bug prevents you from doing this in the browser. Instead, log in to any of the controllers as root and execute the following commands:
cd ~ source openrc glance image-create --name cirros --container-format bare --disk-format qcow2 --is-public yes \ --location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img

7. Go back to the browser and refresh the page. Launch a new instance of this image using the tiny flavor. Click the Networking tab and choose the default net04_ext network, then click the Launch button.
8. On the Instances page:
   Click the new instance and look at the settings.
   Click the Logs tab to look at the logs.
   Click the VNC tab to log in. If you see just a big black rectangle, the machine is in screensaver mode; click the grey area and press the space bar to wake it up, then login as cirros/cubswin:).


   At the command line, enter ifconfig -a | more and see the assigned IP address.
   Enter sudo fdisk -l to see that no volume has yet been assigned to this VM.
9. On the Instances page, click Assign Floating IP and assign an IP address to your instance. You can either choose from one of the existing created IPs by using the pulldown menu or click the plus sign (+) to choose a network and allocate a new IP address.
   From your host machine, ping the floating IP assigned to this VM.
   If that works, try to ssh cirros@floating-ip from the host machine.
10. Back in the browser, click Volumes and Create Volume. Create the new volume, and attach it to the instance.
11. Go back to the VNC tab and repeat fdisk -l to see the new unpartitioned disk attached.
Now your new VM is ready to be used.
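The connectivity part of step 9 can also be verified from a shell on the host machine; the floating IP and key file name below are placeholders.

# Check that the instance is reachable over its floating IP and log in as the cirros user.
FLOATING_IP=172.16.0.131        # replace with the floating IP you assigned
ping -c 3 ${FLOATING_IP}
ssh -i my-keypair.pem cirros@${FLOATING_IP}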


FAQ (Frequently Asked Questions) and HowTos


Common Technical Issues
  Corosync crashes without network connectivity
  RabbitMQ Cluster Restart Issues Following A Systemwide Power Failure
How HA with Pacemaker and Corosync Works
  Corosync Settings
  Pacemaker Settings
  How Fuel Deploys HA
HowTo Notes
  HowTo: Create the XFS partition
  HowTo: Redeploy a node from scratch
  HowTo: Enable/Disable Galera Cluster Autorebuild Mechanism
  How To Troubleshoot Corosync/Pacemaker
  How To Smoke Test HA
Other Questions


Common Technical Issues


Issue:
Puppet fails with:
err: Could not retrieve catalog from remote server: Error 400 on SERVER: undefined method 'fact_merge' for nil:NilClass
This is a Puppet bug. See: http://projects.puppetlabs.com/issues/3234
Workaround:
service puppetmaster restart

Issue:
Puppet client will never resend the certificate to Puppet Master. The certificate cannot be signed and verified.
This is a Puppet bug. See: http://projects.puppetlabs.com/issues/4680
Workaround:
On Puppet client:
rm -f /etc/puppet/ssl/certificate_requests/*.pem
rm -f /etc/puppet/ssl/certs/*.pem
On Puppet master:
rm -f /var/lib/puppet/ssl/ca/requests/*.pem

Issue:
The manifests are up-to-date under /etc/puppet/manifests, but Puppet Master keeps serving the previous version of manifests to the clients. Manifests seem to be cached by the Puppet Master.
More information: https://groups.google.com/forum/?fromgroups=#!topic/puppet-users/OpCBjV1nR2M
Workaround:
service puppetmaster restart

Issue:
Timeout error for fuel-controller-XX when running puppet agent --test to install OpenStack when using HDD instead of SSD:
Sep 26 17:56:15 fuel-controller-02 puppet-agent[1493]: Could not retrieve catalog from remote server: execution expired
Sep 26 17:56:15 fuel-controller-02 puppet-agent[1493]: Not using cache on failed catalog
Sep 26 17:56:15 fuel-controller-02 puppet-agent[1493]: Could not retrieve catalog; skipping run

Workaround:
vi /etc/puppet/puppet.conf
add:
configtimeout = 1200

Issue:
On running puppet agent --test, the error messages below occur:

err: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve
information from environment production source(s) puppet://fuel-pm.localdomain/plugins

Workaround: http://projects.reductivelabs.com/issues/2244

Corosync crashes without network connectivity


Depending on a wide range of systems and configurations in the network, it is possible for Corosync's networking protocol, TOTEM, to time out. If this happens for an extended period of time, Corosync may crash. In addition, MySQL may have stopped. This guide illustrates the process of working through Corosync and MySQL issues.
Workaround:
1. Verify that corosync is really broken: service corosync status. You should see the following error: corosync dead but pid file exists.
2. Start corosync manually: service corosync start.
3. Run ps -ef | grep mysql and kill ALL(!) mysqld and mysqld_safe processes.
4. Wait while pacemaker starts the mysql processes again. You can check it with the ps -ef | grep mysql command. If it doesn't start, run crm resource p_mysql start.
5. Check with the crm status command that this host is part of the cluster and p_mysql is not within "Failed actions".

RabbitMQ Cluster Restart Issues Following A Systemwide Power Failure


Issue:
As a rule of thumb, all RabbitMQ nodes must not be shut down simultaneously. RabbitMQ requires that after a full shutdown of the cluster, the first node brought up should be the last one to shut down, but it's not always possible to know which node that is in the event of a power outage or similar event. Fuel solves this problem by managing the restart of available nodes, so you should not experience difficulty with this issue. If you are still using previous versions of Fuel, the following describes how Fuel works around this problem, which, in turn, you can use to perform the steps manually.
Workaround:
There are 2 possible scenarios, depending on the results of the shutdown:
1. The RabbitMQ master node is alive and can be started.
2. It is impossible to start the RabbitMQ master node due to a hardware or system failure.
Fuel updates the /etc/init.d/rabbitmq-server init scripts for RHEL/CentOS to customized versions. These scripts attempt to start RabbitMQ twice, giving the RabbitMQ master node the necessary time to start after complete power loss. With the scripts in place, power up all nodes, then check to see whether the RabbitMQ server started on all nodes. All nodes should start automatically. On the other hand, if the RabbitMQ master node has failed, the init script performs the following actions during the rabbitmq-server start. It moves the existing Mnesia database to a backup directory, and then makes the third and final attempt to start the RabbitMQ server. In this case, RabbitMQ starts with a clean database, and the live rabbit nodes assemble a new cluster. The script uses the current RabbitMQ settings to find the current Mnesia location and creates a backup directory in the same path as Mnesia, tagged with the current date. So with the customized init scripts included in Fuel, in most cases RabbitMQ simply starts after complete power loss and automatically assembles the cluster, but you can manage the process yourself.

See also
http://comments.gmane.org/gmane.comp.networking.rabbitmq.general/19792


How HA with Pacemaker and Corosync Works


Corosync Settings
Corosync uses the Totem protocol, which is an implementation of the Virtual Synchrony protocol. It uses it in order to provide connectivity between cluster nodes, decide if the cluster is quorate to provide services, and to provide a data layer for services that want to use the features of Virtual Synchrony. Corosync is used in Fuel as the communication and quorum service for the Pacemaker cluster resource manager (crm). Its main configuration file is located in /etc/corosync/corosync.conf. The main Corosync section is the totem section, which describes how cluster nodes should communicate:

totem {
  version:                              2
  token:                                3000
  token_retransmits_before_loss_const:  10
  join:                                 60
  consensus:                            3600
  vsftype:                              none
  max_messages:                         20
  clear_node_high_bit:                  yes
  rrp_mode:                             none
  secauth:                              off
  threads:                              0
  interface {
    ringnumber:  0
    bindnetaddr: 10.107.0.8
    mcastaddr:   239.1.1.2
    mcastport:   5405
  }
}

Corosync usually uses multicast UDP transport and sets up a "redundant ring" for communication. Currently Fuel deploys controllers with one redundant ring. Each ring has its own multicast address and a bind net address that specifies on which interface Corosync should join the corresponding multicast group. Fuel uses the default Corosync configuration, which can also be altered in the Fuel manifests.

See also
man corosync.conf or the Corosync documentation at http://clusterlabs.org/doc/ if you want to know how to tune the installation completely

Pacemaker Settings


Pacemaker is the cluster resource manager used by Fuel to manage Quantum resources, HAProxy, virtual IP addresses and the MySQL Galera cluster (or simple MySQL Master/Slave replication in the case of a RHOS installation). This is done by means of Open Cluster Framework (see http://linux-ha.org/wiki/OCF_Resource_Agents) agent scripts which are deployed in order to start/stop/monitor Quantum services, and to manage HAProxy, virtual IP addresses and MySQL replication. These are located at /usr/lib/ocf/resource.d/mirantis/quantum-agent-[ovs|dhcp|l3], /usr/lib/ocf/resource.d/mirantis/mysql, and /usr/lib/ocf/resource.d/ocf/haproxy. First, the MySQL agent is started and HAProxy and the virtual IP addresses are set up. Then the Open vSwitch and metadata agents are cloned on all the nodes. Then the dhcp and L3 agents are started and tied together by use of Pacemaker constraints called "colocation".

See also
Using Rules to Determine Resource Location

The MySQL HA script primarily targets cluster rebuild after a power failure or a similar type of disaster - it needs working Corosync, in which it forms a quorum of replication epochs and then elects a master from the node with the newest epoch. Be aware of the default five-minute interval in which every cluster member should be booted to participate in such an election. Every node is self-aware, which means that if nobody pushes a higher epoch than the one it retrieved from Corosync (or no one pushed any at all), it will just elect itself as the master.

How Fuel Deploys HA


Fuel installs the Corosync service, configures corosync.conf and includes the Pacemaker service plugin into /etc/corosync/service.d. Then the Corosync service starts and spawns the corresponding Pacemaker processes. Fuel configures cluster properties of Pacemaker and then injects resources configuration for virtual IPs, HAProxy, MySQL and Quantum agent resources:

primitive p_haproxy ocf:pacemaker:haproxy \
  op monitor interval="20" timeout="30" \
  op start interval="0" timeout="30" \
  op stop interval="0" timeout="30"
primitive p_mysql ocf:mirantis:mysql \
  op monitor interval="60" timeout="30" \
  op start interval="0" timeout="450" \
  op stop interval="0" timeout="150"
primitive p_quantum-dhcp-agent ocf:mirantis:quantum-agent-dhcp \
  op monitor interval="20" timeout="30" \
  op start interval="0" timeout="360" \
  op stop interval="0" timeout="360" \
  params tenant="services" password="quantum" username="quantum" \
    os_auth_url="http://10.107.2.254:35357/v2.0" \
  meta is-managed="true"
primitive p_quantum-l3-agent ocf:mirantis:quantum-agent-l3 \
  op monitor interval="20" timeout="30" \
  op start interval="0" timeout="360" \
  op stop interval="0" timeout="360" \
  params tenant="services" password="quantum" syslog="true" username="quantum" \
    debug="true" os_auth_url="http://10.107.2.254:35357/v2.0" \
  meta is-managed="true" target-role="Started"
primitive p_quantum-metadata-agent ocf:mirantis:quantum-agent-metadata \
  op monitor interval="60" timeout="30" \
  op start interval="0" timeout="30" \
  op stop interval="0" timeout="30"
primitive p_quantum-openvswitch-agent ocf:pacemaker:quantum-agent-ovs \
  op monitor interval="20" timeout="30" \
  op start interval="0" timeout="480" \
  op stop interval="0" timeout="480"
primitive vip__management_old ocf:heartbeat:IPaddr2 \
  op monitor interval="2" timeout="30" \
  op start interval="0" timeout="30" \
  op stop interval="0" timeout="30" \
  params nic="br-mgmt" iflabel="ka" ip="10.107.2.254"
primitive vip__public_old ocf:heartbeat:IPaddr2 \
  op monitor interval="2" timeout="30" \
  op start interval="0" timeout="30" \
  op stop interval="0" timeout="30" \
  params nic="br-ex" iflabel="ka" ip="172.18.94.46"
clone clone_p_haproxy p_haproxy \
  meta interleave="true"
clone clone_p_mysql p_mysql \
  meta interleave="true" is-managed="true"
clone clone_p_quantum-metadata-agent p_quantum-metadata-agent \
  meta interleave="true" is-managed="true"
clone clone_p_quantum-openvswitch-agent p_quantum-openvswitch-agent \
  meta interleave="true"

And ties them with Pacemaker colocation resource:


colocation dhcp-with-metadata inf: p_quantum-dhcp-agent \
  clone_p_quantum-metadata-agent
colocation dhcp-with-ovs inf: p_quantum-dhcp-agent \
  clone_p_quantum-openvswitch-agent
colocation dhcp-without-l3 -100: p_quantum-dhcp-agent p_quantum-l3-agent
colocation l3-with-metadata inf: p_quantum-l3-agent clone_p_quantum-metadata-agent
colocation l3-with-ovs inf: p_quantum-l3-agent clone_p_quantum-openvswitch-agent
order dhcp-after-metadata inf: clone_p_quantum-metadata-agent p_quantum-dhcp-agent
order dhcp-after-ovs inf: clone_p_quantum-openvswitch-agent p_quantum-dhcp-agent
order l3-after-metadata inf: clone_p_quantum-metadata-agent p_quantum-l3-agent
order l3-after-ovs inf: clone_p_quantum-openvswitch-agent p_quantum-l3-agent


HowTo Notes
HowTo: Create the XFS partition
In most cases, Fuel creates the XFS partition for you. If for some reason you need to create it yourself, use this procedure:
1. Create the partition itself:
   fdisk /dev/sdb
   n (for new)
   p (for partition)
   <enter> (to accept the defaults)
   <enter> (to accept the defaults)
   w (to save changes)
2. Initialize the XFS partition:
   mkfs.xfs -i size=1024 -f /dev/sdb1
3. For a standard swift install, all data drives are mounted directly under /srv/node, so first create the mount point:
   mkdir -p /srv/node/sdb1
4. Finally, add the new partition to fstab so it mounts automatically, then mount all current partitions:
   echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
   mount -a

HowTo: Redeploy a node from scratch


Compute and Cinder nodes in an HA configuration, and controllers in any configuration, cannot be redeployed without completely redeploying the cluster. However, in a non-HA situation you can redeploy a Compute or Cinder node. To do so, follow these steps:
1. Remove the certificate for the node by executing the command puppet cert clean <hostname> on the Fuel Master node.
2. Reboot the node over the network so it can be picked up by cobbler.
3. Run the puppet agent on the target node using puppet agent --test.

HowTo: Enable/Disable Galera Cluster Autorebuild Mechanism


By default, Fuel reassembles the Galera cluster automatically without the need for any user interaction. To disable the autorebuild feature you should run:
crm_attribute -t crm_config --name mysqlprimaryinit --delete
To re-enable the autorebuild feature you should run:
crm_attribute -t crm_config --name mysqlprimaryinit --update done

How To Troubleshoot Corosync/Pacemaker


Pacemaker and Corosync come with several CLI utilities that can help you troubleshoot and understand what is going on.

crm - Cluster Resource Manager
This is the main pacemaker utility; it shows you the state of the pacemaker cluster. The most popular commands that you can use to understand whether your cluster is consistent:

crm status
This command shows you the main information about the pacemaker cluster and the state of the resources being managed:
crm(live)# status
============
Last updated: Tue May 14 15:13:47 2013
Last change: Mon May 13 18:36:56 2013 via cibadmin on fuel-controller-01
Stack: openais
Current DC: fuel-controller-01 - partition with quorum
Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
5 Nodes configured, 5 expected votes
3 Resources configured.
============
Online: [ fuel-controller-01 fuel-controller-02 fuel-controller-03 fuel-controller-04 fuel-controller-05 ]
p_quantum-plugin-openvswitch-agent (ocf::pacemaker:quantum-agent-ovs): Started fuel-controller-01
p_quantum-dhcp-agent (ocf::pacemaker:quantum-agent-dhcp): Started fuel-controller-01
p_quantum-l3-agent (ocf::pacemaker:quantum-agent-l3): Started fuel-controller-01

crm(live)# resource
Here you can enter resource-specific commands:
crm(live)resource# status

p_quantum-plugin-openvswitch-agent (ocf::pacemaker:quantum-agent-ovs) Started
p_quantum-dhcp-agent (ocf::pacemaker:quantum-agent-dhcp) Started
p_quantum-l3-agent (ocf::pacemaker:quantum-agent-l3) Started

crm(live)resource# start|restart|stop|cleanup <resource_name>
These commands let you correspondingly start, stop, and restart resources.

cleanup
The cleanup command cleans up the resource state on the nodes in case of failure or unexpected operation, e.g. some residuals of a SysVInit operation on a resource, in which case pacemaker will manage it by itself, thus deciding on which node to run the resource. E.g.:

============
3 Nodes configured, 3 expected votes
16 Resources configured.
============
Online: [ controller-01 controller-02 controller-03 ]
vip__management_old (ocf::heartbeat:IPaddr2): Started controller-01
vip__public_old (ocf::heartbeat:IPaddr2): Started controller-02
Clone Set: clone_p_haproxy [p_haproxy]
    Started: [ controller-01 controller-02 controller-03 ]
Clone Set: clone_p_mysql [p_mysql]
    Started: [ controller-01 controller-02 controller-03 ]
Clone Set: clone_p_quantum-openvswitch-agent [p_quantum-openvswitch-agent]
    Started: [ controller-01 controller-02 controller-03 ]
Clone Set: clone_p_quantum-metadata-agent [p_quantum-metadata-agent]
    Started: [ controller-01 controller-02 controller-03 ]
p_quantum-dhcp-agent (ocf::mirantis:quantum-agent-dhcp): Started controller-01
p_quantum-l3-agent (ocf::mirantis:quantum-agent-l3): Started controller-03

In this case there were residual OpenStack agent processes that were started by pacemaker in the event of a network failure and cluster partitioning. After the restoration of connectivity, pacemaker saw these duplicate resources running on different nodes. You can let it clean up this situation automatically or, if you do not want to wait, clean them up manually.

See also
crm interactive help and documentation resources for Pacemaker (e.g. http://doc.opensuse.org/products/draft/SLE-HA/SLE-ha-guide_sd_draft/cha.ha.manual_config.html).

In some network scenarios one can get the cluster split into several parts and crm status showing something like this:

On ctrl1:
============
.
Online: [ ctrl1 ]

On ctrl2:
============
.
Online: [ ctrl2 ]

On ctrl3:
============
.
Online: [ ctrl3 ]

You can troubleshoot this by checking corosync connectivity between nodes. There are several points:
1. Multicast should be enabled in the network, the IP address configured as multicast should not be filtered, and the mcastport and mcastport - 1 UDP ports should be accepted on the management network between controllers.
2. corosync should start after network interfaces are configured.
3. bindnetaddr should be in the management network or at least in the same multicast reachable segment.

You can check this in the output of ip maddr show:

5: br-mgmt
        link  33:33:00:00:00:01
        link  01:00:5e:00:00:01
        link  33:33:ff:a3:e2:57
        link  01:00:5e:01:01:02
        link  01:00:5e:00:00:12
        inet  224.0.0.18
        inet  239.1.1.2
        inet  224.0.0.1
        inet6 ff02::1:ffa3:e257
        inet6 ff02::1

corosync-objctl
This command is used to get/set runtime corosync configuration values, including the status of the corosync redundant ring members:

runtime.totem.pg.mrp.srp.members.134245130.ip=r(0) ip(10.107.0.8)
runtime.totem.pg.mrp.srp.members.134245130.join_count=1
...
runtime.totem.pg.mrp.srp.members.201353994.ip=r(0) ip(10.107.0.12)
runtime.totem.pg.mrp.srp.members.201353994.join_count=1
runtime.totem.pg.mrp.srp.members.201353994.status=joined

If the IP of the node is 127.0.0.1, it means that corosync started when only the loopback interface was available and bound to it. If there is only one IP in the members list, that means there is a corosync connectivity issue, because the node does not see the other ones. The same holds for the case when the members list is incomplete.

How To Smoke Test HA


To test if Quantum HA is working, simply shut down the node hosting the Quantum agents (either gracefully or forcibly). You should see the agents start on another node:


# crm status
Online: [ fuel-controller-02 fuel-controller-03 fuel-controller-04 fuel-controller-05 ]
OFFLINE: [ fuel-controller-01 ]
p_quantum-plugin-openvswitch-agent (ocf::pacemaker:quantum-agent-ovs): Started fuel-controller-02
p_quantum-dhcp-agent (ocf::pacemaker:quantum-agent-dhcp): Started fuel-controller-02
p_quantum-l3-agent (ocf::pacemaker:quantum-agent-l3): Started fuel-controller-02

and see the corresponding Quantum interfaces on the new Quantum node:

# ip link show
11: tap7b4ded0e-cb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
12: qr-829736b7-34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
13: qg-814b8c84-8f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc

You can also check the ovs-vsctl show output to see that all corresponding tunnels/bridges/interfaces are created and connected properly:

ce754a73-a1c4-4099-b51b-8b839f10291c
    Bridge br-mgmt
        Port br-mgmt
            Interface br-mgmt
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
        Port "qg-814b8c84-8f"
            Interface "qg-814b8c84-8f"
                type: internal
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "tap7b4ded0e-cb"
            tag: 1
            Interface "tap7b4ded0e-cb"
                type: internal
        Port "qr-829736b7-34"
            tag: 1
            Interface "qr-829736b7-34"
                type: internal
    Bridge br-tun
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="10.107.0.8"}
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="10.107.0.5"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-3"
            Interface "gre-3"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="10.107.0.6"}
        Port "gre-4"
            Interface "gre-4"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="10.107.0.7"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "1.4.0+build0"


Other Questions
1. [Q] Why did you decide to provide OpenStack packages through your own repository?
   [A] We are fully committed to providing our customers with working and stable bits and pieces in order to make successful OpenStack deployments. Please note that we do not distribute our own version of OpenStack; we rather provide a plain vanilla distribution. As such, there is no vendor lock-in. For convenience, our repository maintains the history of OpenStack packages certified to work with our Puppet manifests. The advantage of this approach is that you can install any OpenStack version you want. If you are running Essex, just use the Puppet manifests which reference OpenStack packages for Essex from our repository. With each new release we add new OpenStack packages to our repository and create a separate branch with the Puppet manifests (which, in turn, reference these packages) corresponding to each release. With EPEL this would not be possible, as that repository only keeps the latest version of OpenStack packages.


Fuel License
Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of,


the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

2013, Mirantis Inc.

Page 116

Fuel for Openstack v3.1 User Guide

Fuel License

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
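As an illustration only (this sketch is not part of the license text): the boilerplate notice above, enclosed in the comment syntax of a Python source file, with the bracketed placeholders left in place for you to replace with your own year and copyright holder:

    # Copyright [yyyy] [name of copyright owner]
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.

For other file formats, use that format's comment syntax (for example, "//" in C or Java sources) and keep the notice text itself unchanged.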

Index

A
About Fuel

C
Cinder vs. nova-volume
CLI Deployment Workflow
Cluster Sizing
Common Technical Issues
Configure Deployment Scenario
Configuring Nodes for Deployment
Configuring Nodes for Provisioning
Corosync Settings

D
Deploy using CLI
Deploy using UI
Deploying Using CLI
Deployment Configurations
Download Fuel

F
FAQ (Frequently Asked Questions)
Fuel License
Fuel UI: Deployment Schema
Fuel UI: Network Configuration
Fuel UI: Network Issues
Fuel UI: Post-Deployment Check

G
Glance

H
HA Compact
HA Compact Details
HA Full
HA Logical Setup
HA with Pacemaker and Corosync
Hardware Sizing
How Fuel Deploys HA
HowTo: Create the XFS partition
HowTo: Galera Cluster Autorebuild
HowTo: Redeploy a node from scratch
HowTo: Smoke Test HA
HowTo: Troubleshoot Corosync/Pacemaker

I
Installing Fuel Master Node
Internal Network
Introduction

L
Large Scale Deployments

M
Management Network

N
Network Architecture
Neutron vs. nova-network
Non-HA Simple

O
Object storage

P
Pacemaker Settings
Private Network
Production Considerations
Public Network

Q
Quantum vs. nova-network

R
Red Hat OpenStack
Red Hat OpenStack Architecture
Red Hat OpenStack: Deployment Requirements
Red Hat OpenStack: Troubleshooting
Redeploying An Environment
Reference Architectures
Reference Architectures: HA Compact
Reference Architectures: HA Compact Details
Reference Architectures: HA Full
Reference Architectures: HA Logical Setup
Reference Architectures: Non-HA Simple
Reference Architectures: RHOS
Reference Architectures: RHOS HA Compact
Reference Architectures: RHOS Non-HA Simple
Release Notes
Release Notes: Fuel 3.1
RHOS HA Compact
RHOS Non-HA Simple

S
Sizing Hardware
Supported Software Components
Swift

T
Testing OpenStack Cluster Manually
Triggering the Deployment