
Extreme Networks OpenStack

Plugin 2.0 Installation Guide


Copyright 2014 Extreme Networks
AccessAdapt, Alpine, Altitude, BlackDiamond, Direct Attach, EPICenter, ExtremeWorks Essentials,
Ethernet Everywhere, Extreme Enabled, Extreme Ethernet Everywhere, Extreme Networks,
Extreme Standby Router Protocol, Extreme Turbodrive, Extreme Velocity, ExtremeWare,
ExtremeWorks, ExtremeXOS, Go Purple Extreme Solution, ExtremeXOS ScreenPlay, ReachNXT,
Ridgeline, Sentriant, ServiceWatch, Summit, SummitStack, Triumph, Unified Access Architecture,
Unified Access RF Manager, UniStack, XNV, the Extreme Networks logo, the Alpine logo, the
BlackDiamond logo, the Extreme Turbodrive logo, the Summit logos, and the Powered by
ExtremeXOS logo are trademarks or registered trademarks of Extreme Networks, Inc. or its
subsidiaries in the United States and/or other countries.
sFlow is the property of InMon Corporation.
iBooks is property of Apple, Inc.
Specifications are subject to change without notice.
All other registered trademarks, trademarks, and service marks are property of their respective
owners.
For additional information on Extreme Networks trademarks, please see: http://www.extremenetworks.com/company/legal/trademarks/.
120873-00 Rev 2
Table of Contents
Chapter 1: Prerequisites 4
Software Requirements 4
Hardware Requirements 4
OpenStack Requirements 5
Chapter 2: Reference Topology 6
Reference Topology Setup 6
Chapter 3: Installing Ubuntu 12.04 LTS on the Hosts 10
Installing Ubuntu 12.04 LTS on the Hosts 10
Chapter 4: Preparing the Servers and Switches 12
Updating the Servers 12
Setting up the Ethernet Port for the Management Interface 12
Chapter 5: Installing OpenStack Plugin 2.0 on the Servers 13
Downloading and Preparing OpenStack 13
Installing OpenStack on the OSController 13
Installing OpenStack on the OSHosts 16
Chapter 6: Configuring Extreme Networks Switches 19
Configuring Control and TOR Switches 19
Configuring the TOR1 Switch 19
Configuring the TOR2 Switch 20
Configuring the Control 1 Switch 20
Configuring the Control 2 Switch 20
Chapter 7: Starting and Stopping Extreme Networks OpenStack 22
Starting and Stopping Extreme Networks OpenStack 22
Modifying the Devstack localrc Parameters 23
Starting OpenStack on OSController 24
Starting OpenStack on OSHost1 and OSHost2 25
Verifying the OSController and OSHosts 25
Populating the Topology Database 26
Configuring the Network Fabric (LAG/MLAG) 31
Shutting Down OpenStack on All Servers 33
Logs 33
Chapter 8: Managing Tenants and Virtual Machines 35
Creating Tenants 35
Creating Tenants Using Python Script and Configuration File (L3 Agent) 35
Creating Tenants Using Python Script and Configuration File (Virtual Routers) 37
Verifying TOR Switch Configuration after Tenant Creation (L3 Agent) 40
Verifying TOR Switch Configuration after Tenant Creation (Virtual Routers) 42
Creating Tenant Virtual Machine Instances 44
Migrating Tenant Virtual Machine Instances (Live Migration) 50
Deleting Tenant Virtual Machine Instances 51
Chapter 9: Resetting the Testbed 53
Appendix A: Glossary 54

Extreme Networks OpenStack Plugin 2.0 Installation Guide 3
1 Prerequisites
Software Requirements
Hardware Requirements
OpenStack Requirements
This chapter explains the prerequisites for installing the Extreme Networks OpenStack Plugin 2.0.
Software Requirements
You need the following software installed:
Extreme Networks' ExtremeXOS operating system release 15.3.2 or 15.3.3
(www.extremenetworks.com/products/extreme-xos.aspx).
Extreme Networks OpenStack Plugin 2.0 software package (request download package from
Extreme Networks).
Ubuntu 12.04 LTS (Precise) image with KVM (www.ubuntu.com/download).
Hardware Requirements
You need the following hardware if you want to install the reference topology setup (see Reference
Topology Setup on page 6):
OSController (OpenStack Cloud Controller with Quantum Server and Network Host; main server)
64-bit x86 processor, 8GB RAM (minimum), 7 NICs
OSHost1, OSHost2 (OpenStack compute hosts; host VMs only): 64-bit x86 processor, 8GB RAM
(minimum), 7 NICs
TOR1, TOR2: Extreme Networks switch (recommended: Summit X460, X480, or X670)
We support any switch running ExtremeXOS release 15.3.2 or 15.3.3:
Stackable Switches
Summit X670 (www.extremenetworks.com/product/summit-x670-series)
Summit X480 (http://www.extremenetworks.com/product/summit-x480-series)
Summit X460 (http://www.extremenetworks.com/product/summit-x460-series)
Chassis-Based Switches
BlackDiamond X8 (http://www.extremenetworks.com/product/blackdiamond-x-series)
BlackDiamond 8800 (http://www.extremenetworks.com/product/blackdiamond-8800-series)
OpenStack Requirements
OpenStack requirements are available at:
http://docs.openstack.org/grizzly/openstack-ops/content/index.html.
2 Reference Topology
Reference Topology Setup
Reference Topology Setup
The reference topology setup consists of three servers: one controller (OSController) and two compute
nodes (OSHost1 and OSHost2); two control switches (CTRL1 and CTRL2); and two "top of rack"
switches (TOR1 and TOR2).
This setup uses redundancy where possible; for the servers this means that bonding is used to connect
to the TOR switches, as well as the control switches. There are distinct networks for data, storage,
control, and management, each using its own set of NICs (or, in the case of the management port, a
single NIC). The control network is not the management network. In the following setup, OpenStack
services are behind the control network IP.
Figure 1: Complete Reference Topology Setup
Control network (in red) is used for exchanging control messages between the OpenStack components
(servers) and ExtremeXOS switches.
Management network (in blue) is used for out-of-band access. It requires Internet access during
installation, but can run through a SNAT layer. It can use any routable subnet.
Data network (in purple) is used for data traffic from the tenant VMs on the servers. It is also used for
DHCP request/reply between tenant VMs and per-tenant DHCP/NAT server on the controller. It is also
used for tenant data traffic to/from public/external network through per-tenant gateway on the
controller.
Storage network (in green) is used for storage traffic from the tenant VMs on the servers.
Figure 2: Complete Reference Topology Setup with Border Gateways
The following four figures show in isolation each of the logical networks (data, storage, control, and
management) within the reference topology.
Figure 3: Control Network
Figure 4: Data Network
Figure 5: Storage Network
3 Installing Ubuntu 12.04 LTS on the
Hosts
Installing Ubuntu 12.04 LTS on the Hosts
Installing Ubuntu 12.04 LTS on the Hosts
The Extreme Networks OpenStack software package is based on DevStack (http://devstack.org),
which assumes a specific version of Ubuntu: 12.04 LTS (Precise) for this release.
To download and install the Ubuntu Server on the hosts (controller and compute nodes):
1 Go to www.ubuntu.com/download/server and download Ubuntu Server 12.04.3 LTS 64-bit
(ubuntu-12.04.3-server-amd64.iso). For specific instructions, see the Ubuntu Installation Guide.
2 Load the image on a CD-ROM or USB memory stick.
3 Boot the system from the CD-ROM or USB memory stick.
4 On the Language screen, select the language for the install.
5 On the Ubuntu screen, select Install Ubuntu Server.
6 On the Select a language screen, select the language for the installation text and for the installed
program.
7 On the Select your location screen, select your country.
8 On the Configure the keyboard screen, select:
Yes: The installation program attempts to detect your keyboard layout. Go to Step 9 on page 10.
No: Choose your keyboard layout from a list:
a Select the country of origin for the keyboard from the list.
b Select the layout for the keyboard from the list.
9 On the Configure the network screen, enter a hostname (for example, OSController, OSHost1, or
OSHost2).
Important
It is recommended that you use the naming convention "OSController", "OSHost1", and
"OSHost2" to make it easier to follow the rest of the procedures in this Installation Guide.
However, you may use whatever naming convention you prefer, but you must be
consistent. OpenStack uses hostnames for communication, and several files (hosts files,
configuration files, etc.) depend on consistent use of hostnames.
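For illustration only (the setup scripts manage this for you, and the exact entries depend on your addressing plan), consistent naming means every server could carry identical /etc/hosts entries that map the hostnames to the control-network IPs used later in this guide:

```
192.168.50.10   OSController
192.168.50.9    OSHost1
192.168.50.8    OSHost2
```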
10 On the Set up users and passwords screen, type:
stack for the full name of the new user
stack for the username
stack for the password
11 On the Set up users and passwords screen, re-type the user password to verify it.
12 On the Set up users and passwords screen, select yes when you are warned that the password
consists of less than eight characters.
13 On the Set up users and passwords screen, select no when prompted to encrypt your home
directory.
14 On the Configure the clock screen, select your time zone.
15 On the Partition disks screen, select Guided - use entire disk and set up LVM.
16 On the Partition disks screen, select the disk to partition.
17 On the Partition disks screen, select yes to confirm that you want to write the changes to disk and
configure LVM.
18 On the Partition disks screen, select Continue to accept the maximum amount for the volume group
for guided partitioning (21.2 GB).
19 On the Partition disks screen, select yes to accept writing the changes to disk.
20 On the Configure the package manager screen, type a proxy path, if needed, in the format
http://[[user][:pass]@]host[:port]/, and then select Continue.
21 On the Configuring tasksel screen, select No automatic updates.
22 On the Software Selection screen, select OpenSSH server, and then select Continue.
23 On the Install the GRUB boot loader on a hard disk screen, select Yes to install the GRUB boot
loader.
24 When Finish the Installation screen appears, remove the Ubuntu image CD-ROM or USB memory
stick, so that the computer boots from the newly installed operating system.
25 On the Finish the Installation screen, select Continue to finish the installation.
4 Preparing the Servers and
Switches
Updating the Servers
Setting up the Ethernet Port for the Management Interface
Updating the Servers
After installing Ubuntu, update the package lists on each server:
sudo apt-get update
Setting up the Ethernet Port for the Management Interface
On every server, each physical Ethernet port must be identified (which one is eth0, eth1, eth2, etc.). This
is important since the setup script (see Creating Tenants on page 35) uses particular Ethernet
interfaces for data, control, storage, and management.
Note
If you do not know the port order on the server, plug a cable into each port and run either
dmesg or ethtool to identify each port. If you want to change the port order, edit the
following file: /etc/udev/rules.d/70-persistent-net.rules.
Eth0 is used for the management interface.
To set up eth0 with an IP address and DNS:
1 Set up an IP address for eth0:
ifconfig eth0 x.x.x.x/x
2 Set up DNS by adding the following line to the eth0 inet static stanza in /etc/network/interfaces:
dns-nameservers x.x.x.x
3 Define subnets for the management and control networks.
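As a sketch, the resulting eth0 stanza in /etc/network/interfaces might look like the following. The addresses are taken from the sample setup.config shown later in this guide; substitute your own management IP, gateway, and DNS server:

```
auto eth0
iface eth0 inet static
    address 10.68.61.116
    netmask 255.255.255.0
    gateway 10.68.61.1
    dns-nameservers 10.6.16.32
```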
5 Installing OpenStack Plugin 2.0 on
the Servers
Downloading and Preparing OpenStack
Installing OpenStack on the OSController
Installing OpenStack on the OSHosts
Downloading and Preparing OpenStack
To download and prepare OpenStack 2.0 on all servers:
1 Download the OpenStack 2.0 software package:
Note
After purchasing a license from Extreme Networks, you are provided with a URL to
download the software.
2 Copy the package onto the host under the stack user's home directory (/home/stack).
3 Log in as the stack user and untar the package:
stack@OSController$ tar xvf extr_openstack_v200bXX.tar.gz
Where XX is the software build number.
Installing OpenStack on the OSController
Note
The OSController must be up and running prior to stacking any compute nodes.
To install OpenStack 2.0 on OSController:
1 On the host, edit the setup.config file located in /home/stack/
extr_openstack_v200bXX/setup (where XX is the software build number), assuming the tar
file was extracted in the stack user's home directory. The setup script prepares the OSController.
The management, control, data, and storage interfaces are defined (two interfaces are bonded
together): Bond0=Ctrl, Bond1=Data, Bond2=Storage. If a single interface is defined, no bond is
created. If you make a mistake, you can edit the file again, and then re-run the setup script.
stack@OSController$ cd extr_openstack_v200bXX/setup
stack@OSController$ vi setup.config
Where XX is the software build number.
The following is a sample of the setup.config file on the OSController node. All parts that need
to be edited are in bold.
##########################################################################
#
# config file for extreme_setup_controller.sh and extreme_setup_compute.sh
#
##########################################################################

############################################
# env variables specific to this host
############################################

# Network interfaces:
# - Management network. This is an out-of-band network used for install
# and support. It should already be configured. It cannot be bonded.
MGMT_IF=eth0
# - Control network. This is how Openstack talks to itself. It's also
# where Openstack exposes its services. It will be configured by this
# script. If two interfaces are provided, they will be bonded.
CTRL_IF_1=eth1
CTRL_IF_2=eth2
# - Data network. This is where we'll layer tenant networks. It will be
# configured by this script. If two interfaces are provided, they will
# be bonded.
DATA_IF_1=eth3
DATA_IF_2=eth4
# - Storage network. Simple network for storage traffic. It will be
# configured by this script.
STORAGE_IF_1=eth5
STORAGE_IF_2=eth6

# IP addresses:
MGMT_IP=10.68.61.116
CTRL_IP=192.168.50.10


############################################
# env variables common for all hosts
############################################

# Networks:
MGMT_NET=10.68.61.0
MGMT_MASK=255.255.255.0
CTRL_NET=192.168.50.0
CTRL_MASK=255.255.255.0

# Other networking
DEFAULT_GATEWAY=10.68.61.1
DNS_NAMESERVERS=10.6.16.32
FLOATING_RANGE=192.168.50.0/27 # Change to your floating range; it does not have to overlap with the ctrl net

# Hostnames and control network IP addresses:
CONTROLLER=OSController
CONTROLLER_IP=192.168.50.10

COMPUTE_HOST1=OSHost1
COMPUTE_HOST1_IP=192.168.50.9

COMPUTE_HOST2=OSHost2
COMPUTE_HOST2_IP=192.168.50.8

#COMPUTE_HOST3=OSHost3
#COMPUTE_HOST3_IP=192.168.9.13

#COMPUTE_HOST4=OSHost4
#COMPUTE_HOST4_IP=192.168.9.13

#COMPUTE_HOST5=OSHost5
#COMPUTE_HOST5_IP=192.168.9.13

# Override branches and repos. If you want the defaults, then leave it commented.
# Use these if you are on the Corporate Network - otherwise comment out all of this below

UBUNTU_ROOT=http://10.68.61.14/ubuntu/201309131057/
PIP_MIRROR=http://10.68.61.14/pypi/201309120802/pypi/web
GIT_BASE=http://buildmaster.extremenetworks.com/anon-gitrepo
#DEVSTACK_BRANCH=extreme/grizzly
DEVSTACK_URL=http://buildmaster.extremenetworks.com/anon-gitrepo/openstack-dev/devstack.git

#DEVSTACK_BRANCH=extreme/grizzly
#DEVSTACK_URL=http://10.68.61.68/anon-git/devstack.git
#OPENSTACK_BRANCH=2013.1.3
#CINDER_BRANCH=${OPENSTACK_BRANCH}
#GLANCE_BRANCH=${OPENSTACK_BRANCH}
#HORIZON_BRANCH=${OPENSTACK_BRANCH}
#KEYSTONE_BRANCH=${OPENSTACK_BRANCH}
#NOVA_BRANCH=${OPENSTACK_BRANCH}
#QUANTUM_BRANCH=${OPENSTACK_BRANCH}
#OPENSTACKCLIENT_BRANCH=0.2.1
#CINDERCLIENT_BRANCH=1.0.5
#GLANCECLIENT_BRANCH=0.6.0
#KEYSTONECLIENT_BRANCH=0.2.5
#NOVACLIENT_BRANCH=2.10.0
#QUANTUM_CLIENT_BRANCH=2.2.6
2 Run the setup script extreme_setup_controller.sh setup.config for the first time (may
require a password).
stack@OSController$ ./extreme_setup_controller.sh setup.config
3 If instructed by the setup script, reboot the machine.
4 If you rebooted the machine in the previous step, re-run the setup script
extreme_setup_controller.sh setup.config.
stack@OSController$ cd extr_openstack_v200bXX/setup
stack@OSController$ ./extreme_setup_controller.sh setup.config
(upon successful installation)
OSController has been successfully setup for bond0:192.168.50.230
Where XX is the software build number.
This setup script automates the following steps required for an OpenStack controller host:
Sets up network interfaces (with NIC bonding if applicable) for this server.
Downloads devstack and other required software packages.
Configures devstack/localrc for OpenStack components and OpenStack 2.0.
Starts devstack and installs OpenStack packages.
Configures NFS and libvirt for live migration.
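Assuming two control NICs were bonded by the setup script, a quick way to confirm the result is to read the kernel's bonding status file. This is a generic Linux check, not part of the Extreme Networks scripts:

```shell
# Show bonding mode, member NICs, and link state for bond0, if it exists.
if [ -f /proc/net/bonding/bond0 ]; then
    grep -E 'Bonding Mode|Slave Interface|MII Status' /proc/net/bonding/bond0
else
    echo "bond0 not present (no bonding configured on this machine)"
fi
```

If bond0 is missing after a successful run, re-check the CTRL_IF/DATA_IF/STORAGE_IF pairs in setup.config and re-run the setup script.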
Installing OpenStack on the OSHosts
After the OSController is stacked and operational (see Installing OpenStack on the OSController on
page 13), you can set up the compute nodes (OSHost1 and OSHost2).
Note
Installing OpenStack 2.0 deletes all of the initial network configuration on the compute nodes.
To install the OpenStack 2.0 software package on OSHost1:
1 On this host, edit the setup.config file located in /home/stack/
extr_openstack_v200bXX/setup (where XX is the software build number), assuming the tar
file was extracted in the stack user's home directory. The setup script prepares OSHost1. The
management, control, data, and storage interfaces are defined (two interfaces are bonded
together): Bond0=Ctrl, Bond1=Data, Bond2=Storage. If a single interface is defined, no bond is
created. If you make a mistake, you can edit the file again, and then re-run the setup script.
stack@OSHost1$ cd /home/stack/extr_openstack_v200bXX/setup
stack@OSHost1$ vi setup.config
Where XX is the software build number.
The following is a sample of the setup.config file on the OSHost1 node. All parts that need to be
edited are in bold.
##########################################################################
#
# config file for extreme_setup_controller.sh and extreme_setup_compute.sh
#
##########################################################################

############################################
# env variables specific to this host
############################################

# Network interfaces:
# - Management network. This is an out-of-band network used for install
# and support. It should already be configured. It cannot be bonded.
MGMT_IF=eth0
# - Control network. This is how Openstack talks to itself. It's also
# where Openstack exposes its services. It will be configured by this
# script. If two interfaces are provided, they will be bonded.
CTRL_IF_1=eth1
CTRL_IF_2=eth2
# - Data network. This is where we'll layer tenant networks. It will be
# configured by this script. If two interfaces are provided, they will
# be bonded.
DATA_IF_1=eth3
DATA_IF_2=eth4
# - Storage network. Simple network for storage traffic. It will be
# configured by this script.
STORAGE_IF_1=eth5
STORAGE_IF_2=eth6

# IP addresses:
MGMT_IP=10.68.61.116
CTRL_IP=192.168.50.10


############################################
# env variables common for all hosts
############################################

# Networks:
MGMT_NET=10.68.61.0
MGMT_MASK=255.255.255.0
CTRL_NET=192.168.50.0
CTRL_MASK=255.255.255.0

# Other networking
DEFAULT_GATEWAY=10.68.61.1
DNS_NAMESERVERS=10.6.16.32
FLOATING_RANGE=192.168.50.0/27 # Change to your floating range; it does not have to overlap with the ctrl net

# Hostnames and control network IP addresses:
CONTROLLER=OSController
CONTROLLER_IP=192.168.50.10

COMPUTE_HOST1=OSHost1
COMPUTE_HOST1_IP=192.168.50.9

COMPUTE_HOST2=OSHost2
COMPUTE_HOST2_IP=192.168.50.8

#COMPUTE_HOST3=OSHost3
#COMPUTE_HOST3_IP=192.168.9.13

#COMPUTE_HOST4=OSHost4
#COMPUTE_HOST4_IP=192.168.9.13

#COMPUTE_HOST5=OSHost5
#COMPUTE_HOST5_IP=192.168.9.13

# Override branches and repos. If you want the defaults, then leave it commented.
# Use these if you are on the Corporate Network - otherwise comment out all of this below

UBUNTU_ROOT=http://10.68.61.14/ubuntu/201309131057/
PIP_MIRROR=http://10.68.61.14/pypi/201309120802/pypi/web
GIT_BASE=http://buildmaster.extremenetworks.com/anon-gitrepo
#DEVSTACK_BRANCH=extreme/grizzly
DEVSTACK_URL=http://buildmaster.extremenetworks.com/anon-gitrepo/openstack-dev/devstack.git

#DEVSTACK_BRANCH=extreme/grizzly
#DEVSTACK_URL=http://10.68.61.68/anon-git/devstack.git
#OPENSTACK_BRANCH=2013.1.3
#CINDER_BRANCH=${OPENSTACK_BRANCH}
#GLANCE_BRANCH=${OPENSTACK_BRANCH}
#HORIZON_BRANCH=${OPENSTACK_BRANCH}
#KEYSTONE_BRANCH=${OPENSTACK_BRANCH}
#NOVA_BRANCH=${OPENSTACK_BRANCH}
#QUANTUM_BRANCH=${OPENSTACK_BRANCH}
#OPENSTACKCLIENT_BRANCH=0.2.1
#CINDERCLIENT_BRANCH=1.0.5
#GLANCECLIENT_BRANCH=0.6.0
#KEYSTONECLIENT_BRANCH=0.2.5
#NOVACLIENT_BRANCH=2.10.0
#QUANTUM_CLIENT_BRANCH=2.2.6
2 Run the setup script extreme_setup_compute.sh setup.config for the first time (may
require a password).
stack@OSHost1$ ./extreme_setup_compute.sh setup.config
3 If instructed by the setup script, reboot the machine.
4 If you rebooted the machine in the previous step, re-run the setup script
extreme_setup_compute.sh setup.config.
stack@OSHost1$ cd /home/stack/extr_openstack_v200bXX/setup/
stack@OSHost1$ ./extreme_setup_compute.sh setup.config
(upon successful installation)
OSHost1 has been successfully setup for bond0:192.168.50.231
Where XX is the software build number.
5 Repeat procedure for OSHost2.
This setup script automates the following steps required for an OpenStack compute host:
Sets up network interfaces (with NIC bonding if applicable) for this server.
Downloads devstack and other required software packages.
Configures devstack/localrc for OpenStack components and OpenStack 2.0.
Starts devstack and installs OpenStack packages.
Configures NFS and libvirt for live migration.
6 Configuring Extreme Networks
Switches
Configuring Control and TOR Switches
Configuring the TOR1 Switch
Configuring the TOR2 Switch
Configuring the Control 1 Switch
Configuring the Control 2 Switch
Configuring Control and TOR Switches
For all the servers to communicate with each other, the TOR and control switches need to be set up
correctly. The TOR switches also need certain configurations applied to work correctly with OpenStack
2.0. The easiest way to accomplish this is to use a default.xsf file on each switch, and then run the
command unconfig switch all. This way the switches are configured correctly each time.
There are also loops in the control topology. Prevent them by using a UPM script, a software
redundant port, or by simply disabling one port on each TOR switch to ensure a loop-free control network.
Configuring the TOR1 Switch
Since OpenStack 2.0 uses virtual routers, all ports are deleted from virtual router vr-default, except for
the ctrl-net ports. You can create a separate ctrl-net virtual router if needed. Only port 1 is added to the
ctrl-net VLAN to prevent a loop.
To configure the TOR1 switch:
1 Begin with the factory default configuration.
2 Edit default.xsf:
vi default.xsf
configure snmp sysname TOR1
configure default del ports all
configure vlan mgmt ipaddress 10.68.61.226/24
configure iproute add default 10.68.61.1 vr vr-mgmt
configure dns-client add name-server 10.6.16.32 vr vr-mgmt
configure dns-client add name-server 10.6.17.21 vr vr-mgmt
configure dns-client add name-server 10.6.25.30 vr vr-mgmt
enable web http
create vlan ctrl-net
conf ctrl-net ipaddress 192.168.50.12/24
configure vlan ctrl-net add ports 1 untagged
conf vr "VR-Default" del port 3 - 58
disable idletimeout
Configuring the TOR2 Switch
Since OpenStack 2.0 uses virtual routers, all ports are deleted from virtual router vr-default, except for
the ctrl-net ports. You can create a separate ctrl-net virtual router if needed. Only port 1 is added to the ctrl-net
VLAN to prevent a loop.
To configure the TOR2 switch:
1 Begin with the factory default configuration.
2 Edit default.xsf:
vi default.xsf
configure snmp sysname TOR2
configure default del ports all
configure vlan mgmt ipaddress 10.68.61.227/24
configure iproute add default 10.68.61.1 vr vr-mgmt
configure dns-client add name-server 10.6.16.32
configure dns-client add name-server 10.6.17.21
configure dns-client add name-server 10.6.25.30
create vlan ctrl-net
conf ctrl-net ipaddress 192.168.50.13/24
configure vlan ctrl-net add ports 1 untagged
enable web http
conf vr "VR-Default" del port 3 - 58
disable idletimeout
enable cli-config-logging
Configuring the Control 1 Switch
To configure the Control 1 switch:
1 Begin with the factory default configuration.
2 Edit default.xsf:
vi default.xsf
configure snmp sysname CTRL1
configure default del ports all
configure vlan Mgmt ipaddress 10.68.61.224/24
configure iproute add default 10.68.61.1 vr VR-Mgmt
create vlan ctrl-net
conf "ctrl-net" tag 300
configure vlan ctrl-net ipaddress 192.168.50.10/24
enable sharing 19 grouping 19,20 algorithm address-based L2
conf "ctrl-net" add port 7-11,19 untagged
Configuring the Control 2 Switch
To configure the Control 2 switch:
1 Begin with the factory default configuration.
2 Edit default.xsf:
vi default.xsf
configure snmp sysname CTRL2
configure default del ports all
configure vlan Mgmt ipaddress 10.68.61.225/24
configure iproute add default 10.68.61.1 vr VR-Mgmt
create vlan ctrl-net
conf "ctrl-net" tag 300
configure vlan ctrl-net ipaddress 192.168.50.11/24
enable sharing 19 grouping 19,20 algorithm address-based L2
conf "ctrl-net" add port 7-11,19 untagged
7 Starting and Stopping Extreme
Networks OpenStack
Starting and Stopping Extreme Networks OpenStack
Modifying the Devstack localrc Parameters
Starting OpenStack on OSController
Starting OpenStack on OSHost1 and OSHost2
Verifying the OSController and OSHosts
Populating the Topology Database
Configuring the Network Fabric (LAG/MLAG)
Shutting Down OpenStack on All Servers
Logs
Starting and Stopping Extreme Networks OpenStack
The Extreme Networks reference topology setup (see Reference Topology Setup on page 6) with
OpenStack 2.0 must be started up in a specific order:
1 Start up all of the TOR switches.
2 (Optional) Modify devstack's localrc parameters (see Modifying the Devstack localrc Parameters on
page 23).
3 Start OpenStack on the controller (OSController) (see Starting OpenStack on OSController on page
24).
4 After the controller is completely up and running, start OpenStack on the compute hosts (OSHost1
and OSHost2) (see Starting OpenStack on OSHost1 and OSHost2 on page 25).
5 Populate the OpenStack 2.0 topology database from the controller (see Populating the Topology
Database on page 26).
6 Configure the network fabric (see Configuring the Network Fabric (LAG/MLAG) on page 31).
7 Create sample tenants and tenant networks from the controller (see Creating Tenants on page
35).
8 Create/migrate/delete tenant VM instances (see Creating Tenant Virtual Machine Instances on page
44, Migrating Tenant Virtual Machine Instances (Live Migration) on page 50, and Deleting Tenant
Virtual Machine Instances on page 51).
9 When finished, shut down OpenStack on all servers (see Shutting Down OpenStack on All Servers
on page 33).
Modifying the Devstack localrc Parameters
Several configuration parameters for running OpenStack Nova/Neutron are configured from devstack's
localrc (/home/stack/devstack/localrc). Extreme Networks has extended this localrc to
include key parameters for OpenStack 2.0 along with the OVS plugin.
Below is a list of parameters for OpenStack 2.0 that can be modified in localrc:
Parameter          Default   Description
EXTR_VLAN_START*   101**     Start ID of VLANs used for tenant VMs
EXTR_VLAN_END      4,000**   Last ID of VLANs used for tenant VMs
EXTR_MAX_CORES     16        Maximum number of virtual machines (VMs) per compute host

* VLAN/CVID/SVID of 1 is reserved on ExtremeXOS.
** Default values for VLAN/CVID/SVID used in this guide are only intended for ease of demonstration and have no
particular meanings.
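For example, to restrict tenant VLANs to the range 200-299 and cap each compute host at 8 VMs, the corresponding localrc lines (values purely illustrative) would be:

```
EXTR_VLAN_START=200
EXTR_VLAN_END=299
EXTR_MAX_CORES=8
```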
To edit localrc and have the changes take effect:
1 Ensure that all servers are unstacked.
./unstack.sh
2 Install OSController:
./extreme_setup_controller.sh setup.config
3 Change the parameters in localrc:
vi /opt/stack/devstack/localrc
4 Unstack OSController:
stack@OSController:/opt/stack/devstack$ ./unstack.sh
5 Re-stack OSController:
stack@OSController:/opt/stack/devstack$ ./stack.sh
6 Install OSHost1 and OSHost2:
./extreme_setup_compute.sh setup.config
Advanced Information:
If OpenStack 2.0 is to be plugged into OpenStack manually (by other means than using devstack), you
must provide the above configuration parameters in a quantum .ini file:
/etc/quantum/plugins/extreme/extreme_quantum_plugin.ini.
[VIRT_NETWORK]
# Virtual network plugin responsible for virtual network connectivity.
vplugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2

[SCHEDULER]
# Scheduler for VM instances.
scheduler = quantum.plugins.extreme.scheduler.topology_aware_scheduler.TopologyAwareScheduler
scheduler_vman = quantum.plugins.extreme.scheduler.topology_aware_scheduler_vman.TopologyAwareSchedulerVman
[GLOBAL_PREFS]
multi_tenants_per_host=True
multi_tenancy_solution_type=VLAN
min_vlan=101
max_vlan=4000
persistant_cfg=True
Additionally, OpenStack 2.0 must be specified as a core plugin in quantum conf (/etc/quantum/quantum.conf):
# Quantum plugin provider module
core_plugin = quantum.plugins.extreme.api.extreme_quantum_plugin_v2.ExtremeQuantumPluginV2
Also, OpenStack 2.0's Nova components need to be specified in nova conf: /etc/nova/nova.conf.
compute_scheduler_driver=quantum.plugins.extreme.nova.extreme_scheduler_v2.ExtrSchedulerV2
network_api_class=quantum.plugins.extreme.nova.quantumv2_api.API
For details about OpenStack configuration parameters, see the following files, which are controlled
from devstack's localrc and cannot be modified directly:
Nova: /etc/nova/nova.conf
Quantum: /etc/quantum/quantum.conf
OVS Quantum Plugin: /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
OpenStack 2.0: /etc/quantum/plugins/extreme/extreme_quantum_plugin.ini
Starting OpenStack on OSController
Note
Several configuration parameters for running OpenStack Nova/Neutron are configured from
devstack's localrc (/home/stack/devstack/localrc). Extreme Networks has extended
this localrc to include key parameters for OpenStack 2.0 along with the OVS plugin. If you
need to change these parameters, do so before starting OpenStack on the OSController (see
Modifying the Devstack localrc Parameters on page 23).
To start OpenStack on the OSController:
Run stack.sh to start OpenStack controller with the above configurations in localrc:
stack@OSController$ cd /opt/stack/devstack
stack@OSController$ ./stack.sh
(upon successful start)
Horizon is now available at http://192.168.9.11/
Keystone is serving at http://192.168.9.11:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: nova123
This is your host ip: 192.168.9.11
stack.sh completed in 91 seconds.
Starting and Stopping Extreme Networks OpenStack
Extreme Networks OpenStack Plugin 2.0 Installation Guide 24
Starting OpenStack on OSHost1 and OSHost2
To start OpenStack on OSHost1 and OSHost2:
After successfully starting the OSController, run stack.sh to start the OpenStack compute nodes (you do not need to modify localrc on the compute hosts).
Note
Always start the controller first before starting any compute hosts.
stack@OSHost1$ cd /opt/stack/devstack
stack@OSHost1$ ./stack.sh
(upon successful launch)
This is your host ip: 192.168.9.12
stack.sh completed in 19 seconds.
-----------------------------------------------
stack@OSHost2$ cd /opt/stack/devstack
stack@OSHost2$ ./stack.sh
(upon successful launch)
This is your host ip: 192.168.9.13
stack.sh completed in 19 seconds.
Verifying the OSController and OSHosts
After starting OpenStack on the OSController and OSHosts, you can verify that they are set up properly.
To verify OSController and OSHosts setup:
1 To grant permission to execute commands using OpenStack Compute (Nova), Networking
(Neutron), and Identity Service (Keystone):
source /opt/stack/devstack/openrc admin admin
Note
If interested, read the openrc file.
2 Type the following commands to view the status of OSController and OSHost1/OSHost2:
stack@OSController:~/extr_openstack_v200b20/setup$ nova service-list
+------------------+--------------+----------+---------+-------+----------------------------+
| Binary           | Host         | Zone     | Status  | State | Updated_at                 |
+------------------+--------------+----------+---------+-------+----------------------------+
| nova-conductor   | OSController | internal | enabled | up    | 2014-01-03T20:25:48.000000 |
| nova-cert        | OSController | internal | enabled | up    | 2014-01-03T20:25:50.000000 |
| nova-scheduler   | OSController | internal | enabled | up    | 2014-01-03T20:25:52.000000 |
| nova-consoleauth | OSController | internal | enabled | up    | 2014-01-03T20:25:46.000000 |
| nova-compute     | OSHost1      | nova     | enabled | up    | 2014-01-03T20:25:49.000000 |
| nova-compute     | OSHost2      | nova     | enabled | up    | 2014-01-03T20:25:54.000000 |
+------------------+--------------+----------+---------+-------+----------------------------+
stack@OSController:~/extr_openstack_v200b20/setup$ nova host-list
+--------------+-------------+----------+
| host_name | service | zone |
+--------------+-------------+----------+
| OSController | conductor | internal |
| OSController | cert | internal |
| OSController | scheduler | internal |
| OSController | consoleauth | internal |
| OSHost1 | compute | nova |
| OSHost2 | compute | nova |
+--------------+-------------+----------+
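Rather than eyeballing the tables, you can script the check. The following sketch is illustrative only (the `services_all_up` helper is not part of the plugin); it parses nova service-list style output and confirms that every service reports enabled and up.

```python
# Parse `nova service-list` ASCII-table output and verify that all
# services are enabled and up. Sketch only; assumes the layout above.
def services_all_up(table_text):
    rows = []
    for line in table_text.splitlines():
        line = line.strip()
        # Skip separator rows ("+---") and the header row.
        if not line.startswith("|") or line.startswith("| Binary"):
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        # cells: Binary, Host, Zone, Status, State, Updated_at
        rows.append(cells)
    return bool(rows) and all(r[3] == "enabled" and r[4] == "up" for r in rows)

sample = """\
+----------------+--------------+----------+---------+-------+----------------------------+
| Binary         | Host         | Zone     | Status  | State | Updated_at                 |
+----------------+--------------+----------+---------+-------+----------------------------+
| nova-conductor | OSController | internal | enabled | up    | 2014-01-03T20:25:48.000000 |
| nova-compute   | OSHost1      | nova     | enabled | up    | 2014-01-03T20:25:49.000000 |
+----------------+--------------+----------+---------+-------+----------------------------+
"""
print(services_all_up(sample))  # True
```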
Populating the Topology Database
To use OpenStack Plugin 2.0, the topology database must first be populated with the proper topology information: the actual physical topology.
The key topology information includes:
Zones and pods information. [zone / pod]
Hostnames of all the servers (OSController, Network and Compute) and their roles. [server_host]
L2 devices (TOR switches) login information. [device]
L2 device slot/port information. [dev_port]
Inter-connection between servers and TOR switches. [server_device_connect]
Inter-connection between TORs and other devices. [dev_interconnect]
You can create a new script or edit an existing sample script. Sample scripts are available under the /home/stack/extr_openstack_v200bXX/setup/configs directory (where XX is the build
number). Choose an example that is similar to your topology and modify it. The following procedure
shows how to edit an existing script.
Note
You must run the script on OSController every time devstack (stack.sh) is started, and before
creating virtual networks.
To edit an existing sample json script to update the topology database:
1 On OSController, navigate to the folder with the sample json scripts:
cd /home/stack/extr_openstack_v200bXX/setup/configs
Where XX is the build number.
2 Copy and edit a sample json script (in this example, bens_one_tier_two_tor_topology.json):
stack@OSController:~/extr_openstack_v200bXX/setup/configs$ ls
create_tenant.json  create_tenant_vr.json  one_tier_one_tor.json  one_tier_two_tor.json  openstack_init.json
cp one_tier_two_tor.json bens_one_tier_two_tor_topology.json
vi bens_one_tier_two_tor_topology.json
3 Modify the hosts part of the json file to define the servers and appliances in your OpenStack pod.
Ensure that the name matches the name on the server/appliance and that the number of ports/NICs
is defined. (These are the data NICs that make up the bond on the servers, and the single NIC for
border gateway devices.)
"hosts": [
    { "name":"OSController", "type":"network", "num_of_nics":"2" },
    { "name":"OSHost1", "type":"compute", "num_of_nics":"2" },
    { "name":"OSHost2", "type":"compute", "num_of_nics":"2" },
    { "name":"ose-bgw1", "type":"appliance", "num_of_nics":"1" },
    { "name":"ose-bgw2", "type":"appliance", "num_of_nics":"1" }
],
Note
ose-bgw1 and ose-bgw2 are not Extreme Networks devices, but "appliances" (routers, firewalls, etc.) that plug into the TOR switches and then forward the VM traffic.
4 Modify each TOR switch's information to include the appropriate management (control network) IP
address and the ports used for the connections between the TOR switches, the servers, and the
appliances:
Note
In this example:
TOR1 has an IP address of 192.168.50.12, ports 7, 3, 5, 8, 4, and 6 going to the servers, port 46 going to the appliance (Border Gateway), and ports 19 and 20 configured in a LAG between the TOR switches.
TOR2 has an IP address of 192.168.50.13, ports 7, 3, 5, 8, 4, and 6 going to the servers, port 46 going to the appliance (Border Gateway), and ports 19 and 20 configured in a LAG between the TOR switches.
If a port is used for the storage network, include "storage_port":"1"; if a port is a non-master member of the LAG, add "master_port":"0".
"devices": [
{
"name":"TOR1",
"mgmt_ip_addr":"192.168.50.12",
"default_vlan":"Default",
"slots" : [
{
"ports": [
{"port_id":"7"},
{"port_id":"3"},
{"port_id":"5"},
{"port_id":"19", "storage_port":"1"},
{"port_id":"20", "master_port":"0", "storage_port":"1"},
{"port_id":"8", "storage_port":"1"},
{"port_id":"4", "storage_port":"1"},
{"port_id":"6", "storage_port":"1"},
{"port_id":"46"}
]
}
]
},
{
"name":"TOR2",
"mgmt_ip_addr":"192.168.50.13",
"default_vlan":"Default",
"slots" : [
{
"ports": [
{"port_id":"8"},
{"port_id":"4"},
{"port_id":"6"},
{"port_id":"19", "storage_port":"1"},
{"port_id":"20", "master_port":"0", "storage_port":"1"},
{"port_id":"7", "storage_port":"1"},
{"port_id":"3", "storage_port":"1"},
{"port_id":"5", "storage_port":"1"},
{"port_id":"46"}
]
}
]
}
],
5 Specify the authentication to the switches. In this example, the username is admin, and the
password is blank.
"device_auth_info": {"user_name":"admin", "password":""},
6 Finally, identify the connections between the devices with the device name, followed by the port
number. Notice how the LAG has two physical connections in this example:
Port 7 on TOR1 is connected to one of the two OSController data network ports
Port 8 on TOR2 is connected to the other OSController data network port
Port 3 on TOR1 is connected to one of the two OSHost1 data network ports
Port 4 on TOR2 is connected to the other OSHost1 data network port
Port 5 on TOR1 is connected to one of the two OSHost2 data network ports
Port 6 on TOR2 is connected to the other OSHost2 data network port
Port 19 on TOR1 is connected as a master to port 19 on TOR2
Port 20 on TOR1 is connected as a slave to port 20 on TOR2
Port 46 on TOR1 is connected to Border Gateway appliance 1
Port 46 on TOR2 is connected to Border Gateway appliance 2
"connections": [
{"left":"device:TOR1:7","right":"host:OSController"},
{"left":"device:TOR2:8","right":"host:OSController"},
{"left":"device:TOR1:3","right":"host:OSHost1"},
{"left":"device:TOR2:4","right":"host:OSHost1"},
{"left":"device:TOR1:5","right":"host:OSHost2"},
{"left":"device:TOR2:6","right":"host:OSHost2"},
{"left":"device:TOR1:19","right":"device:TOR2:19"},
{"left":"device:TOR1:20","right":"device:TOR2:20"},
{"left":"device:TOR1:46","right":"host:ose-bgw1"},
{"left":"device:TOR2:46","right":"host:ose-bgw2"}
]
After you finish editing, the file should resemble the following json script, which is based on the reference topology (see Reference Topology Setup on page 6):
{
"zones": [
{
"name": "zone1",
"pods": [
{
"name": "pod1",
"hosts": [
{
"name": "OSController",
"type": "network",
"num_of_nics": "2"
},
{
"name": "OSHost1",
"type": "compute",
"num_of_nics": "2"
},
{
"name": "OSHost2",
"type": "compute",
"num_of_nics": "2"
},
{
"name": "ose-bgw1",
"type": "appliance",
"num_of_nics": "1"
},
{
"name": "ose-bgw2",
"type": "appliance",
"num_of_nics": "1"
}
],
"devices": [
{
"name": "TOR1",
"mgmt_ip_addr": "192.168.50.12",
"default_vlan": "Default",
"slots": [
{
"ports": [
{
"port_id": "7"
},
{
"port_id": "3"
},
{
"port_id": "5"
},
{
"port_id": "19",
"storage_port": "1"
},
{
"port_id": "20",
"master_port": "0",
"storage_port": "1"
},
{
"port_id": "8",
"storage_port": "1"
},
{
"port_id": "4",
"storage_port": "1"
},
{
"port_id": "6",
"storage_port": "1"
},
{
"port_id": "46"
}
]
}
]
},
{
"name": "TOR2",
"mgmt_ip_addr": "192.168.50.13",
"default_vlan": "Default",
"slots": [
{
"ports": [
{
"port_id": "8"
},
{
"port_id": "4"
},
{
"port_id": "6"
},
{
"port_id": "19",
"storage_port": "1"
},
{
"port_id": "20",
"master_port": "0",
"storage_port": "1"
},
{
"port_id": "7",
"storage_port": "1"
},
{
"port_id": "3",
"storage_port": "1"
},
{
"port_id": "5",
"storage_port": "1"
},
{
"port_id": "46"
}
]
}
]
}
],
"device_auth_info": {
"user_name": "admin",
"password": ""
},
"connections": [
{
"left": "device:TOR1:7",
"right": "host:OSController"
},
{
"left": "device:TOR2:8",
"right": "host:OSController"
},
{
"left": "device:TOR1:3",
"right": "host:OSHost1"
},
{
"left": "device:TOR2:4",
"right": "host:OSHost1"
},
{
"left": "device:TOR1:5",
"right": "host:OSHost2"
},
{
"left": "device:TOR2:6",
"right": "host:OSHost2"
},
{
"left": "device:TOR1:19",
"right": "device:TOR2:19"
},
{
"left": "device:TOR1:20",
"right": "device:TOR2:20"
},
{
"left": "device:TOR1:46",
"right": "host:ose-bgw1"
},
{
"left": "device:TOR2:46",
"right": "host:ose-bgw2"
}
]
}
]
}
]
}
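Before running extreme_prep_topology.py, it can save a debugging cycle to cross-check the file: every endpoint in "connections" should reference a declared host or a declared device port. The following sketch is illustrative only; the `undeclared_endpoints` helper is not part of the plugin.

```python
# Cross-check a topology json before feeding it to extreme_prep_topology.py:
# every connection endpoint must name a declared host or device port.
# Illustrative sketch only, not part of the plugin.
def undeclared_endpoints(topology):
    bad = []
    for zone in topology["zones"]:
        for pod in zone["pods"]:
            hosts = {h["name"] for h in pod["hosts"]}
            ports = {(d["name"], p["port_id"])
                     for d in pod["devices"]
                     for s in d["slots"]
                     for p in s["ports"]}
            for conn in pod["connections"]:
                for side in (conn["left"], conn["right"]):
                    kind, rest = side.split(":", 1)
                    if kind == "host" and rest not in hosts:
                        bad.append(side)
                    elif kind == "device":
                        dev, port = rest.split(":")
                        if (dev, port) not in ports:
                            bad.append(side)
    return bad

# A tiny topology with one good connection and one referencing an
# undeclared port and an undeclared host.
topo = {"zones": [{"pods": [{
    "hosts": [{"name": "OSHost1"}],
    "devices": [{"name": "TOR1",
                 "slots": [{"ports": [{"port_id": "3"}]}]}],
    "connections": [
        {"left": "device:TOR1:3", "right": "host:OSHost1"},
        {"left": "device:TOR1:48", "right": "host:ose-bgw1"},
    ],
}]}]}
print(undeclared_endpoints(topo))  # ['device:TOR1:48', 'host:ose-bgw1']
```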
7 Now that the information represents the physical topology, the database can be populated. Run the
following:
cd ..
stack@OSController:~/extr_openstack_v200b17/setup/$ pwd
/home/stack/extr_openstack_v200b17/setup/

stack@OSController:~/extr_openstack_v200b17/setup/$ source /opt/stack/devstack/openrc admin admin

stack@OSController:~/extr_openstack_v200b17/setup/$ ./extreme_prep_topology.py -i configs/bens_one_tier_two_tor_topology.json
Topology file: configs/bens_one_tier_two_tor_topology.json
Loading topology description...done.
Cleaning current topology database...done
Creating topology...
Creating zone:zone1
Creating pod:pod1
Creating host:OSController
Creating host:OSHost1
Creating host:OSHost2
Creating host:ose-bgw1
Creating host:ose-bgw2
Creating device:TOR1 nslots: 1 nports: 9
Creating device:TOR2 nslots: 1 nports: 9
Creating connection: device:TOR1:7<->host:OSController
Creating connection: device:TOR2:8<->host:OSController
Creating connection: device:TOR1:3<->host:OSHost1
Creating connection: device:TOR2:4<->host:OSHost1
Creating connection: device:TOR1:5<->host:OSHost2
Creating connection: device:TOR2:6<->host:OSHost2
Creating connection: device:TOR1:19<->device:TOR2:19
Creating connection: device:TOR1:20<->device:TOR2:20
Creating connection: device:TOR1:46<->host:ose-bgw1
Creating connection: device:TOR2:46<->host:ose-bgw2
Configuring the Network Fabric: LAG/MLAG
After the topology database is prepared, the next step is to run extreme_prep_openstack.py using configs/openstack_init.json as its input.
The openstack_init.json file sets up the initial fabric on the TOR switches. The storage VLAN is created, and sharing is enabled on the ISC links and server ports. Additionally, the script configures MLAG on both the TOR1 and TOR2 switches, complete with the ISC, and enables MLAG on the ports facing the OSController and compute nodes. A dynamic ACL is also created to block VRRP advertisements, making VRRP active/active on the TOR1 and TOR2 switches.
To configure the network fabric on the TOR switches:
1 Change the subnet, if needed, in the openstack_init.json file. The openstack_init.json
file describes the external network that virtual machines can use. This is a VLAN called PUBLIC on
the switch:
stack@OSController:~/extr_openstack_v200b17/setup$ cat configs/openstack_init.json
//
// This file contains a json description for an external
// network to be created by the extreme_prep_openstack.py
// script. This example creates a public network to which
// tenants within the openstack environment can connect
// for external access.
//
{
"extnets" : [
{
"name": "PUBLIC",
"owner": "admin",
"subnets" : [
{
"name": "PUBLIC-subnet",
"gateway": "192.168.24.1",
"range": "192.168.24.0/24"
}
]
}
]
}
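Note that this sample file (like the topology samples) contains // comment lines, which strict JSON parsers reject; the setup scripts presumably strip them before parsing. If you process these files with your own tooling, strip full-line comments first. A minimal sketch, with a hypothetical `load_commented_json` helper:

```python
# Strict JSON parsers reject the // comment lines used in these sample
# files; strip them before loading. Sketch only; it handles full-line
# comments, not comments appended after values.
import json

def load_commented_json(text):
    uncommented = "\n".join(
        line for line in text.splitlines()
        if not line.lstrip().startswith("//"))
    return json.loads(uncommented)

sample = """\
//
// Example external network description.
//
{
    "extnets": [
        {"name": "PUBLIC", "owner": "admin"}
    ]
}
"""
print(load_commented_json(sample)["extnets"][0]["name"])  # PUBLIC
```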
2 Run the script:
stack@OSController:~/extr_openstack_v200bXX/setup$ ./extreme_prep_openstack.py -i configs/openstack_init.json
OpenStack init file: configs/openstack_init.json
Loading initial OpenStack objects description...done.
Creating initial OpenStack objects...
Creating extnet:PUBLIC
Creating subnet:PUBLIC-subnet 192.168.24.1 192.168.24.0/24
Where XX is the build number.
The following excerpts show the TOR switch configuration changes made by the extreme_prep_openstack.py script:
TOR1 switch configuration additions:
#
# Module vlan configuration.
#
create vlan "isc-vlan"
configure vlan isc-vlan tag 4093
create vlan "storage-vlan"
configure vlan storage-vlan tag 4094
enable sharing 19 grouping 19-20 algorithm address-based L2 lacp
enable sharing 7 grouping 7 algorithm address-based L2 lacp
enable sharing 3 grouping 3 algorithm address-based L2 lacp
enable sharing 5 grouping 5 algorithm address-based L2 lacp
configure vlan isc-vlan add ports 19 tagged
configure vlan storage-vlan add ports 4, 6, 8, 19 tagged
configure vlan isc-vlan ipaddress 1.1.1.1 255.255.255.0
#
# Module acl configuration.
#
create access-list NO-ISC-VRRP-TOR1 " destination-address 224.0.0.18/32 ;" " deny ;" application "Cli"
configure access-list add NO-ISC-VRRP-TOR1 last priority 0 zone SYSTEM ports 19 ingress
TOR2 switch configuration additions:
#
# Module vlan configuration.
#
create vlan "isc-vlan"
configure vlan isc-vlan tag 4093
create vlan "storage-vlan"
configure vlan storage-vlan tag 4094
enable sharing 19 grouping 19-20 algorithm address-based L2 lacp
enable sharing 8 grouping 8 algorithm address-based L2 lacp
enable sharing 4 grouping 4 algorithm address-based L2 lacp
enable sharing 6 grouping 6 algorithm address-based L2 lacp
configure vlan isc-vlan add ports 19 tagged
configure vlan storage-vlan add ports 3, 5, 7, 19 tagged
configure vlan isc-vlan ipaddress 1.1.1.2 255.255.255.0
#
# Module acl configuration.
#
create access-list NO-ISC-VRRP-TOR2 " destination-address 224.0.0.18/32 ;" " deny ;" application "Cli"
configure access-list add NO-ISC-VRRP-TOR2 last priority 0 zone SYSTEM ports 19 ingress
configure access-list add NO-ISC-VRRP-TOR2 last priority 0 zone SYSTEM ports 20 ingress
#
# Module vsm configuration.
#
configure mlag ports convergence-control fast
create mlag peer "TOR1"
configure mlag peer "TOR1" ipaddress 1.1.1.1 vr VR-Default
enable mlag port 4 peer "TOR1" id 2
enable mlag port 6 peer "TOR1" id 3
enable mlag port 8 peer "TOR1" id 1
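The per-port sharing commands above follow a regular pattern, so they can be derived from the topology ports. The following is a hypothetical sketch of that generation step (the `sharing_commands` helper is not the plugin's actual code; extreme_prep_openstack.py produces the real configuration):

```python
# Emit ExtremeXOS sharing (LACP) commands for the ISC LAG and for
# single-port server LAGs, matching the pattern in the excerpts above.
# Hypothetical sketch; the real commands are generated by
# extreme_prep_openstack.py.
def sharing_commands(server_ports, isc_ports):
    cmds = []
    master = isc_ports[0]
    group = "%s-%s" % (isc_ports[0], isc_ports[-1])
    # The ISC LAG groups both inter-switch ports under the master port.
    cmds.append("enable sharing %s grouping %s algorithm address-based L2 lacp"
                % (master, group))
    # Each server-facing port is a single-member LAG of its own.
    for port in server_ports:
        cmds.append("enable sharing %s grouping %s algorithm address-based L2 lacp"
                    % (port, port))
    return cmds

# TOR1's ports from the excerpt above: servers on 7, 3, 5; ISC on 19-20.
for line in sharing_commands(server_ports=[7, 3, 5], isc_ports=[19, 20]):
    print(line)
```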
Shutting Down OpenStack on All Servers
You should shut down OpenStack after you have finished using it or before restarting it (for example,
after a configuration change).
Run unstack.sh on all servers:
stack@OSController$ cd /opt/stack/devstack
stack@OSController$ ./unstack.sh
Logs
Devstack generates log files for each OpenStack component in /opt/stack/logs/.
Useful log locations include:
Devstack: /opt/stack/logs/stack.sh.log
Nova API: /opt/stack/logs/screen/screen-n-api.log
Nova Compute: /opt/stack/logs/screen/screen-n-cpu.log
Nova Scheduler: /opt/stack/logs/screen/screen-n-sch.log
Quantum Server: /opt/stack/logs/screen/screen-q-svc.log
Quantum Agents: /opt/stack/logs/screen/screen-q-agt.log
Quantum Agents: /opt/stack/logs/screen/screen-q-dhcp.log
Quantum Agents: /opt/stack/logs/screen/screen-q-l3.log
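When something fails, a quick scan of these logs for ERROR or TRACE lines is often the fastest diagnosis. A minimal sketch (the `error_lines` helper is hypothetical; the log directory and file names are the ones listed above):

```python
# Scan devstack screen logs for ERROR/TRACE lines. Minimal sketch;
# the log paths are the ones listed above.
import os

def error_lines(log_dir, names=("screen-q-svc.log", "screen-n-cpu.log")):
    hits = {}
    for name in names:
        path = os.path.join(log_dir, name)
        if not os.path.exists(path):
            continue  # log not present on this node
        with open(path, errors="replace") as fh:
            hits[name] = [ln.rstrip() for ln in fh
                          if "ERROR" in ln or "TRACE" in ln]
    return hits

# Example: error_lines("/opt/stack/logs/screen")
```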
Chapter 8: Managing Tenants and Virtual Machines
Creating Tenants
Creating Tenants Using Python Script and Configuration File (L3 Agent)
Creating Tenants Using Python Script and Configuration File (Virtual Routers)
Verifying TOR Switch Configuration after Tenant Creation (L3 Agent)
Verifying TOR Switch Configuration after Tenant Creation (Virtual Routers)
Creating Tenant Virtual Machine Instances
Migrating Tenant Virtual Machine Instances (Live Migration)
Deleting Tenant Virtual Machine Instances
Creating Tenants
After setting up the initial fabric (see Configuring the Network Fabric: LAG/MLAG on page 31), you can
create tenants along with their respective networks. After creating tenants, you can then create virtual
machines (see Creating Tenant Virtual Machine Instances on page 44).
To create tenants, run the Python script extreme_create_tenant.py, which takes a configuration file as its input. The script is in the setup directory, and the configuration files are in its configs subdirectory.
There are two possible configuration files:
create_tenant.json: uses an L3 agent-based router (see Creating Tenants Using Python Script and Configuration File (L3 Agent) on page 35)
create_tenant_vr.json: uses Extreme Networks' virtual router (see Creating Tenants Using Python Script and Configuration File (Virtual Routers) on page 37)
Creating Tenants Using Python Script and Configuration File (L3
Agent)
One option for creating tenants is a configuration file that uses an L3 agent, with the PUBLIC network as the default gateway.
To create tenants using a Python script with a configuration file (L3 agent) as the input:
1 Edit the create_tenant.json configuration file to reflect the tenants and networks that need to be created:
stack@OSController: cd /home/stack/extr_openstack_v200bXX/setup/configs
stack@OSController: vi create_tenant.json
Where XX is the build number.
The following is the create_tenant.json configuration file. Edit it to match the configured
topology.
//
// This file provides a description of an initial
// tenant configuration that can be ingested by
// the extreme_create_tenant.py script.
//
// Through this interface, tenants, their instances,
// and associated internal and external networks can
// be created without requiring a series of individual
// OpenStack interface commands.
//
// This example uses an L3-agent based router.
{
    "tenants": [
        {
            "name": "tenant1",                          //Specify the tenant name
            "description": "tenant1 description",       //Specify the tenant description
            //
            // List of VMs
            //
            "instances": [                              //Begin VM definition
                {
                    "name": "tenant1-vm1",              //Specify VM name
                    "image": "cirros-0.3.1-x86_64-uec", //Specify VM image
                    "flavor": "m1.tiny",                //Specify VM flavor
                    // Networks connected to the VM
                    "networks": [
                        "tenant1-int-net"               //Specify the network for the VM. The network itself is created later in this file
                    ]
                },
                {                                       //Repeat for additional VMs
                    "name": "tenant1-vm2",
                    "image": "cirros-0.3.1-x86_64-uec",
                    "flavor": "m1.tiny",
                    // Networks connected to the VM
                    "networks": [
                        "tenant1-int-net"
                    ]
                }
            ],
            //
            // List of internal networks
            //
            "networks": [                               //Create the internal networks
                // Internal network on which the VMs will connect
                {
                    "name": "tenant1-int-net",          //Specify the internal network name
                    "subnets": [
                        {
                            "gateway": "192.168.1.1",     //Specify the network subnet IP, name, and network address
                            "name": "tenant1-int-subnet", //(Note: the switches will not be configured with the IP address)
                            "network": "192.168.1.0/24"
                        }
                    ]
                }
            ],
            //
            // List of routers
            //
            "routers": [
                // An l3-agent based router
                {
                    "name": "tenant1-router1",          //Specify the name for an L3-agent based router
                    // External network connected to the router. Optional.
                    "ext_network": "PUBLIC",            //Specify the external network. This network was configured earlier with extreme_prep_openstack.py and openstack_init.json
                    // Internal networks connected to the router
                    "int_subnets": [
                        "tenant1-int-subnet"            //Specify the internal subnet that will utilize the router
                    ]
                }
            ],
            //
            // List of users
            //
            "users": [
                {
                    "name": "t1user",                   //Create a user and configure a password
                    "password": "nova123"
                }
            ]
        }
    ]
}
2 Run the Python script extreme_create_tenant.py:
stack@OSController:~/extr_openstack_v200bXX/setup$ ./extreme_create_tenant.py -i configs/create_tenant.json
Tenant file: configs/create_tenant.json
Loading tenant description...done.
Creating tenant tenant1
Creating t1user
Creating internal network tenant1-int-net
Creating subnet tenant1-int-subnet
Creating router tenant1-router1
Creating server tenant1-vm1
Creating server tenant1-vm2
Where XX is the build number.
Creating Tenants Using Python Script and Configuration File
(Virtual Routers)
One option for creating tenants is a configuration file that uses virtual routers (see the following figure).
Figure 6: Reference Topology with Border Gateways with Created Tenant (Virtual Routers)
To create tenants using a Python script with a configuration file (virtual routers) as the input:
1 Edit the create_tenant_vr.json configuration file to reflect the tenants and networks that need to be created:
stack@OSController: cd /home/stack/extr_openstack_v200bXX/setup/configs
stack@OSController: vi create_tenant_vr.json
Where XX is the build number.
The following is the create_tenant_vr.json configuration file. Edit it to match the configured
topology.
//
// This file provides a description of an initial
// tenant configuration that can be ingested by
// the extreme_create_tenant.py script.
//
// Through this interface, tenants, their instances,
// and associated internal and external networks can
// be created without requiring a series of individual
// OpenStack interface commands.
//
// This example uses an Extreme VR based router.
{
    "tenants": [
        {
            "name": "tenant1",                          //Specify the tenant name
            "description": "tenant1 description",       //Specify the tenant description
            //
            // List of VMs
            //
            "instances": [                              //Begin VM definition
                {
                    "name": "tenant1-vm1",              //Specify VM name
                    "image": "cirros-0.3.1-x86_64-uec", //Specify VM image
                    "flavor": "m1.tiny",                //Specify VM flavor
                    // Networks connected to the VM
                    "networks": [
                        "tenant1-int-net"               //Specify the network for the VM. The network itself is created later in this file
                    ]
                },
                {                                       //Repeat for additional VMs
                    "name": "tenant1-vm2",
                    "image": "cirros-0.3.1-x86_64-uec",
                    "flavor": "m1.tiny",
                    // Networks connected to the VM
                    "networks": [
                        "tenant1-int-net"
                    ]
                }
            ],
            //
            // List of internal networks
            //
            "networks": [                               //Create the networks
                // Internal network on which the VMs will connect
                {
                    "name": "tenant1-int-net",          //Specify the network name
                    "extr:vrrp": true,                  //The "extr:vrrp": true flag makes the network internal
                    "subnets": [
                        {
                            "gateway": "192.168.1.1",     //Specify the network gateway
                            "name": "tenant1-int-subnet", //Specify the network subnet name
                            "network": "192.168.1.0/24"   //Specify the network subnet IP address and range
                        }
                    ]
                },
                // Network between TOR1 and BGW1
                {                                       //Repeat for the networks between BGWs and TORs
                    "name": "tenant1-bgw1",
                    "extr:host": "ose-bgw1",            //The "extr:host": "ose-bgw1" flag shows the network to be between TOR1 and BGW1
                    "subnets": [
                        {
                            "gateway": "192.168.68.1",
                            "name": "tenant1-bgw1-subnet",
                            "network": "192.168.68.0/29"
                        }
                    ]
                },
                // Network between TOR2 and BGW2
                {
                    "name": "tenant1-bgw2",
                    "extr:host": "ose-bgw2",            //The "extr:host": "ose-bgw2" flag shows the network to be between TOR2 and BGW2
                    "subnets": [
                        {
                            "gateway": "192.168.68.9",
                            "name": "tenant1-bgw2-subnet",
                            "network": "192.168.68.8/29"
                        }
                    ]
                },
                // Network between TOR1 and TOR2, for redundancy
                {
                    "name": "tenant1-bgw3",
                    "extr:host": "TOR1,TOR2",           //The "extr:host": "TOR1,TOR2" flag shows the network to be between TOR1 and TOR2
                    "subnets": [
                        {
                            "name": "tenant1-bgw3-subnet",
                            "network": "192.168.68.16/29"
                        }
                    ],
                    "ports": [                          //Since this network is between TOR1 and TOR2, specify each TOR IP as a port
                        {
                            "name": "tenant1-bgw3-TOR1",     //Port name for TOR1 should match the port name configured below
                            "fixed_ip": "192.168.68.19",     //fixed_ip will be assigned to the TOR bgw3 subnet
                            "subnet": "tenant1-bgw3-subnet", //Specify the bgw3 subnet name
                            "extr:device": "TOR1"            //"extr:device": "TOR1" is used to install the port on TOR1 only
                        },
                        {                                    //Repeat the port information as above for TOR2
                            "name": "tenant1-bgw3-TOR2",
                            "fixed_ip": "192.168.68.20",
                            "subnet": "tenant1-bgw3-subnet",
                            "extr:device": "TOR2"
                        }
                    ]
                }
            ],
            //
            // List of routers
            //
            "routers": [
                // An Extreme VR based router
                {
                    "name": "tenant1-router1",          //Specify the name of a VR-based router
                    "extr:vr": true,                    //The "extr:vr": true flag makes the router Extreme VR based
                    // Internal networks connected to the router
                    "int_subnets": [                    //Specify the internal networks connected to the router (bgw1 and bgw2 are treated as internal)
                        "tenant1-int-subnet",
                        "tenant1-bgw1-subnet",
                        "tenant1-bgw2-subnet"
                    ],
                    // Internal ports connected to the router
                    "int_ports": [                      //Specify the internal ports for the bgw3 network
                        "tenant1-bgw3-TOR1",
                        "tenant1-bgw3-TOR2"
                    ],
                    "routes": [                         //Specify the internal routes for TOR1 and TOR2
                        // Primary route for TOR1
                        {
                            "destination": "0.0.0.0/0", //Default gateway to BGW1
                            "nexthop": "192.168.68.2",
                            "device": "TOR1",
                            "metric": "1"
                        },
                        // Backup route for TOR1
                        {
                            "destination": "0.0.0.0/0", //Backup default gateway through TOR2
                            "nexthop": "192.168.68.20",
                            "device": "TOR1",
                            "metric": "2"
                        },
                        // Primary route for TOR2
                        {
                            "destination": "0.0.0.0/0", //Default gateway to BGW2
                            "nexthop": "192.168.68.10",
                            "device": "TOR2",
                            "metric": "1"
                        },
                        // Backup route for TOR2
                        {
                            "destination": "0.0.0.0/0", //Backup default gateway through TOR1
                            "nexthop": "192.168.68.19",
                            "device": "TOR2",
                            "metric": "2"
                        }
                    ]
                }
            ],
            //
            // List of users
            //
            "users": [
                {
                    "name": "t1user",                   //Specify the username and password for the tenant
                    "password": "nova123"
                }
            ]
        }
    ]
}
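One easy mistake in this file is a static route whose nexthop does not fall inside any declared subnet. The following sketch checks the nexthops against the subnets, using the addresses from the example above; the `unreachable_nexthops` helper is illustrative only, not part of the plugin.

```python
# Check that each static route's nexthop falls inside one of the
# declared tenant subnets. Illustrative sketch only, not part of
# the plugin.
import ipaddress

def unreachable_nexthops(subnets, routes):
    nets = [ipaddress.ip_network(s) for s in subnets]
    bad = []
    for r in routes:
        hop = ipaddress.ip_address(r["nexthop"])
        if not any(hop in n for n in nets):
            bad.append(r["nexthop"])
    return bad

# Subnets and route nexthops from the VR example above.
subnets = ["192.168.1.0/24", "192.168.68.0/29",
           "192.168.68.8/29", "192.168.68.16/29"]
routes = [
    {"nexthop": "192.168.68.2"},   # BGW1, inside 192.168.68.0/29
    {"nexthop": "192.168.68.20"},  # TOR2 port, inside 192.168.68.16/29
    {"nexthop": "192.168.68.10"},  # BGW2, inside 192.168.68.8/29
    {"nexthop": "192.168.68.19"},  # TOR1 port, inside 192.168.68.16/29
]
print(unreachable_nexthops(subnets, routes))  # []
```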
2 Run the Python script extreme_create_tenant.py:
./extreme_create_tenant.py -i configs/create_tenant_vr.json
Verifying TOR Switch Configuration after Tenant Creation (L3
Agent)
After creating a tenant using an L3 Agent with PUBLIC network used as the default gateway (see
Creating Tenants Using Python Script and Configuration File (L3 Agent) on page 35), you should make
sure that the TOR switches were configured correctly.
After using the L3 agent method, a single VLAN should extend across to the border gateways, and the PUBLIC VLAN should be connected to the L3 agent.
Show VLAN on TOR switches:
* OS-Tor3.9 # sh vlan
---------------------------------------------------------------------------------
Name                 VID   Protocol Addr      Flags         Proto  Ports   Virtual
                                                                   Active  router
                                                                   /Total
---------------------------------------------------------------------------------
Default              1     ----------------   -----------   ANY    0 /0    VR-Default
isc-vlan             4093  1.1.1.2      /24   ------I----   ANY    1 /1    VR-Default
Mgmt                 4095  10.68.67.23  /24   -----------   ANY    1 /1    VR-Mgmt
OSMgmt               4092  192.168.9.23 /24   -----------   ANY    1 /2    VR-OSMgmt
PUBLIC               4091  ----------------   -----------   ANY    2 /2    VR-Default
storage-vlan         4094  ----------------   -----------   ANY    4 /4    VR-Default
tenant1-int-net-101  101   ----------------   -----------   ANY    3 /3    VR-Default
---------------------------------------------------------------------------------
Flags : (B) BFD Enabled, (c) 802.1ad customer VLAN, (C) EAPS Control VLAN,
        (d) Dynamically created VLAN, (D) VLAN Admin Disabled,
        (e) CES Configured, (E) ESRP Enabled, (f) IP Forwarding Enabled,
        (F) Learning Disabled, (i) ISIS Enabled,
        (I) Inter-Switch Connection VLAN for MLAG,
        (k) PTP Configured, (l) MPLS Enabled, (L) Loopback Enabled,
        (m) IPmc Forwarding Enabled, (M) Translation Member VLAN or Subscriber VLAN,
        (n) IP Multinetting Enabled, (N) Network Login VLAN, (o) OSPF Enabled,
        (O) Flooding Disabled, (p) PIM Enabled, (P) EAPS protected VLAN,
        (r) RIP Enabled, (R) Sub-VLAN IP Range Configured,
        (s) Sub-VLAN, (S) Super-VLAN, (t) Translation VLAN or Network VLAN,
        (T) Member of STP Domain, (v) VRRP Enabled, (V) VPLS Enabled,
        (W) VPWS Enabled, (Z) OpenFlow Enabled
Total number of VLAN(s) : 7
Verifying TOR Switch Configuration after Tenant Creation (Virtual
Routers)
After creating a tenant using virtual routers (see Creating Tenants Using Python Script and
Configuration File (Virtual Routers) on page 37), you should make sure that the TOR switches were
configured correctly.
After using the virtual router method, there should be three VLANs created on TOR1 and three on TOR2
in support of a single tenant network.
As seen below on TOR1, VLANs T1-bgw1net-102, T1-bgw3net-104, and T1-net-101 were created within the VR-T1-router virtual router. T1-net-101 is the VLAN that extends to the servers: it has the port of the OSController, but no other server (yet). When a virtual machine (VM) is created, the server that houses that particular VM will then have its port added on the switch. The T1-bgw1net-102 VLAN extends to the Border Gateways and will have port 46 added in this case (see Reference Topology Setup on page 6). T1-bgw3net-104 is used as a redundancy VLAN: if the Border Gateway on TOR1 goes down, there is a route within the virtual router to route the traffic through this VLAN to the TOR2 switch and use its Border Gateway.
1 Show VLANs on TOR switches:
* TOR1.2 # sh vlan
------------------------------------------------------------------------------
Name            VID   Protocol Addr       Flags   Proto  Ports   Virtual router
                                                         Active/Total
------------------------------------------------------------------------------
ctrl-net        4092  192.168.50.12 /24   -----   ANY    2 /2    VR-Default
Default         1     -----------------   -----   ANY    0 /0    VR-Default
isc-vlan        4093  1.1.1.1       /24   I----   ANY    1 /1    VR-Default
Mgmt            4095  10.68.61.226  /24   -----   ANY    1 /1    VR-Mgmt
PUBLIC          4091  -----------------   -----   ANY    3 /3    VR-Default
storage-vlan    4094  -----------------   -----   ANY    3 /4    VR-Default
T1-bgw1net-102  102   30.0.0.1      /24   -f---   ANY    1 /1    VR-T1-router
T1-bgw3net-104  104   50.0.0.1      /24   -f---   ANY    1 /1    VR-T1-router
T1-net-101      101   10.0.0.3      /24   -f-v-   ANY    2 /2    VR-T1-router
------------------------------------------------------------------------------

* TOR2.2 # sh vlan
------------------------------------------------------------------------------
Name            VID   Protocol Addr       Flags   Proto  Ports   Virtual router
                                                         Active/Total
------------------------------------------------------------------------------
ctrl-net        4092  192.168.50.13 /24   -----   ANY    1 /1    VR-Default
Default         1     -----------------   -----   ANY    0 /0    VR-Default
isc-vlan        4093  1.1.1.2       /24   I----   ANY    1 /1    VR-Default
Mgmt            4095  10.68.61.227  /24   -----   ANY    1 /1    VR-Mgmt
PUBLIC          4091  -----------------   -----   ANY    3 /3    VR-Default
storage-vlan    4094  -----------------   -----   ANY    4 /4    VR-Default
T1-bgw2net-103  103   40.0.0.1      /24   -f---   ANY    1 /1    VR-T1-router
T1-bgw3net-104  104   50.0.0.2      /24   -f---   ANY    1 /1    VR-T1-router
T1-net-101      101   10.0.0.4      /24   -f--v   ANY    2 /2    VR-T1-router
------------------------------------------------------------------------------
2 Display the contents of the IP routing tables:
* TOR1.7 # sh iproute vr "VR-T1-router"
Ori  Destination    Gateway    Mtr  Flags          VLAN            Duration
#s   Default Route  30.0.0.10  1    UG---S-um--f-  T1-bgw1net-102  0d:0h:0m:24s
 s   Default Route  50.0.0.2   2    UG---S-um----  T1-bgw3net-104  0d:0h:0m:23s
#d   10.0.0.0/24    10.0.0.3   1    U------um--f-  T1-net-101      0d:0h:0m:40s
#d   30.0.0.0/24    30.0.0.1   1    U------um--f-  T1-bgw1net-102  0d:0h:0m:33s
#d   50.0.0.0/24    50.0.0.1   1    U------um--f-  T1-bgw3net-104  0d:0h:0m:28s

* TOR2.1 # sh iproute vr "VR-T1-router"
Ori  Destination    Gateway    Mtr  Flags          VLAN            Duration
#s   Default Route  40.0.0.10  1    UG---S-um--f-  T1-bgw2net-103  0d:0h:0m:46s
 s   Default Route  50.0.0.1   2    UG---S-um----  T1-bgw3net-104  0d:0h:0m:45s
#d   10.0.0.0/24    10.0.0.4   1    U------um--f-  T1-net-101      0d:0h:1m:1s
#d   40.0.0.0/24    40.0.0.1   1    U------um--f-  T1-bgw2net-103  0d:0h:0m:54s
#d   50.0.0.0/24    50.0.0.2   1    U------um--f-  T1-bgw3net-104  0d:0h:0m:50s
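As a quick sanity check, the expected count of tenant VLANs per TOR switch can also be verified by filtering the sh vlan output from step 1 for rows that belong to VR-T1-router. The following is a sketch only, not part of the plugin: tenant_vlans is a hypothetical helper, and it assumes the switch output has been captured (for example, over an SSH session to the switch) and is piped in on stdin.

```shell
# tenant_vlans: given "sh vlan" output on stdin, print the names of
# the VLANs that belong to the VR-T1-router virtual router.
tenant_vlans() {
    awk '$NF == "VR-T1-router" { print $1 }'
}

# Demonstrate on captured rows of the TOR1 table shown above.
sample='T1-bgw1net-102 102 30.0.0.1/24 -f--- ANY 1/1 VR-T1-router
T1-bgw3net-104 104 50.0.0.1/24 -f--- ANY 1/1 VR-T1-router
T1-net-101 101 10.0.0.3/24 -f-v- ANY 2/2 VR-T1-router'
printf '%s\n' "$sample" | tenant_vlans
# prints the three VLAN names, one per line
```

On TOR1 this should list T1-bgw1net-102, T1-bgw3net-104, and T1-net-101; on TOR2, T1-bgw2net-103 takes the place of T1-bgw1net-102.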
Creating Tenant Virtual Machine Instances
OpenStack is now ready to manage the lifecycle of tenant virtual machine (VM) instances. As a
tenant (or admin), you can create new VM instances and delete existing ones (see Deleting Tenant
Virtual Machine Instances on page 51). As an admin, you can also migrate an active VM instance to a
new server (see Migrating Tenant Virtual Machine Instances (Live Migration) on page 50).
1 In a web browser on any machine, connect to the Horizon Dashboard at 192.168.9.11. The Log In
window appears.
Horizon is a web-based GUI for OpenStack. It runs on the OSController, and the browser connects to
it via the OSController's control-net IP address.
Figure 7: Horizon Login Window
2 Type login credentials in the User Name and Password boxes.
Log in as:
Tenant: When the Python script (see Creating Tenants on page 35) creates a tenant, it provides
a user name and password that can be used for that tenant. Logging in as a tenant provides
access only to that tenant's resources.
Admin: Logging in as admin allows you to access all tenants. User Name = admin; Password =
nova123.
The Instances window appears.
Figure 8: Instances Window
3 On the left panel, click the Project tab.
4 Click Instances.
5 Click the Launch Instance button. The Launch Instance window appears.
Figure 9: Launch Instance Window
6 From the Image drop-down list, select an image.
7 In the Instance Name box, type a name.
8 Click the Networking tab. The Launch Instance: Networking Tab window appears.
Figure 10: Launch Instance: Networking Tab Window
9 Select T1-net. This is the tenant's VLAN within the T1 virtual router. The other VLANs connect to the
Border Gateways.
Figure 11: Launch Instances Window with T1-net Selected
T1-net now appears under the heading Selected Networks.
10 Click the Launch button.
The Instances window appears showing the progress of creating the instance "Test1".
Figure 12: Instances Window with Instance "Test1" Creation in Progress
After the instance is created, it should have its name and an IP address and show no errors.
Figure 13: Instances Window with Successfully Created Instance "Test1"
11 To see additional details about the instance, click the instance name. The Instance Detail window
appears.
Figure 14: Instance Detail Window
12 Click the Console tab. The Instance Console window appears.
Figure 15: Instance Console Window
Note
If the screen does not respond, click the gray status bar.
The VM has an IP address within the T1-net and can use the T1 virtual router and its VLANs to access
the Internet.
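The same launch can also be performed from the command line on the OSController instead of Horizon. The sketch below only assembles and prints the command: the image and flavor names are placeholders, NET_ID must be replaced with the UUID of T1-net (from quantum net-list), and the nova boot flags shown are the standard nova CLI ones.

```shell
# Hypothetical CLI equivalent of launching the "Test1" instance on T1-net.
IMAGE=cirros                 # placeholder image name
FLAVOR=m1.tiny               # placeholder flavor name
NET_ID='<uuid-of-T1-net>'    # placeholder; look it up with: quantum net-list
NAME=Test1

cmd="nova boot --image $IMAGE --flavor $FLAVOR --nic net-id=$NET_ID $NAME"
echo "$cmd"
# Source the tenant credentials first, then run the printed command:
#   source /opt/stack/devstack/openrc <user> <tenant>
```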
Migrating Tenant Virtual Machine Instances (Live Migration)
A running VM instance can be migrated to a new server without any interruption to users. The
migration command can be run only from the controller, using the nova API.
Note
VM migration is supported only between servers with compatible chipsets.
The following procedure explains how to migrate the tenant1 VM instance t1Server1 created
earlier from OSHost1 to OSHost2 (see Creating Tenant Virtual Machine Instances on page 44).
1 Authenticate as a user admin of tenant tenant1.
stack@OSController$ cd /opt/stack/devstack
stack@OSController$ source openrc admin tenant1
2 List all VM instances of this tenant to make sure the intended instance exists.
stack@OSController$ nova list
3 Perform the live migration.
stack@OSController$ nova live-migration t1Server1 OSHost2
4 Confirm that the instance is correctly migrated to the new target server. See the following figure.
+--------------------------+-----------+--------+----------------------+
| ID                       | Name      | Status | Networks             |
+--------------------------+-----------+--------+----------------------+
| 753b2144-5d59-4670-. . . | t1Server1 | ACTIVE | tenant1-net=10.0.1.2 |
| b0daa302-b0b6-4ee3-. . . | t1Server2 | ACTIVE | tenant1-net=10.0.1.3 |
+--------------------------+-----------+--------+----------------------+
Figure 16: All Instances Window
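Because live migration is asynchronous, it can be convenient to poll nova list until the instance settles back to ACTIVE on the new host. The small sketch below shows only the parsing step: status_of is a hypothetical helper (not a nova command), and the sample row mirrors the table format shown above.

```shell
# status_of NAME: read a "nova list" table on stdin and print the
# Status column (4th '|'-separated field) for the named instance.
status_of() {
    awk -F'|' -v name="$1" '$3 ~ name { gsub(/ /, "", $4); print $4 }'
}

# Demonstrate on a captured row of the table shown above.
sample='| 753b2144-5d59-4670-... | t1Server1 | ACTIVE | tenant1-net=10.0.1.2 |'
printf '%s\n' "$sample" | status_of t1Server1
# prints: ACTIVE

# Live use on the OSController would be along these lines:
#   while [ "$(nova list | status_of t1Server1)" != "ACTIVE" ]; do sleep 5; done
```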
Deleting Tenant Virtual Machine Instances
To delete a tenant virtual machine (VM) instance:
1 Delete the interface using quantum router-interface-delete router_id port=port_id:
stack@OSController:~$ quantum router-interface-delete aa959db9-2f89-4bee-b99c-f05cbe41070b port=03a3d251-aebb-47f2-a53a-da209fac8844
Removed interface from router aa959db9-2f89-4bee-b99c-f05cbe41070b.
2 Repeat the preceding step until all ports belonging to the tenant subnet are detached from the router.
3 Use quantum router-port-list to check which existing ports are attached to the router:
stack@OSController:~$ quantum router-port-list aa959db9-2f89-4bee-b99c-f05cbe41070b
+--------------------------------------+-----------------------------+-------------------+---------------------------------------------------------------------------------+
| id                                   | name                        | mac_address       | fixed_ips                                                                       |
+--------------------------------------+-----------------------------+-------------------+---------------------------------------------------------------------------------+
| 03a3d251-aebb-47f2-a53a-da209fac8844 | TOR2 VLAN port              | fa:16:3e:2d:c4:b7 | {"subnet_id": "a24edd2f-16f2-4345-a75f-34f5576b7754", "ip_address": "10.1.0.4"} |
| 102a7f01-0a9b-469a-bec5-5d7dd926ec40 | T2-bgw3-TOR1                | fa:16:3e:5d:9d:28 | {"subnet_id": "dab4a5c3-d121-437f-add9-c9e3f5a1bac9", "ip_address": "50.1.0.1"} |
| 3517f1d4-adb0-4daf-9577-3a855dbdf55c | T2-int-subnet gateway port  | fa:16:3e:c4:31:dd | {"subnet_id": "a24edd2f-16f2-4345-a75f-34f5576b7754", "ip_address": "10.1.0.1"} |
| 3edd4354-8583-4b23-9a94-11feb8105793 | T2-bgw1-subnet gateway port | fa:16:3e:94:ee:a4 | {"subnet_id": "430a0f09-d50e-4642-b41c-62452ec0983b", "ip_address": "30.1.0.1"} |
| 4e5c2815-88b6-422a-920c-56bdc22609ca | TOR1 VLAN port              | fa:16:3e:11:78:c3 | {"subnet_id": "a24edd2f-16f2-4345-a75f-34f5576b7754", "ip_address": "10.1.0.3"} |
| 6002a994-f454-4826-9fbe-1e5ab12d2f25 | T2-bgw3-TOR2                | fa:16:3e:47:84:26 | {"subnet_id": "dab4a5c3-d121-437f-add9-c9e3f5a1bac9", "ip_address": "50.1.0.2"} |
| d3f6a0d9-daa7-4d6c-b351-cdaf4913e93b | TOR1 VLAN port              | fa:16:3e:00:8c:60 | {"subnet_id": "430a0f09-d50e-4642-b41c-62452ec0983b", "ip_address": "30.1.0.2"} |
+--------------------------------------+-----------------------------+-------------------+---------------------------------------------------------------------------------+
4 Delete the subnet using quantum subnet-delete subnet_id:
stack@OSController:~$ quantum subnet-delete 1637bc33-c18b-4722-92a3-d8a9903081e7
Deleted subnet: 1637bc33-c18b-4722-92a3-d8a9903081e7
5 Delete the network using quantum net-delete network_id:
stack@OSController:~$ quantum net-delete d088bdfe-c97d-43d0-a23f-16e413d35631
Deleted network: d088bdfe-c97d-43d0-a23f-16e413d35631
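Steps 1 through 3 can be scripted so that every port is detached from the router in one pass. This is a sketch under the assumption that quantum prints the default ASCII table shown above: port_ids is a hypothetical helper, and the commented-out loop at the end is the live part.

```shell
# port_ids: pull the id column (2nd '|'-separated field) out of the
# quantum ASCII table, skipping the header and border rows.
port_ids() {
    awk -F'|' '/^\| [0-9a-f]+-[0-9a-f]+-/ { gsub(/ /, "", $2); print $2 }'
}

# Demonstrate on two captured rows of the table shown above.
sample='| 03a3d251-aebb-47f2-a53a-da209fac8844 | TOR2 VLAN port | fa:16:3e:2d:c4:b7 | ... |
| 4e5c2815-88b6-422a-920c-56bdc22609ca | TOR1 VLAN port | fa:16:3e:11:78:c3 | ... |'
printf '%s\n' "$sample" | port_ids
# prints the two port ids, one per line

# Live use against the router from this example:
#   ROUTER=aa959db9-2f89-4bee-b99c-f05cbe41070b
#   quantum router-port-list "$ROUTER" | port_ids | while read -r p; do
#       quantum router-interface-delete "$ROUTER" port="$p"
#   done
```

Note that detaching the gateway ports of the Border Gateway subnets this way also removes them, so use the loop only when the whole tenant network is being torn down.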
9 Resetting the Testbed
To reset servers and switches in the testbed back to their original state:
1 On OSController:
/opt/stack/devstack/unstack.sh
sudo /etc/init.d/nfs-kernel-server stop
sudo sed -i '/stack/d' /etc/exports
sudo sed -i '/instances/d' /etc/exports
sudo rm -rf /opt/stack && sudo rm -rf /etc/nova && sudo rm -rf /etc/quantum && sudo rm -rf /etc/keystone && sudo rm -rf /etc/glance
sudo reboot
2 On OSHost1 and OSHost2:
/opt/stack/devstack/unstack.sh
sudo /etc/init.d/umountnfs.sh stop
sudo sed -i '/openstack/d' /etc/fstab
sudo sed -i '/OSController/d' /etc/fstab
sudo rm -rf /opt/cinder && sudo rm -rf /opt/stack && sudo rm -rf /etc/nova && sudo rm -rf /etc/quantum && sudo rm -rf /etc/keystone && sudo rm -rf /etc/glance
3 On TOR1 and TOR2 switches:
These switches should have the default.xsf file, so they only need to be completely unconfigured:
unconfig switch all
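When the testbed is reset often, step 2 can be fanned out to both compute hosts from the controller. This is a sketch only: it assumes passwordless SSH as the stack user to each host, the command list is abbreviated (the full list is in step 2 above), and the loop prints the SSH commands rather than executing them.

```shell
# Hypothetical fan-out of the per-host reset over SSH (dry run).
hosts='OSHost1 OSHost2'
reset_cmds='/opt/stack/devstack/unstack.sh; sudo /etc/init.d/umountnfs.sh stop'

for h in $hosts; do
    # To actually run it, replace "echo" with: ssh "stack@$h" "$reset_cmds"
    echo "ssh stack@$h \"$reset_cmds\""
done
```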
A Glossary
Compute node
Controller
DHCP
Hypervisor
Instance
KVM
LAG
Libvirt
MLAG
Tenant
VM
VR
Compute node
Compute nodes form the resource core of the OpenStack compute cloud, providing the processing,
memory, network, and storage resources to run instances.
Controller
The controller orchestrates the network configuration of nodes including IP addresses, VLANs,
bridging, and manages routing for both public and private networks. The controller provides virtual
networks to enable compute servers to interact with each other and with the public network. All
machines must have a public and private network interface.
DHCP
Dynamic Host Configuration Protocol. DHCP is used by servers on an IP network to allocate IP
addresses to computers. DHCP automates the IP address configuration of a computer without a
network administrator. IP addresses are typically selected from a range of assigned IP addresses stored
in a database on the server and issued to a computer which requests a new IP address.
Hypervisor
A hypervisor or virtual machine monitor (VMM) is a piece of computer software, firmware or hardware
that creates and runs virtual machines.
Instance
A running virtual machine, or a virtual machine that can be used like a hardware server.
KVM
Kernel-based virtual machine. KVM is a virtualization infrastructure for the Linux kernel that turns it into
a hypervisor. See Hypervisor.
LAG
Link aggregation group. A LAG is an open-standards (IEEE 802.3ad) solution that bundles ports together
for multi-path support to increase resiliency and redundancy. LAGs allow you to combine (aggregate)
multiple network connections in parallel to increase throughput beyond what a single connection could
sustain, and provide redundancy if a link fails.
Libvirt
Libvirt is an open source API, daemon, and management tool for managing platform virtualization. It
can be used to manage virtualization technologies, such as Linux KVM. These APIs are widely used in
the orchestration layer of hypervisors in the development of a cloud-based solution. See KVM and
Hypervisor.
MLAG
Multi-chassis link aggregation group. MLAG is an evolution of 802.3ad that allows the bundled ports to
be distributed to two chassis uplinks for chassis-level redundancy. See LAG.
Tenant
The OpenStack compute system (Nova) is designed to be used by many different cloud computing
consumers or customers acting as tenants on a shared system, using role-based access assignments.
Tenants are isolated resource containers forming the principal organizational structure within the
compute service. Tenants consist of a separate VLAN, volumes, instances, images, keys, and users.
VM
Virtual machine. A VM is a software-based emulation of a computer. Virtual machines act as if they have
the computer architecture and functions of a physical computer. Some VMs emulate a complete
system platform with a full operating system; others are designed to run only a particular program.
VR
Virtual router. A virtual router is a software-based routing framework that allows a host machine to act
as a typical hardware router over a local area network. A virtual router can enable a computer/server to
have the abilities of an actual physical router by performing network and packet routing functionality of
a router.