Citrix
Home > XenServer 5.5.0 > XenServer Administrator's Guide
The software enforces additional constraints when joining a server to a pool. In particular:

it is not a member of an existing resource pool
it has no shared storage configured
there are no running or suspended VMs on the XenServer host which is joining
there are no active operations on the VMs in progress, such as one shutting down

You must also check that the clock of the host joining the pool is synchronized to the same time as the pool master (for example, by using NTP), that its management interface is not bonded (you can configure this once the host has successfully joined the pool), and that its management IP address is static (either configured on the host itself or by using an appropriate configuration on your DHCP server).
XenServer hosts in resource pools may contain different numbers of physical network interfaces and have local storage repositories of varying size. In practice, it is often difficult to obtain multiple servers with exactly the same CPUs, so minor variations are permitted. If you are sure that it is acceptable in your environment for hosts with varying CPUs to be part of the same resource pool, then the pool joining operation can be forced by passing the --force parameter.
Note
The requirement for a XenServer host to have a static IP address to be part of a resource pool also applies to servers providing shared NFS or iSCSI storage for the pool.
Although not a strict technical requirement for creating a resource pool, the advantages of pools (for example, the ability to dynamically choose on which XenServer host to run a VM and to dynamically move a VM between XenServer hosts) are only available if the pool has one or more shared storage repositories. If possible, postpone creating a pool of XenServer hosts until shared storage is available. Once shared storage has been added, Citrix recommends that you move existing VMs whose disks are in local storage into shared storage. This can be done using the xe vm-copy command or XenCenter.
Networking information is partially inherited by the joining host: the structural details of NICs, VLANs and bonded interfaces are all inherited, but policy information is not. This policy information, which must be reconfigured, includes:

the IP addresses of management NICs, which are preserved from the original configuration
the location of the management interface, which remains the same as the original configuration. For example, if the other pool hosts have their management interface on a bonded interface, then the joining host must be explicitly migrated to the bond once it has joined. See To add NIC bonds to the pool master and other hosts for details on how to migrate the management interface to a bond.
dedicated storage NICs, which must be re-assigned to the joining host from XenCenter or the CLI, and the PBDs re-plugged to route the traffic accordingly. This is because IP addresses are not assigned as part of the pool join operation, and the storage NIC is not useful without this configured correctly. See Section 4.2.6, Configuring a dedicated storage NIC for details on how to dedicate a storage NIC from the CLI.
To join XenServer hosts host1 and host2 into a resource pool using the CLI
1.
Command XenServer host host2 to join the pool on XenServer host host1 by issuing the command:
xe pool-join master-address=<host1> master-username=<root> \
master-password=<password>
The master-address must be set to the fully-qualified domain name of XenServer host host1 and the password must be the administrator password set when XenServer host host1 was installed.
2.
Name the new pool:
xe pool-param-set name-label=<"New Pool"> uuid=<pool_uuid>
1.
Create the shared storage repository:
xe sr-create content-type=user type=nfs name-label=<"Example SR"> shared=true \
device-config:server=<server> \
device-config:serverpath=<path>
The device-config:server refers to the hostname of the NFS server and device-config:serverpath refers to the path on the NFS server. Since shared is set to true, the shared storage will be automatically connected to every XenServer host in the pool and any XenServer hosts that subsequently join will also be connected to the storage. The UUID of the created storage repository will be printed on the screen.
2.
Find the UUID of the pool:
xe pool-list
3.
Set the shared storage as the pool-wide default with the command
xe pool-param-set uuid=<pool_uuid> default-SR=<sr_uuid>
Since the shared storage has been set as the pool-wide default, all future VMs will have their disks created on shared storage by default. See Chapter 3, Storage for information about creating other types of shared storage.
1.
Use the sr-list command to find the UUID of your shared storage:
xe sr-list
2.
Create the VM on the shared storage:
xe vm-install template="Debian Etch 4.0" new-name-label=<etch> \
sr_uuid=<shared_storage_uuid>
When the command completes, the Debian VM will be ready to start.
3.
Start the VM:
xe vm-start vm=<etch>
The master will choose a XenServer host from the pool to start the VM. If the on parameter is provided, the VM will start on the specified XenServer host. If the requested XenServer host is unable to start the VM, the command will fail. To request that a VM is always started on a particular XenServer host, set the affinity parameter of the VM to the UUID of the desired XenServer host using the xe vm-param-set command. Once set, the system will start the VM there if it can; if it cannot, it will default to choosing from the set of possible XenServer hosts.
4.
You can use XenMotion to move the Debian VM to another XenServer host with the command
xe vm-migrate vm=<etch> host=<host_name> --live
XenMotion keeps the VM running during this process to minimize downtime.
Note
When a VM is migrated, the domain on the original hosting server is destroyed and the memory that VM used is zeroed out before Xen makes it available to new VMs. This ensures that there is no information leak from old VMs to new ones. As a consequence, if you send multiple near-simultaneous commands to migrate a number of VMs while near the memory limit of a server (for example, a set of VMs consuming 3GB migrated to a server with 4GB of physical memory), the memory of an old domain might not be scrubbed before a migration is attempted, causing the migration to fail with a HOST_NOT_ENOUGH_FREE_MEMORY error. Inserting a delay between migrations should allow Xen the opportunity to successfully scrub the memory and return it to general use.
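A minimal sketch of the suggested workaround, serializing migrations with a pause between them. The `migrate` callable is a placeholder standing in for issuing `xe vm-migrate ... --live`, not a real API:

```python
import time

def migrate_with_delay(vms, migrate, delay_seconds=10):
    """Issue live migrations one at a time, sleeping between them so
    that Xen has a chance to scrub and reclaim the memory of each
    destroyed domain before the next migration needs it."""
    for vm in vms:
        migrate(vm)                 # stand-in for `xe vm-migrate vm=... --live`
        time.sleep(delay_seconds)   # give Xen time to scrub the old domain
```

The appropriate delay depends on how much memory each domain used; a few seconds per gigabyte is a reasonable starting assumption.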
1.
Find the UUID of the host using the host-list command:
xe host-list
2.
Eject the host from the pool:
xe pool-eject host-uuid=<uuid>
The XenServer host will be ejected and left in a freshly-installed state.
Warning
Do not eject a host from a resource pool if it contains important data stored on its local disks. All of the data will be erased upon ejection from the pool. If you wish to preserve this data, copy the VM to shared storage on the pool first using XenCenter, or the xe vm-copy CLI command.
When a XenServer host containing locally stored VMs is ejected from a pool, those VMs will still be present in the pool database and visible to the other XenServer hosts. They will not start until the virtual disks associated with them have been changed to point at shared storage which can be seen by other XenServer hosts in the pool, or simply removed. It is for this reason that you are strongly advised to move any local storage to shared storage upon joining a pool, so that individual XenServer hosts can be ejected (or physically fail) without loss of data.
Note
XenServer HA is only available with a Citrix Essentials for XenServer license. To learn more about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.
2.7.1. HA Overview
When HA is enabled, XenServer continually monitors the health of the hosts in a pool. The HA mechanism automatically moves protected VMs to a healthy host if the current VM host fails. Additionally, if the host that fails is the master, HA selects another host to take over the master role automatically, meaning that you can continue to manage the XenServer pool.
To absolutely guarantee that a host is unreachable, a resource pool configured for high-availability uses several heartbeat mechanisms to regularly check up on hosts. These heartbeats go through both the storage interfaces (to the Heartbeat SR) and the networking interfaces (over the management interfaces). Both of these heartbeat routes can be multi-homed for additional resilience to prevent false positives.
XenServer dynamically maintains a failover plan for what to do if a set of hosts in a pool fail at any given time. An important concept to understand is the host failures to tolerate value, which is defined as part of the HA configuration. This determines the number of failures that is allowed without any loss of service. For example, if a resource pool consisted of 16 hosts, and the tolerated failures is set to 3, the pool calculates a failover plan that allows for any 3 hosts to fail and still be able to restart VMs on other hosts. If a plan cannot be found, then the pool is considered to be overcommitted. The plan is dynamically recalculated based on VM lifecycle operations and movement. Alerts are sent (either through XenCenter or e-mail) if changes (for example the addition of new VMs to the pool) cause your pool to become overcommitted.
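The failover-plan check described above can be sketched as a memory bin-packing problem. The greedy first-fit below is a simplification for illustration only, not the real HA planner, and all the data shapes are invented:

```python
from itertools import combinations

def plan_exists(hosts, vms, failures):
    """Return True if, for every possible set of `failures` failed
    hosts, the protected VMs running on them could be restarted on
    the surviving hosts' free memory (greedy first-fit, largest VM
    first). hosts: {name: {"free_mem": MB}}; vms: {name: {"mem": MB,
    "host": name}}."""
    for failed in combinations(hosts, failures):
        survivors = {h: hosts[h]["free_mem"] for h in hosts if h not in failed}
        displaced = [vms[v]["mem"] for v in vms if vms[v]["host"] in failed]
        for need in sorted(displaced, reverse=True):
            for h in sorted(survivors, key=survivors.get, reverse=True):
                if survivors[h] >= need:
                    survivors[h] -= need   # restart this VM here
                    break
            else:
                return False               # a displaced VM fits nowhere
    return True
```

With three hosts each having 4GB free and two 3GB VMs, a plan exists for one failure but not for two, so the pool would be overcommitted at failures-to-tolerate of 2.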
2.7.1.1. Overcommitting
A pool is overcommitted if the VMs that are currently running could not be restarted elsewhere following a user-defined number of host failures.
This would happen if there was not enough free memory across the pool to run those VMs following failure. However, there are also more subtle changes which can make HA guarantees unsustainable: changes to VBDs and networks can affect which VMs may be restarted on which hosts. Currently it is not possible for XenServer to check all actions before they occur and determine if they will cause violation of HA demands. However, an asynchronous notification is sent if HA becomes unsustainable.
2.7.1.2. Overcommitment Warning
If you attempt to start or resume a VM and that action causes the pool to be overcommitted, a warning alert is raised. This warning is displayed in XenCenter and is also available as a message instance through the Xen API. The message may also be sent to an email address if configured. You will then be allowed to cancel the operation, or proceed anyway. Proceeding causes the pool to become overcommitted. The amount of memory used by VMs of different priorities is displayed at the pool and host levels.
2.7.1.3. Host Fencing
If a server failure occurs, such as the loss of network connectivity or a problem with the control stack, the XenServer host self-fences to ensure that the VMs are not running on two servers simultaneously. When a fence action is taken, the server immediately and abruptly restarts, causing all VMs running on it to be stopped. The other servers will detect that the VMs are no longer running and the VMs will be restarted according to the restart priorities assigned to them. The fenced server will enter a reboot sequence, and when it has restarted it will try to re-join the resource pool.
Warning
Should the IP address of a server change while HA is enabled, HA will assume that the host's network has failed, and will probably fence the host and leave it in an unbootable state. To remedy this situation, disable HA using the host-emergency-ha-disable command, reset the pool master using pool-emergency-reset-master, and then re-enable HA.
For a VM to be protected by the HA feature, it must be agile. This means that:

it must have its virtual disks on shared storage (any type of shared storage may be used; the iSCSI or Fibre Channel LUN is only required for the storage heartbeat and can be used for virtual disk storage if you prefer, but this is not necessary)
it must not have a connection to a local DVD drive configured
it should have its virtual network interfaces on pool-wide networks.

Citrix strongly recommends the use of a bonded management interface on the servers in the pool if HA is enabled, and multipathed storage for the heartbeat SR.
If you create VLANs and bonded interfaces from the CLI, then they may not be plugged in and active despite being created. In this situation, a VM can appear to be not agile, and cannot be protected by HA. If this occurs, use the CLI pif-plug command to bring the VLAN and bond PIFs up so that the VM can become agile. You can also determine precisely why a VM is not agile by using the xe diagnostic-vm-status CLI command to analyze its placement constraints, and take remedial action if required.
The restart priorities determine the order in which VMs are restarted when a failure occurs. In a given configuration where a number of server failures greater than zero can be tolerated (as indicated in the HA panel in the GUI, or by the ha-plan-exists-for field on the pool object on the CLI), the VMs that have restart priorities 1, 2 or 3 are guaranteed to be restarted given the stated number of server failures. VMs with a best-effort priority setting are not part of the failover plan and are not guaranteed to be kept running, since capacity is not reserved for them. If the pool experiences server failures and enters a state where the number of tolerable failures drops to zero, the protected VMs will no longer be guaranteed to be restarted. If this condition is reached, a system alert will be generated. In this case, should an additional failure occur, all VMs that have a restart priority set will behave according to the best-effort behavior.
If a protected VM cannot be restarted at the time of a server failure (for example, if the pool was overcommitted when the failure occurred), further attempts to start this VM will be made as the state of the pool changes. This means that if extra capacity becomes available in a pool (if you shut down a non-essential VM, or add an additional server, for example), a fresh attempt to restart the protected VMs will be made, which may now succeed.
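The priority ordering described above can be sketched in a few lines. This is an illustration of the ordering only, not the actual HA planner; the VM records are invented:

```python
def restart_order(failed_vms):
    """Order VMs for restart after a failure: protected priorities
    1-3 first (priority 1 restarted first), best-effort last, since
    no capacity is reserved for best-effort VMs."""
    rank = {"1": 0, "2": 1, "3": 2, "best-effort": 3}
    return sorted(failed_vms, key=lambda vm: rank[vm["ha_restart_priority"]])

vms = [{"name": "batch",  "ha_restart_priority": "best-effort"},
       {"name": "web",    "ha_restart_priority": "2"},
       {"name": "db",     "ha_restart_priority": "1"}]
```

Sorting these example VMs restarts db first, then web, with the best-effort batch VM attempted last.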
Note
No running VM will ever be stopped or migrated in order to free resources for a VM with always-run=true to be restarted.
Warning
When HA is enabled, some operations that would compromise the plan for restarting VMs may be disabled, such as removing a server from a pool. To perform these operations, HA can be temporarily disabled, or alternatively, VMs protected by HA made unprotected.
1.
Verify that you have a compatible Storage Repository (SR) attached to your pool. iSCSI or Fibre Channel are compatible SR types. Please refer to the reference guide for details on how to configure such a storage repository using the CLI.
2.
For each VM you wish to protect, set a restart priority. You can do this as follows:
xe vm-param-set uuid=<vm_uuid> ha-restart-priority=<1> ha-always-run=true
3.
Enable HA on the pool:
xe pool-ha-enable heartbeat-sr-uuid=<sr_uuid>
4.
Run the pool-ha-compute-max-host-failures-to-tolerate command:
xe pool-ha-compute-max-host-failures-to-tolerate
The number of failures to tolerate determines when an alert is sent: the system will recompute a failover plan as the state of the pool changes and with this computation the system identifies the capacity of the pool and how many more failures are possible without loss of the liveness guarantee for protected VMs. A system alert is generated when this computed value falls below the specified value for ha-host-failures-to-tolerate.
5.
Specify the number of failures to tolerate parameter. This should be less than or equal to the computed value:
xe pool-param-set ha-host-failures-to-tolerate=<2>
xe host-emergency-ha-disable --force
If the host was the pool master, then it should start up as normal with HA disabled. Slaves should reconnect and automatically disable HA. If the host was a pool slave and cannot contact the master, then it may be necessary to force the host to reboot as a pool master (xe pool-emergency-transition-to-master) or to tell it where the new master is (xe pool-emergency-reset-master):
xe pool-emergency-transition-to-master uuid=<host_uuid>
xe pool-emergency-reset-master master-address=<new_master_hostname>
When all hosts have successfully restarted, re-enable HA:
xe pool-ha-enable heartbeat-sr-uuid=<sr_uuid>
xe host-disable host=<host_name>
xe host-evacuate uuid=<host_uuid>
xe host-shutdown host=<host_name>
Note
If you shut down a VM from within the guest, and the VM is protected, it is automatically restarted under the HA failure conditions. This helps ensure that operator error (or an errant program that mistakenly shuts down the VM) does not result in a protected VM being left shut down accidentally. If you want to shut this VM down, disable its HA protection first.
Note
The servers can be in different time-zones, and it is the UTC time that is compared. To ensure synchronization is correct, you may choose to use the same NTP servers for your XenServer pool and the Active Directory server.
When configuring Active Directory authentication for a XenServer host, the same DNS servers should be used for both the Active Directory server (and have appropriate configuration to allow correct interoperability) and the XenServer host (note that in some configurations, the Active Directory server may provide the DNS itself). This can be achieved either by using DHCP to provide the IP address and a list of DNS servers to the XenServer host, or by setting values in the PIF objects or using the installer if a manual static configuration is used.
Citrix recommends enabling DHCP to broadcast host names. In particular, the host names localhost or linux should not be assigned to hosts. Host names must consist solely of no more than 156 alphanumeric characters, and may not be purely numeric.
Enabling external authentication on a pool
External authentication using Active Directory can be configured using either XenCenter or the CLI using the command below.
xe pool-enable-external-auth auth-type=AD \
service-name=<full-qualified-domain> \
config:user=<username> \
config:pass=<password>
The user specified needs to have Add/remove computer objects or workstations privileges, which is the default for domain administrators.
Note
If you are not using DHCP on the network that Active Directory and your XenServer hosts use, you can use these two approaches to set up your DNS:
1.
Set the DNS server in the static PIF configuration:
xe pif-reconfigure-ip mode=static dns=<dnshost>
2.
Manually set the management interface to use a PIF that is on the same network as your DNS server:
xe host-management-reconfigure pif-uuid=<pif_in_the_dns_subnetwork>
Note
External authentication is a per-host property. However, Citrix advises that you enable and disable this on a per-pool basis; in this case XenServer will deal with any failures that occur when enabling authentication on a particular host and perform any roll-back of changes that may be required, ensuring that a consistent configuration is used across the pool.
Use the host-param-list command to inspect properties of a host and to determine the status of external authentication by checking the values of the relevant fields.
Disabling external authentication
Use XenCenter to disable Active Directory authentication, or the following xe command:
xe pool-disable-external-auth
To allow a user access to your XenServer host, you must add a subject for that user or a group that they are in. (Transitive group memberships are also checked in the normal way, for example: adding a subject for group A, where group A contains group B and user 1 is a member of group B, would permit access to user 1.) If you wish to manage user permissions in Active Directory, you could create a single group that you then add and remove users to/from; alternatively, you can add and remove individual users from XenServer, or a combination of users and groups, as appropriate for your authentication requirements. The subject list can be managed from XenCenter or using the CLI as described below.
When authenticating a user, the credentials are first checked against the local root account, allowing you to recover a system whose AD server has failed. If the credentials (that is, username then password) do not match, then an authentication request is made to the AD server; if this is successful, the user's information will be retrieved and validated against the local subject list, otherwise access will be denied. Validation against the subject list will succeed if the user or a group in the transitive group membership of the user is in the subject list.
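The login sequence described above can be sketched as follows. The `check_root` and `check_ad` callables are stand-ins for the local password check and the AD lookup (which here returns the user's transitive groups, or None if the credentials are rejected); none of these are real XenServer APIs:

```python
def authenticate(username, password, check_root, check_ad, subject_list):
    """Sketch of the XenServer login flow: local root first, then AD
    credentials validated against the local subject list."""
    if username == "root" and check_root(password):
        return True        # local root works even when the AD server is down
    groups = check_ad(username, password)
    if groups is None:
        return False       # AD rejected the credentials
    # access requires the user, or a group in their transitive
    # membership, to appear in the subject list
    return username in subject_list or any(g in subject_list for g in groups)
```

Using the group A/group B example above: if only "group A" is a subject, user 1 is granted access because group A is in their transitive membership.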
Allowing a user access to XenServer using the CLI
To add an AD subject to XenServer:
xe subject-add subject-name=<entity name>
The entity name should be the name of the user or group to which you want to grant access. You may optionally include the domain of the entity (for example, '<xendt\user1>' as opposed to '<user1>'), although the behavior will be the same unless disambiguation is required.
Removing access for a user using the CLI
1.
Identify the subject identifier for the subject you wish to revoke access for. This would be the user or the group containing the user (removing a group would remove access for all users in that group, provided they are not also specified in the subject list). You can do this using the subject-list command:
xe subject-list
You may wish to apply a filter to the list, for example to get the subject identifier for a user named user1 in the testad domain, you could use the following command:
xe subject-list other-config:subject-name='<domain\user>'
2.
Remove the user using the subject-remove command, passing in the subject identifier you learned in the previous step:
xe subject-remove subject-identifier=<subject identifier>
3.
You may wish to terminate any current session this user has already authenticated. See Terminating all authenticated sessions using xe and Terminating individual user sessions using xe for more information about terminating sessions. If you do not terminate sessions, the users whose permissions have been revoked may be able to continue to access the system until they log out.
Terminating all authenticated sessions using xe
xe subject-list
xe session-subject-identifier-logout-all
Terminating individual user sessions using xe
1.
Determine the subject identifier whose session you wish to log out. Use either the session-subject-identifier-list or subject-list xe commands to find this (the first shows users who have sessions, the second shows all users but can be filtered, for example, using a command like xe subject-list other-config:subject-name=xendt\\user1; depending on your shell you may need a double backslash, as shown).
2.
Use the session-subject-identifier-logout command, passing the subject identifier you have determined in the previous step as a parameter, for example:
xe session-subject-identifier-logout subject-identifier=<subject-id>
Note
Leaving the domain will not cause the host objects to be removed from the AD database. See this knowledge base article for more information about this and how to remove the disabled host entries.
Chapter 3. Storage
Table of Contents
3.1. Storage Overview
3.1.1. Storage Repositories (SRs)
3.1.2. Virtual Disk Images (VDIs)
3.1.3. Physical Block Devices (PBDs)
3.1.4. Virtual Block Devices (VBDs)
3.1.5. Summary of Storage objects
3.1.6. Virtual Disk Data Formats
3.2. Storage configuration
3.2.1. Creating Storage Repositories
3.2.2. Upgrading LVM storage from XenServer 5.0 or earlier
3.2.3. LVM performance considerations
3.2.4. Converting between VDI formats
3.2.5. Probing an SR
3.2.6. Storage Multipathing
3.3. Storage Repository Types
3.3.1. Local LVM
3.3.2. Local EXT3 VHD
3.3.3. udev
3.3.4. ISO
3.3.5. EqualLogic
3.3.6. NetApp
3.3.7. Software iSCSI Support
3.3.8. Managing Hardware Host Bus Adapters (HBAs)
3.3.9. LVM over iSCSI
3.3.10. NFS VHD
3.3.11. LVM over hardware HBA
3.3.12. Citrix StorageLink Gateway (CSLG) SRs
3.4. Managing Storage Repositories
3.4.1. Destroying or forgetting an SR
3.4.2. Introducing an SR
3.4.3. Resizing an SR
3.4.4. Converting local Fibre Channel SRs to shared SRs
3.4.5. Moving Virtual Disk Images (VDIs) between SRs
3.4.6. Adjusting the disk IO scheduler
3.5. Virtual disk QoS settings
This chapter discusses the framework for storage abstractions. It describes the way physical storage hardware of various kinds is mapped to VMs, and the software objects used by the XenServer host API to perform storage-related tasks. Detailed sections on each of the supported storage types include procedures for creating storage for VMs using the CLI, with type-specific device configuration options, generating snapshots for backup purposes and some best practices for managing storage in XenServer host environments. Finally, the virtual disk QoS (quality of service) settings are described.
VBDs allow for the tuning of parameters regarding QoS (quality of service), statistics, and the bootability of a given VDI. CLI operations relating to VBDs are described in Section 8.4.19, VBD commands.
LUN per VDI: LUNs are directly mapped to VMs as VDIs by SR types that provide an array-specific plugin (NetApp, EqualLogic or StorageLink type SRs). The array storage abstraction therefore matches the VDI storage abstraction for environments that manage storage provisioning at an array level.
3.1.6.1. VHD-based VDIs
VHD files may be chained, allowing two VDIs to share common data. In cases where a VHD-backed VM is cloned, the resulting VMs share the common on-disk data at the time of cloning. Each proceeds to make its own changes in an isolated copy-on-write (CoW) version of the VDI. This feature allows VHD-based VMs to be quickly cloned from templates, facilitating very fast provisioning and deployment of new VMs.
The VHD format used by LVM-based and file-based SR types in XenServer uses sparse provisioning. The image file is automatically extended in 2MB chunks as the VM writes data into the disk. For file-based VHD, this has the considerable benefit that VM image files take up only as much space on the physical storage as required. With LVM-based VHD the underlying logical volume container must be sized to the virtual size of the VDI; however, unused space on the underlying CoW instance disk is reclaimed when a snapshot or clone occurs. The difference between the two behaviours can be characterised in the following way:

For LVM-based VHDs, the difference disk nodes within the chain consume only as much data as has been written to disk, but the leaf nodes (VDI clones) remain fully inflated to the virtual size of the disk. Snapshot leaf nodes (VDI snapshots) remain deflated when not in use and can be attached read-only to preserve the deflated allocation. Snapshot nodes that are attached read-write will be fully inflated on attach, and deflated on detach.
For file-based VHDs, all nodes consume only as much data as has been written, and the leaf node files grow to accommodate data as it is actively written. If a 100GB VDI is allocated for a new VM and an OS is installed, the VDI file will physically be only the size of the OS data that has been written to the disk, plus some minor metadata overhead.
When cloning VMs based on a single VHD template, each child VM forms a chain where new changes are written to the new VM, and old blocks are directly read from the parent template. If the new VM was converted into a further template and more VMs cloned, then the resulting chain will result in degraded performance. XenServer supports a maximum chain length of 30, but it is generally not recommended that you approach this limit without good reason. If in doubt, you can always "copy" the VM using XenServer or the vm-copy command, which resets the chain length back to 0.
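The chained read described above can be sketched as a lookup through layered copy-on-write maps. This is a simplified illustration (real VHD chains work on 2MB blocks with allocation bitmaps); the block contents are invented:

```python
def read_block(chain, block):
    """Resolve a read through a VHD-style chain: walk from the leaf
    (the VM's own copy-on-write layer) back toward the base template
    and return the first version of the block found."""
    for layer in chain:            # chain[0] is the leaf, chain[-1] the template
        if block in layer:
            return layer[block]
    return b"\x00"                 # blocks never written read back as zeros

template = {0: b"base0", 1: b"base1"}   # shared, read-only parent
clone = {1: b"clone1"}                  # the clone has only rewritten block 1
```

A longer chain means more layers to walk on every miss, which is why performance degrades as the chain grows and why copying the VM (collapsing the chain) restores it.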
If you have critical VMs running on the master server of the pool and experience occasional slow IO due to this process, you can take steps to mitigate this:

Migrate the VM to a host other than the SR master
Set the disk IO priority to a higher level, and adjust the scheduler. See Section 3.5, Virtual disk QoS settings for more information.
This section covers creating storage repository types and making them available to a XenServer host. The examples provided pertain to storage configuration using the CLI, which provides the greatest flexibility. See the XenCenter Help for details on using the New Storage Repository wizard.
Note
Local SRs of type lvm and ext can only be created using the xe CLI. After creation, all SR types can be managed by either XenCenter or the xe CLI.
There are two basic steps involved in creating a new storage repository for use on a XenServer host using the CLI:
1.
Probe the SR type to determine values for any required parameters.
2.
Create the SR to initialize the SR object and associated PBD objects, plug the PBDs, and activate the SR.
These steps differ in detail depending on the type of SR being created. In all examples the sr-create command returns the UUID of the created SR if successful.
SRs can also be destroyed when no longer in use to free up the physical device, or forgotten to detach the SR from one XenServer host and attach it to another. See Section 3.4.1, Destroying or forgetting an SR for details.
Note
Upgrade is a one-way operation, so Citrix recommends only performing the upgrade when you are certain the storage will no longer need to be attached to a pool running an older software version.
Note
Non-transportable snapshots using the default Windows VSS provider will work on any type of VDI.
Warning
Do not try to snapshot a VM that has type=raw disks attached. This could result in a partial snapshot being created. In this situation, you can identify the orphan snapshot VDIs by checking the snapshot-of field and then deleting them.
3.2.3.1. VDI types
In general, VHD format VDIs will be created. You can opt to use raw at the time you create the VDI; this can only be done using the xe CLI. After a software upgrade from a previous XenServer version, existing data will be preserved as backwards-compatible raw VDIs, but these are special-cased so that snapshots can be taken of them once you have allowed this by upgrading the SR. Once the SR has been upgraded and the first snapshot has been taken, you will be accessing the data through a VHD format VDI.
To check if an SR has been upgraded, verify that its sm-config:use_vhd key is true. To check if a VDI was created with type=raw, check its sm-config map. The sr-param-list and vdi-param-list xe commands can be used respectively for this purpose.
3.2.3.2. Creating a raw virtual disk using the xe CLI
1.
Run the following command to create a VDI given the UUID of the SR you want to place the virtual disk in:
xe vdi-create sr-uuid=<sr-uuid> type=user virtual-size=<virtual-size> \
name-label=<VDI name>
2.
Attach the new virtual disk to a VM and use your normal disk tools within the VM to partition and format, or otherwise make use of the new disk. You can use the vbd-create command to create a new VBD to map the virtual disk into your VM.
3.2.5. Probing an SR
The sr-probe command can be used in two ways:
1.
To identify unknown parameters for use in creating an SR.
2.
To return a list of existing SRs.
In both cases sr-probe works by specifying an SR type and one or more device-config parameters for that SR type. When an incomplete set of parameters is supplied, the sr-probe command returns an error message indicating parameters are missing and the possible options for the missing parameters. When a complete set of parameters is supplied, a list of existing SRs is returned. All sr-probe output is returned as XML.
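The two behaviours of sr-probe can be sketched as a parameter-completeness check. The dict-based return values are invented for illustration (the real command returns XML, as the examples below show), and only the lvmoiscsi parameter list is taken from this document:

```python
# required lvmoiscsi parameters, in dependency order (from the examples below)
REQUIRED = {"lvmoiscsi": ["target", "targetIQN", "SCSIid"]}

def sr_probe(sr_type, device_config):
    """Mimic sr-probe: with an incomplete parameter set, report the
    first missing device-config parameter; with a complete set,
    return (a placeholder for) the list of existing SRs."""
    for param in REQUIRED[sr_type]:
        if param not in device_config:
            return {"missing": param}
    return {"srs": []}
```

Probing with only a target thus reports that targetIQN is missing, mirroring the SR_BACKEND_FAILURE_96 example that follows.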
For example, a known iSCSI target can be probed by specifying its name or IP address, and the set of IQNs available on the target will be returned:
xe sr-probe type=lvmoiscsi device-config:target=<192.168.1.10>
Error code: SR_BACKEND_FAILURE_96
Error parameters: , The request is missing or has an incorrect target IQN parameter, \
<?xml version="1.0" ?>
<iscsi-target-iqns>
    <TGT>
        <Index>
            0
        </Index>
        <IPAddress>
            192.168.1.10
        </IPAddress>
        <TargetIQN>
            iqn.192.168.1.10:filer1
        </TargetIQN>
    </TGT>
</iscsi-target-iqns>
Probing the same target again and specifying both the name/IP address and desired IQN returns the set of SCSIids (LUNs) available on the target/IQN.
xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10 \
device-config:targetIQN=iqn.192.168.1.10:filer1
Error code: SR_BACKEND_FAILURE_107
Error parameters: , The SCSIid parameter is missing or incorrect, \
<?xml version="1.0" ?>
<iscsi-target>
    <LUN>
        <vendor>
            IET
        </vendor>
        <LUNid>
            0
        </LUNid>
        <size>
            42949672960
        </size>
        <SCSIid>
            149455400000000000000000002000000b70200000f000000
        </SCSIid>
    </LUN>
</iscsi-target>
Probing the same target and supplying all three parameters will return a list of SRs that exist on the LUN, if any.
xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10 \
device-config:targetIQN=192.168.1.10:filer1 \
device-config:SCSIid=149455400000000000000000002000000b70200000f000000
<?xml version="1.0" ?>
<SRlist>
    <SR>
        <UUID>
            3f6e1ebd86870315f9d3b02ab3adc4a6
        </UUID>
        <Devlist>
            /dev/disk/by-id/scsi-149455400000000000000000002000000b70200000f000000
        </Devlist>
    </SR>
</SRlist>
The following device-config parameters can be probed for each SR type:

SR type      Device-config parameter    Can be probed?    Required for sr-create?
lvmoiscsi    target                     No                Yes
             chapuser                   No                No
             chappassword               No                No
             targetIQN                  Yes               Yes
             SCSIid                     Yes               Yes
lvmohba      SCSIid                     Yes               Yes
netapp       target                     No                Yes
             username                   No                Yes
             password                   No                Yes
             chapuser                   No                No
             chappassword               No                No
             aggregate                  No [a]            Yes
             FlexVols                   No                No
             allocation                 No                No
             asis                       No                No
nfs          server                     No                Yes
             serverpath                 Yes               Yes
lvm          device                     No                Yes
ext          device                     No                Yes
equallogic   target                     No                Yes
             username                   No                Yes
             password                   No                Yes
             chapuser                   No                No
             chappassword               No                No
             storagepool                No [b]            Yes
cslg         target                     No                Yes
             storageSystemId            Yes               Yes
             storagePoolId              Yes               Yes
             username                   No                No [c]
             password                   No                No [c]
             cslport                    No                No [c]
             chapuser                   No                No [c]
             chappassword               No                No [c]
             provision-type             Yes               No
             protocol                   Yes               No
             provision-options          Yes               No
             raid-type                  Yes               No

[a] Aggregate probing is only possible at sr-create time. It needs to be done there so that the aggregate can be specified at the point that the SR is created.
[b] Storage pool probing is only possible at sr-create time. It needs to be done there so that the storage pool can be specified at the point that the SR is created.
[c] If the username, password, or port configuration of the StorageLink service are changed from the default value then the appropriate parameter and value must be specified.
3.2.6. Storage Multipathing
Dynamic multipathing support is available for Fibre Channel and iSCSI storage backends. By default, it uses round-robin mode load balancing, so both routes have active traffic on them during normal operation. You can enable multipathing in XenCenter or on the xe CLI.
Caution
Before attempting to enable multipathing, verify that multiple targets are available on your storage server. For example, an iSCSI storage backend queried for sendtargets on a given portal should return multiple targets, as in the following example:

iscsiadm -m discovery --type sendtargets --portal 192.168.0.161
192.168.0.161:3260,1 iqn.strawberry:litchie
192.168.0.204:3260,2 iqn.strawberry:litchie
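As a quick sanity check, the sendtargets output can be parsed to confirm that a target is reachable through more than one portal before enabling multipathing. The sketch below is illustrative and not part of XenServer; the record format (`<ip>:<port>,<tpgt> <iqn>`) follows the iscsiadm output shown above.

```python
def parse_sendtargets(output):
    """Parse `iscsiadm -m discovery --type sendtargets` output into
    (portal, target-portal-group-tag, iqn) tuples."""
    records = []
    for line in output.strip().splitlines():
        portal_part, iqn = line.split(None, 1)   # "ip:port,tpgt" then the IQN
        portal, tpgt = portal_part.split(",")
        records.append((portal, int(tpgt), iqn))
    return records

example = """192.168.0.161:3260,1 iqn.strawberry:litchie
192.168.0.204:3260,2 iqn.strawberry:litchie"""

records = parse_sendtargets(example)
# Multipathing is only useful if the same IQN is reachable via multiple portals.
portals = {r[0] for r in records if r[2] == "iqn.strawberry:litchie"}
print(len(portals))  # 2
```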
To enable multipathing using the xe CLI:
1. Unplug all PBDs on the host:
xe pbd-unplug uuid=<pbd_uuid>
2. Set the host other-config:multipathing parameter:
xe host-param-set other-config:multipathing=true uuid=<host_uuid>
3. Set the host other-config:multipathhandle parameter to dmp:
xe host-param-set other-config:multipathhandle=dmp uuid=<host_uuid>
4. If there are existing SRs on the host running in single path mode but that have multiple paths:
Migrate or suspend any running guests with virtual disks in the affected SRs
Unplug and re-plug the PBD of any affected SRs to reconnect them using multipathing:
xe pbd-plug uuid=<pbd_uuid>
To disable multipathing, first unplug your PBDs, set the host other-config:multipathing parameter to false, and then replug your PBDs as described above. Do not modify the other-config:multipathhandle parameter, as this is done automatically.
Multipath support in XenServer is based on the device-mapper multipathd components. Activation and deactivation of multipath nodes is handled automatically by the Storage Manager API. Unlike the standard dm-multipath tools in Linux, device mapper nodes are not automatically created for all LUNs on the system; new device mapper nodes are provisioned only when LUNs are actively used by the storage management layer. It is therefore unnecessary to use any of the dm-multipath CLI tools to query or refresh DM table nodes in XenServer.
Should it be necessary to query the status of device-mapper tables manually, or to list active device mapper multipath nodes on the system, use the mpathutil utility:
mpathutil list
mpathutil status
Note
Due to incompatibilities with the integrated multipath management architecture, the standard dm-multipath CLI utility should not be used with XenServer. Please use the mpathutil CLI tool for querying the status of nodes on the host.
Note
Multipath support in EqualLogic arrays does not encompass storage I/O multipathing in the traditional sense of the term. Multipathing must be handled at the network/NIC bond level. Refer to the EqualLogic documentation for information about configuring network failover for EqualLogic SRs/LVMoISCSI SRs.
All XenServer SR types support VDI resize, fast cloning and snapshot. SRs based on the LVM SR type (local, iSCSI, or HBA) provide thin provisioning for snapshot and hidden parent nodes. The other SR types support full thin provisioning, including for virtual disks that are active.
Note
Automatic LVM metadata archiving is disabled by default. This does not prevent metadata recovery for LVM groups.
Warning
When VHD VDIs are not attached, for example in the case of a VDI snapshot, they are stored by default thinly-provisioned. Because of this it is imperative to ensure that there is sufficient disk space available for the VDI to become thickly provisioned when attempting to attach it. VDI clones, however, are thickly-provisioned.
The maximum supported VDI sizes are:

Storage type      Maximum VDI size
EXT3              2TB
LVM               2TB
Netapp            2TB
EqualLogic        15TB
ONTAP (NetApp)    12TB
Device-config parameters for lvm SRs:

Parameter Name    Description                                        Required?
device            device name on the local host to use for the SR   Yes

To create a local lvm SR on /dev/sdb, use the following command:

xe sr-create host-uuid=<valid_uuid> content-type=user \
name-label=<"Example Local LVM SR"> shared=false \
device-config:device=/dev/sdb type=lvm
Device-config parameters for ext SRs:

Parameter Name    Description                                        Required?
device            device name on the local host to use for the SR   Yes

To create a local ext SR on /dev/sdb, use the following command:

xe sr-create host-uuid=<valid_uuid> content-type=user \
name-label=<"Example Local EXT3 SR"> shared=false \
device-config:device=/dev/sdb type=ext
3.3.3. udev
The udev type represents devices plugged in using the udev device manager as VDIs.
XenServer has two SRs of type udev that represent removable storage. One is for the CD or DVD disk in the physical CD or DVD-ROM drive of the XenServer host. The other is for a USB device plugged into a USB port of the XenServer host. VDIs that represent the media come and go as disks or USB sticks are inserted and removed.
3.3.4. ISO
The ISO type handles CD images stored as files in ISO format. This SR type is useful for creating shared ISO libraries.
3.3.5. EqualLogic
The EqualLogic SR type maps LUNs to VDIs on an EqualLogic array group, allowing for the use of fast snapshot and clone features on the array.
If you have access to an EqualLogic filer, you can configure a custom EqualLogic storage repository for VM storage on your XenServer deployment. This allows the use of the advanced features of this filer type. Virtual disks are stored on the filer using one LUN per virtual disk. Using this storage type will enable the thin provisioning, snapshot, and fast clone features of this filer.
Consider your storage requirements when deciding whether to use the specialized SR plugin or the generic LVM/iSCSI storage backend. By using the specialized plugin, XenServer will communicate with the filer to provision storage. Some arrays have a limitation of seven concurrent connections, which may limit the throughput of control operations. Using the plugin will, however, allow you to make use of the advanced array features, and will make backup and snapshot operations easier.
Warning
There are two types of administration accounts that can successfully access the EqualLogic SM plugin:
A group administration account which has access to and can manage the entire group and all storage pools.
A pool administrator account that can manage only the objects (SR and VDI snapshots) that are in the pool or pools assigned to the account.
Device-config parameters for EqualLogic SRs:

Parameter Name             Optional?
target                     no
username                   no
password                   no
storagepool                no
chapuser                   yes
chappassword               yes
allocation                 yes
snap-reserve-percentage    yes

To create a shared EqualLogic SR, use the following command:

xe sr-create host-uuid=<valid_uuid> content-type=user \
name-label=<"Example shared Equallogic SR"> \
shared=true device-config:target=<target_ip> \
device-config:username=<admin_username> \
device-config:password=<admin_password> \
device-config:storagepool=<my_storagepool> \
device-config:chapuser=<chapusername> \
device-config:chappassword=<chapuserpassword> \
device-config:allocation=<thick> \
type=equal
3.3.6. NetApp
The NetApp type maps LUNs to VDIs on a NetApp server, enabling the use of fast snapshot and clone features on the filer.
Note
NetApp and EqualLogic SRs require a Citrix Essentials for XenServer license. To learn more about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.
If you have access to Network Appliance (NetApp) storage with sufficient disk space, running a version of Data ONTAP 7G (version 7.0 or greater), you can configure a custom NetApp storage repository for VM storage on your XenServer deployment. The XenServer driver uses the ZAPI interface to the storage to create a group of FlexVols that correspond to an SR. VDIs are created as virtual LUNs on the storage, and attached to XenServer hosts using an iSCSI data path. There is a direct mapping between a VDI and a raw LUN that does not require any additional volume metadata. The NetApp SR is a managed volume and the VDIs are the LUNs within the volume. VM cloning uses the snapshotting and cloning capabilities of the storage for data efficiency and performance and to ensure compatibility with existing ONTAP management tools.
As with the iSCSI-based SR type, the NetApp driver also uses the built-in software initiator and its assigned host IQN, which can be modified by changing the value shown on the General tab when the storage repository is selected in XenCenter.
The easiest way to create NetApp SRs is to use XenCenter. See the XenCenter help for details. See Section 3.3.6.1, "Creating a shared NetApp SR over iSCSI" for an example of how to create them using the xe CLI.
FlexVols
NetApp uses FlexVols as the basic unit of manageable data. There are limitations that constrain the design of NetApp-based SRs. These are:
maximum number of FlexVols per filer
maximum number of LUNs per network port
maximum number of snapshots per FlexVol
Precise system limits vary per filer type; however, as a general guide, a FlexVol may contain up to 200 LUNs, and provides up to 255 snapshots. Because there is a one-to-one mapping of LUNs to VDIs, and because often a VM will have more than one VDI, the resource limitations of a single FlexVol can easily be reached. Also, the act of taking a snapshot includes snapshotting all the LUNs within a FlexVol, and the VM clone operation indirectly relies on snapshots in the background as well as the VDI snapshot operation for backup purposes.
There are two constraints to consider when mapping the virtual storage objects of the XenServer host to the physical storage. To maintain space efficiency it makes sense to limit the number of LUNs per FlexVol, yet at the other extreme, to avoid resource limitations a single LUN per FlexVol provides the most flexibility. However, because there is a vendor-imposed limit of 200 or 500 FlexVols per filer (depending on the NetApp model), this creates a limit of 200 or 500 VDIs per filer, and it is therefore important to select a suitable number of FlexVols taking these parameters into account.
Given these resource constraints, the mapping of virtual storage objects to the ONTAP storage system has been designed in the following manner. LUNs are distributed evenly across FlexVols, with the expectation of using VM UUIDs to opportunistically group LUNs attached to the same VM into the same FlexVol. This is a reasonable usage model that allows a snapshot of all the VDIs in a VM at one time, maximizing the efficiency of the snapshot operation.
An optional parameter you can set is the number of FlexVols assigned to the SR. You can use between 1 and 32 FlexVols; the default is 8. The trade-off in the number of FlexVols to the SR is that, for a greater number of FlexVols, the snapshot and clone operations become more efficient, because there are fewer VMs backed off the same FlexVol. The disadvantage is that more FlexVol resources are used for a single SR, where there is a typical system-wide limitation of 200 for some smaller filers.
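The trade-offs above can be made concrete with a small calculation. The sketch below estimates how many VDIs an SR can hold for a given number of FlexVols; the per-FlexVol limit of 200 LUNs and the per-filer limit of 200 FlexVols are the general guide figures quoted above, not exact values for any particular filer model.

```python
FLEXVOL_LUN_LIMIT = 200      # general guide: up to 200 LUNs per FlexVol
FILER_FLEXVOL_LIMIT = 200    # vendor limit: 200 (or 500) FlexVols per filer

def max_vdis_per_sr(flexvols_per_sr):
    """VDIs an SR can hold before hitting the per-FlexVol LUN limit."""
    if not 1 <= flexvols_per_sr <= 32:
        raise ValueError("an SR may use between 1 and 32 FlexVols")
    return flexvols_per_sr * FLEXVOL_LUN_LIMIT

def max_srs_per_filer(flexvols_per_sr):
    """How many such SRs fit within the filer-wide FlexVol limit."""
    return FILER_FLEXVOL_LIMIT // flexvols_per_sr

print(max_vdis_per_sr(8))    # default of 8 FlexVols -> 1600 VDIs per SR
print(max_srs_per_filer(8))  # 25 such SRs on a 200-FlexVol filer
```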
Aggregates
When creating a NetApp driver-based SR, you select an appropriate aggregate. The driver can be probed for non-traditional type aggregates, that is, newer-style aggregates that support FlexVols, and lists all aggregates available and the unused disk space on each.
Note
Aggregate probing is only possible at sr-create time so that the aggregate can be specified at the point that the SR is created, but is not probed by the sr-probe command.
Citrix strongly recommends that you configure an aggregate exclusively for use by XenServer storage, because space guarantees and allocation cannot be correctly managed if other applications are sharing the resource.
Thick or thin provisioning
When creating NetApp storage, you can also choose the type of space management used. By default, allocated space is thickly provisioned to ensure that VMs never run out of disk space and that all virtual allocation guarantees are fully enforced on the filer. Selecting thick provisioning ensures that whenever a VDI (LUN) is allocated on the filer, sufficient space is reserved to guarantee that it will never run out of space and consequently experience failed writes to disk. Due to the nature of the ONTAP FlexVol space provisioning algorithms, the best practice guidelines for the filer require that at least twice the LUN space is reserved to account for background snapshot data collection and to ensure that writes to disk are never blocked. In addition to the double disk space guarantee, ONTAP also requires some additional space reservation for management of unique blocks across snapshots. The guideline on this amount is 20% above the reserved space. The space guarantees afforded by thick provisioning will therefore reserve up to 2.4 times the requested virtual disk space.
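The 2.4 multiplier follows directly from the two guidelines above: double the LUN space, plus a further 20% on top of that reservation. A minimal sketch of the arithmetic:

```python
def thick_reservation(virtual_size_bytes, snapshot_factor=2.0, unique_block_overhead=0.20):
    """Space reserved on the filer for a thickly-provisioned VDI (LUN).

    snapshot_factor: at least twice the LUN space is reserved for
    background snapshot data (best-practice guideline).
    unique_block_overhead: a further 20% on top of that reservation for
    management of unique blocks across snapshots.
    """
    return virtual_size_bytes * snapshot_factor * (1 + unique_block_overhead)

# A 100 GiB virtual disk reserves up to 240 GiB on the filer (2.4x).
gib = 1024 ** 3
print(thick_reservation(100 * gib) / gib)
```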
The alternative allocation strategy is thin provisioning, which allows the administrator to present more storage space to the VMs connecting to the SR than is actually available on the SR. There are no space guarantees, and allocation of a LUN does not claim any data blocks in the FlexVol until the VM writes data. This might be appropriate for development and test environments where you might find it convenient to over-provision virtual disk space on the SR in the anticipation that VMs might be created and destroyed frequently without ever utilizing the full virtual allocated disk.
Warning
If you are using thin provisioning in production environments, take appropriate measures to ensure that you never run out of storage space. VMs attached to storage that is full will fail to write to disk, and in some cases may fail to read from disk, possibly rendering the VM unusable.
FAS Deduplication
FAS Deduplication is a NetApp technology for reclaiming redundant disk space. Newly-stored data objects are divided into small blocks, each block containing a digital signature, which is compared to all other signatures in the data volume. If an exact block match exists, the duplicate block is discarded and the disk space reclaimed. FAS Deduplication can be enabled on thin-provisioned NetApp-based SRs and operates according to the default filer FAS Deduplication parameters, typically every 24 hours. It must be enabled at the point the SR is created, and any custom FAS Deduplication configuration must be managed directly on the filer.
Access Control
Because FlexVol operations such as volume creation and volume snapshotting require administrator privileges on the filer itself, Citrix recommends that the XenServer host is provided with suitable administrator username and password credentials at configuration time. In situations where the XenServer host does not have full administrator rights to the filer, the filer administrator could perform an out-of-band preparation and provisioning of the filer and then introduce the SR to the XenServer host using XenCenter or the sr-introduce xe CLI command. Note, however, that operations such as VM cloning or snapshot generation will fail in this situation due to insufficient access privileges.
Licenses
You need to have an iSCSI license on the NetApp filer to use this storage repository type; for the generic plugins you need either an iSCSI or NFS license depending on the SR type being used.
Further information
For more information about NetApp technology, see the following links:
General information on NetApp products
Data ONTAP
FlexVol
FlexClone
RAID-DP
Snapshot
FilerView
3.3.6.1. Creating a shared NetApp SR over iSCSI
Device-config parameters for netapp SRs:

Parameter Name   Optional?
target           no
port             yes (the port to use for connecting to the NetApp server that hosts the SR; default is port 80)
usehttps         yes
username         no
password         no
aggregate        required for sr_create
FlexVols         yes
chapuser         yes
chappassword     yes
allocation       yes
asis             yes
Setting the SR other-config:multiplier parameter to a valid value adjusts the default multiplier attribute. By default XenServer allocates 2.4 times the requested space to account for snapshot and metadata overhead associated with each LUN. To save disk space, you can set the multiplier to a value >= 1. Setting the multiplier should only be done with extreme care by system administrators who understand the space allocation constraints of the NetApp filer. If you try to set the amount to less than 1, for example in an attempt to pre-allocate very little space for the LUN, the attempt will most likely fail.
Setting the SR other-config:enforce_allocation parameter to true resizes the FlexVols to precisely the amount specified by either the multiplier value above, or the default 2.4 value.
Note
This works on new VDI creation in the selected FlexVol, or on all FlexVols during an SR scan, and overrides any manual size adjustments made by the administrator to the SR FlexVols.
To create a NetApp SR, use the following command:

xe sr-create host-uuid=<valid_uuid> content-type=user \
name-label=<"Example shared NetApp SR"> shared=true \
device-config:target=<192.168.1.10> device-config:username=<admin_username> \
device-config:password=<admin_password> \
type=netapp
3.3.6.2. Managing VDIs in a NetApp SR
Due to the complex nature of mapping VM storage objects onto NetApp storage objects such as LUNs, FlexVols and disk aggregates, the plugin driver makes some general assumptions about how storage objects should be organized. The default number of FlexVols that are managed by an SR instance is 8, named XenStorage_<SR_UUID>_FV<#>, where # is a value between 0 and the total number of FlexVols assigned. This means that VDIs (LUNs) are evenly distributed across any one of the FlexVols at the point that the VDI is instantiated. The only exception to this rule is for groups of VM disks which are opportunistically assigned to the same FlexVol to assist with VM cloning, and when VDIs are created manually but passed a vmhint flag that informs the backend of the FlexVol to which the VDI should be assigned. The vmhint may be a random string, such as a UUID that is re-issued for all subsequent VDI creation operations to ensure grouping in the same FlexVol, or it can be a simple FlexVol number corresponding to the FlexVol naming convention applied on the filer. Using either of the following two commands, a VDI created manually using the CLI can be assigned to a specific FlexVol:

xe vdi-create uuid=<valid_vdi_uuid> sr-uuid=<valid_sr_uuid> \
sm-config:vmhint=<valid_vm_uuid>

xe vdi-create uuid=<valid_vdi_uuid> sr-uuid=<valid_sr_uuid> \
sm-config:vmhint=<valid_flexvol_number>
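One way to picture the grouping behaviour described above is as a stable mapping of the hint onto one of the SR's FlexVols. The sketch below is illustrative only: the naming convention XenStorage_<SR_UUID>_FV<#> is from the text, but the hash-based placement is an assumption for demonstration, not the actual driver algorithm.

```python
import zlib

NUM_FLEXVOLS = 8  # default number of FlexVols managed by an SR instance

def flexvol_for_hint(sr_uuid, vmhint, num_flexvols=NUM_FLEXVOLS):
    """Map a vmhint to one of the SR's FlexVol names.

    A numeric hint selects that FlexVol directly; any other string
    (for example a VM UUID) is hashed so the same hint always lands
    in the same FlexVol, keeping a VM's disks grouped together.
    """
    if vmhint.isdigit():
        index = int(vmhint) % num_flexvols
    else:
        index = zlib.crc32(vmhint.encode()) % num_flexvols
    return "XenStorage_%s_FV%d" % (sr_uuid, index)

# The same UUID hint always maps to the same FlexVol.
a = flexvol_for_hint("3f6e1ebd", "6e9e7b3a-9f0b-4f55-8a2d-1c9e0b2d4f6a")
b = flexvol_for_hint("3f6e1ebd", "6e9e7b3a-9f0b-4f55-8a2d-1c9e0b2d4f6a")
print(a == b)  # True
```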
3.3.6.3. Taking VDI snapshots with a NetApp SR
Cloning a VDI entails generating a snapshot of the FlexVol and then creating a LUN clone backed off the snapshot. When generating a VM snapshot you must snapshot each of the VM's disks in sequence. Because all the disks are expected to be located in the same FlexVol, and the FlexVol snapshot operates on all LUNs in the same FlexVol, it makes sense to re-use an existing snapshot for all subsequent LUN clones. By default, if no snapshot hint is passed into the backend driver, it will generate a random ID with which to name the FlexVol snapshot. There is a CLI override for this value, passed in as an epochhint. The first time the epochhint value is received, the backend generates a new snapshot based on the cookie name. Any subsequent snapshot requests with the same epochhint value will be backed off the existing snapshot:

xe vdi-snapshot uuid=<valid_vdi_uuid> driver-params:epochhint=<cookie>

During NetApp SR provisioning, additional disk space is reserved for snapshots. If you plan not to use the snapshotting functionality, you might want to free up this reserved space. To do so, you can reduce the value of the other-config:multiplier parameter. By default the value of the multiplier is 2.4, so the amount of space reserved is 2.4 times the amount of space that would be needed for the FlexVols themselves.
3.3.7. Shared iSCSI Storage
Shared iSCSI support using the software iSCSI initiator is implemented based on the Linux Volume Manager (LVM) and provides the same performance benefits provided by LVM VDIs in the local disk case. Shared iSCSI SRs using the software-based host initiator are capable of supporting VM agility using XenMotion: VMs can be started on any XenServer host in a resource pool and migrated between them with no noticeable downtime.
iSCSI SRs use the entire LUN specified at creation time and may not span more than one LUN. CHAP support is provided for client authentication, during both the data path initialization and the LUN discovery phases.
3.3.7.1. XenServer Host iSCSI configuration
All iSCSI initiators and targets must have a unique name to ensure they can be uniquely identified on the network. An initiator has an iSCSI initiator address, and a target has an iSCSI target address. Collectively these are called iSCSI Qualified Names, or IQNs.
XenServer hosts support a single iSCSI initiator which is automatically created and configured with a random IQN during host installation. The single initiator can be used to connect to multiple iSCSI targets concurrently.
iSCSI targets commonly provide access control using iSCSI initiator IQN lists, so all iSCSI targets/LUNs to be accessed by a XenServer host must be configured to allow access by the host's initiator IQN. Similarly, targets/LUNs to be used as shared iSCSI SRs must be configured to allow access by all host IQNs in the resource pool.
Note
iSCSI targets that do not provide access control will typically default to restricting LUN access to a single initiator to ensure data integrity. If an iSCSI LUN is intended for use as a shared SR across multiple XenServer hosts in a resource pool, ensure that multi-initiator access is enabled for the specified LUN.
The XenServer host IQN value can be adjusted using XenCenter, or using the CLI with the following command when using the iSCSI software initiator:

xe host-param-set uuid=<valid_host_id> other-config:iscsi_iqn=<new_initiator_iqn>
Warning
It is imperative that every iSCSI target and initiator have a unique IQN. If a non-unique IQN identifier is used, data corruption and/or denial of LUN access can occur.
Warning
Do not change the XenServer host IQN with iSCSI SRs attached. Doing so can result in failures connecting to new targets or existing SRs.
3.3.8. Managing Hardware Host Bus Adapters (HBAs)
1. Set the IP networking configuration for the HBA. This example assumes DHCP and HBA port 0. Specify the appropriate values if using static IP addressing or a multi-port HBA.
/opt/QLogic_Corporation/SANsurferiCLI/iscli -ipdhcp 0
2. Add a persistent iSCSI target to port 0 of the HBA:
/opt/QLogic_Corporation/SANsurferiCLI/iscli -pa 0 <iscsi_target_ip_address>
3. Use the xe sr-probe command to force a rescan of the HBA controller and display available LUNs. See Section 3.2.5, "Probing an SR" and Section 3.3.9.2, "Creating a shared LVM over Fibre Channel / iSCSI HBA or SAS SR (lvmohba)" for more details.
Note
This step is not required. Citrix recommends that only power users perform this process if it is necessary.
Each HBA-based LUN has a corresponding global device path entry under /dev/disk/by-scsibus in the format <SCSIid>-<adapter>:<bus>:<target>:<lun>, and a standard device path under /dev. To remove the device entries for LUNs no longer in use as SRs, use the following steps:
1. Use sr-forget or sr-destroy as appropriate to remove the SR from the XenServer host database. See Section 3.4.1, "Destroying or forgetting a SR" for details.
2. Remove the zoning configuration within the SAN for the desired LUN to the desired host.
3. Use the sr-probe command to determine the ADAPTER, BUS, TARGET, and LUN values corresponding to the LUN to be removed. See Section 3.2.5, "Probing an SR" for details.
4. Remove the device entry:
echo "1" > /sys/class/scsi_device/<adapter>:<bus>:<target>:<lun>/device/delete
Warning
Make absolutely sure you are certain which LUN you are removing. Accidentally removing a LUN required for host operation, such as the boot or root device, will render the host unusable.
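The <SCSIid>-<adapter>:<bus>:<target>:<lun> format used under /dev/disk/by-scsibus can be split apart programmatically, for example to recover the values needed for the /sys/class/scsi_device delete path in step 4. A minimal sketch; the example entry name is hypothetical:

```python
def parse_by_scsibus(entry):
    """Split a /dev/disk/by-scsibus entry name of the form
    <SCSIid>-<adapter>:<bus>:<target>:<lun> into its components."""
    # rpartition splits on the LAST '-', so a SCSIid that itself
    # contains '-' characters is still handled correctly.
    scsiid, _, hctl = entry.rpartition("-")
    adapter, bus, target, lun = (int(x) for x in hctl.split(":"))
    return scsiid, adapter, bus, target, lun

# Hypothetical entry for adapter 4, bus 0, target 4, LUN 2:
entry = "360a9800068666949673446387665336f-4:0:4:2"
scsiid, adapter, bus, target, lun = parse_by_scsibus(entry)
delete_path = "/sys/class/scsi_device/%d:%d:%d:%d/device/delete" % (adapter, bus, target, lun)
print(delete_path)
```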
3.3.9.1. Creating a shared LVM over iSCSI SR using the software iSCSI initiator (lvmoiscsi)
Device-config parameters for lvmoiscsi SRs:

Parameter Name        Required?
target                yes
targetIQN             yes
SCSIid                yes
chapuser              no
chappassword          no
port                  no
usediscoverynumber    no

To create a shared lvmoiscsi SR on a specific LUN of an iSCSI target, use the following command:

xe sr-create host-uuid=<valid_uuid> content-type=user \
name-label=<"Example shared LVM over iSCSI SR"> shared=true \
device-config:target=<target_ip> device-config:targetIQN=<target_iqn> \
device-config:SCSIid=<scsi_id> \
type=lvmoiscsi
3.3.9.2. Creating a shared LVM over Fibre Channel / iSCSI HBA or SAS SR (lvmohba)
SRs of type lvmohba can be created and managed using the xe CLI or XenCenter.
Device-config parameters for lvmohba SRs:

Parameter name   Description      Required?
SCSIid           Device SCSI ID   Yes

To create a shared lvmohba SR, perform the following steps on each host in the pool:
1. Zone in one or more LUNs to each XenServer host in the pool. This process is highly specific to the SAN equipment in use. Please refer to your SAN documentation for details.
2. If necessary, use the HBA CLI included in the XenServer host to configure the HBA:
Emulex: /usr/sbin/hbanyware
See Section 3.3.8, "Managing Hardware Host Bus Adapters (HBAs)" for an example of QLogic iSCSI HBA configuration. For more information on Fibre Channel and iSCSI HBAs, please refer to the Emulex and QLogic websites.
3. Use the sr-probe command to determine the global device path of the HBA LUN. sr-probe forces a rescan of HBAs installed in the system to detect any new LUNs that have been zoned to the host and returns a list of properties for each LUN found. Specify the host-uuid parameter to ensure the probe occurs on the desired host. The global device path returned as the <path> property will be common across all hosts in the pool and therefore must be used as the value for the device-config:device parameter when creating the SR. If multiple LUNs are present, use the vendor, LUN size, LUN serial number, or the SCSI ID as included in the <path> property to identify the desired LUN.
xe sr-probe type=lvmohba \
host-uuid=1212c7b3-f333-4a8d-a6fb-80c5b79b5b31

Error code: SR_BACKEND_FAILURE_90
Error parameters: , The request is missing the device parameter, \
<?xml version="1.0" ?>
<Devlist>
    <BlockDevice>
        <path>/dev/disk/by-id/scsi-360a9800068666949673446387665336f</path>
        <vendor>HITACHI</vendor>
        <serial>730157980002</serial>
        <size>80530636800</size>
        <adapter>4</adapter>
        <channel>0</channel>
        <id>4</id>
        <lun>2</lun>
        <hba>qla2xxx</hba>
    </BlockDevice>
    <Adapter>
        <host>Host4</host>
        <name>qla2xxx</name>
        <manufacturer>QLogic HBA Driver</manufacturer>
        <id>4</id>
    </Adapter>
</Devlist>
4. On the master host of the pool create the SR, specifying the global device path returned in the <path> property from sr-probe. PBDs will be created and plugged for each host in the pool automatically.

xe sr-create host-uuid=<valid_uuid> \
content-type=user \
name-label=<"Example shared LVM over HBA SR"> shared=true \
device-config:SCSIid=<device_scsi_id> type=lvmohba

Note
You can use the XenCenter Repair Storage Repository function to retry the PBD creation and plugging portions of the sr-create operation. This can be valuable in cases where the LUN zoning was incorrect for one or more hosts in a pool when the SR was created. Correct the zoning for the affected hosts and use the Repair Storage Repository function instead of removing and re-creating the SR.
3.3.10. NFS VHD
NFS is a ubiquitous form of storage infrastructure that is available in many environments. XenServer allows existing NFS servers that support NFS V3 over TCP/IP to be used immediately as a storage repository for virtual disks (VDIs). VDIs are stored in the Microsoft VHD format only. Moreover, as NFS SRs can be shared, VDIs stored in a shared SR allow VMs to be started on any XenServer host in a resource pool and be migrated between them using XenMotion with no noticeable downtime.
Creating an NFS SR requires the hostname or IP address of the NFS server. The sr-probe command provides a list of valid destination paths exported by the server on which the SR can be created. The NFS server must be configured to export the specified path to all XenServer hosts in the pool, or the creation of the SR and the plugging of the PBD record will fail.
As mentioned at the beginning of this chapter, VDIs stored on NFS are sparse. The image file is allocated as the VM writes data into the disk. This has the considerable benefit that VM image files take up only as much space on the NFS storage as is required. If a 100GB VDI is allocated for a new VM and an OS is installed, the VDI file will only reflect the size of the OS data that has been written to the disk rather than the entire 100GB.
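The sparse-allocation behaviour described above is the same as that of any sparse file: the file's logical size and the space it actually consumes on disk are independent. A minimal, filesystem-level illustration (not XenServer-specific; the exact block count reported depends on the underlying filesystem):

```python
import os
import tempfile

# Create a file with a large logical size but no written data blocks,
# analogous to a freshly-allocated VHD on an NFS SR.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(100 * 1024 * 1024)   # logical size: 100 MiB, nothing written
    path = f.name

st = os.stat(path)
logical = st.st_size            # 100 MiB regardless of filesystem
physical = st.st_blocks * 512   # bytes actually allocated on disk

print(logical, physical)        # physical is far smaller on sparse-aware filesystems
os.unlink(path)
```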
VHD files may also be chained, allowing two VDIs to share common data. In cases where an NFS-based VM is cloned, the resulting VMs will share the common on-disk data at the time of cloning. Each will proceed to make its own changes in an isolated copy-on-write version of the VDI. This feature allows NFS-based VMs to be quickly cloned from templates, facilitating very fast provisioning and deployment of new VMs.
Note
The maximum supported length of VHD chains is 30.
As VHD-based images require extra metadata to support sparseness and chaining, the format is not as high-performance as LVM-based storage. In cases where performance really matters, it is well worth forcibly allocating the sparse regions of an image file. This will improve performance at the cost of consuming additional disk space.
XenServer's NFS and VHD implementations assume that they have full control over the SR directory on the NFS server. Administrators should not modify the contents of the SR directory, as this can risk corrupting the contents of VDIs.
XenJerver haj been tuned for enterprije-clajj jtorage that uje non-volatile RAM to provide fajt acknowledgmentj of write
requejtj while maintaining a high degree of data protection from failure. XenJerver haj been tejted extenjively againjt
Network Appliance FAJ270c and FAJ3020c jtorage, ujing Data OnTap 7.2.2.
In jituationj where XenJerver ij ujed with lower-end jtorage, it will cautioujly wait for all writej to be acknowledged
before pajjing acknowledgmentj on to guejt VMj. Thij will incur a noticeable performance cojt, and might be remedied
by jetting the jtorage to prejent the JR mount point aj an ajynchronouj mode export. Ajynchronouj exportj
acknowledge writej that are not actually on dijk, and jo adminijtratorj jhould conjider the rijkj of failure carefully in theje
jituationj.
The XenJerver NFJ implementation ujej TCP by default. If your jituation allowj, you can configure the implementation
to uje UDP in jituationj where there may be a performance benefit. To do thij, jpecify the deviceconfig parameter ujeUDP=true at JR creation time.
Warning
Since VDIs on NFS SRs are created as sparse, administrators must ensure that there is enough disk space on the NFS SRs for
all required VDIs. XenServer hosts do not enforce that the space required for VDIs on NFS SRs is actually present.
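One way to see the gap between what sparse files appear to need and what is actually allocated is with du. This is a generic shell sketch; the directory and file names are illustrative, not a real SR layout:

```shell
# Compare apparent (virtual) size with actually allocated space for sparse
# files. The directory and file names are illustrative, not a real SR.
DIR=/tmp/example-sr
mkdir -p "$DIR"
truncate -s 8M "$DIR/vdi1.img"   # sparse file standing in for a VDI

apparent_kb=$(du --apparent-size -sk "$DIR" | cut -f1)
actual_kb=$(du -sk "$DIR" | cut -f1)
echo "apparent=${apparent_kb} KiB, actually allocated=${actual_kb} KiB"
rm -rf "$DIR"
```

The difference between the two numbers is the space the NFS server would still need to provide if the VDIs filled up, which is why the warning above matters.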
3.3.10.1. Creating a shared NFS SR (NFS)
Device-config parameters for NFS SRs:

Parameter Name   Description                                                  Required?
server           IP address or hostname of the NFS server                     Yes
serverpath       Path, including the NFS mount point, to the NFS server
                 directory that hosts the SR                                  Yes

xe sr-create host-uuid=<host_uuid> content-type=user \
  name-label=<"Example shared NFS SR"> shared=true \
  device-config:server=<192.168.1.10> device-config:serverpath=</export1> \
  type=nfs
Note
XenServer support for Fibre Channel does not support direct mapping of a LUN to a VM. HBA-based LUNs must be
mapped to the host and specified for use in an SR. VDIs within the SR are exposed to VMs as standard block devices.
Note
Running the StorageLink service in a VM within a resource pool to which the StorageLink service is providing storage is
not supported in combination with the XenServer High Availability (HA) features. To use CSLG SRs in combination with
HA, ensure the StorageLink service is running outside the HA-enabled pool.
CSLG SRs can be created using the xe CLI only. After creation, CSLG SRs can be viewed and managed using both the
xe CLI and XenCenter.
Because the CSLG SR can be used to access different storage arrays, the exact features available for a given CSLG SR
depend on the capabilities of the array. All CSLG SRs use a LUN-per-VDI model where a new LUN is provisioned for
each virtual disk (VDI).
CSLG SRs can co-exist with other SR types on the same storage array hardware, and multiple CSLG SRs can be defined
within the same resource pool.
The StorageLink service can be configured using the StorageLink Manager or from within the XenServer control domain
using the StorageLink Command Line Interface (CLI). To run the StorageLink CLI, use the following command,
where <hostname> is the name or IP address of the machine running the StorageLink service:

/opt/Citrix/StorageLink/bin/csl \
  server=<hostname>[:<port>][,<username>,<password>]

For more information about the StorageLink CLI, please see the StorageLink documentation or use
the /opt/Citrix/StorageLink/bin/csl help command.
3.3.12.1. Creating a shared StorageLink SR
SRs of type CSLG can only be created by using the xe Command Line Interface (CLI). Once created, CSLG SRs can be
managed using either XenCenter or the xe CLI.
The device-config parameters for CSLG SRs are:

Parameter name    Description                                                    Optional?
target            The server name or IP address of the machine running the
                  StorageLink service                                            No
storageSystemId   The storage system ID to use for allocating storage            No
storagePoolId     The storage pool ID within the specified storage system to
                  use for allocating storage                                     No
username          The username to use for connecting to the StorageLink
                  service                                                        Yes [a]
password          The password to use for connecting to the StorageLink
                  service                                                        Yes [a]
cslport           The port to use for connecting to the StorageLink service      Yes [a]
chapuser          The username to use for CHAP authentication                    Yes
chappassword      The password to use for CHAP authentication                    Yes
protocol          The storage protocol (fc or iscsi) to use on multi-protocol
                  storage systems                                                Yes
provision-type    The type of provisioning (thick or thin) to use for the SR     Yes
provision-options Additional provisioning options: set to dedup to use the
                  deduplication features supported by the storage system         Yes
raid-type         The level of RAID to use for the SR, as supported by the
                  storage array                                                  Yes

[a] If the username, password, or port configuration of the StorageLink service are changed from
the default then the appropriate parameter and value must be specified.
SRs of type cslg support two additional parameters that can be used with storage arrays that support LUN grouping
features, such as NetApp flexvols.

Parameter name    Description                                                    Optional?
pool-count        The number of LUN groups (pools) to create for the SR          Yes [a]
physical-size     The total size of the SR in MB. Each pool will be created
                  with a size equal to physical-size divided by pool-count.      Yes [a]
Note
When a new NetApp SR is created using StorageLink, by default a single FlexVol is created for the SR that contains all
LUNs created for the SR. To change this behaviour and specify the number of FlexVols to create and the size of each
FlexVol, use the sm-config:pool-size and sm-config:physical-size parameters. sm-config:pool-size specifies the number of FlexVols. sm-config:physical-size specifies the total size of
all FlexVols to be created, so that each FlexVol will be of size sm-config:physical-size divided by sm-config:pool-size.
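The sizing rule above is simple division; the following sketch illustrates it with made-up values:

```shell
# Each FlexVol is sized as physical-size / pool-size. Values are
# illustrative only, not defaults of any storage system.
physical_size=1024   # sm-config:physical-size, total size in MB
pool_size=4          # sm-config:pool-size, number of FlexVols

per_flexvol=$(( physical_size / pool_size ))
echo "each of the $pool_size FlexVols will be ${per_flexvol} MB"
```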
To create a CSLG SR

1. Configure the StorageLink service with the appropriate storage adapters and credentials.
2. Use the sr-probe command with the device-config:target parameter to identify the available storage systems:

xe sr-probe type=cslg device-config:target=192.168.128.10
<csl__storageSystemInfoList>
  <csl__storageSystemInfo>
    <friendlyName>50014380013C0240</friendlyName>
    <displayName>HP EVA (50014380013C0240)</displayName>
    <vendor>HP</vendor>
    <model>EVA</model>
    <serialNum>50014380013C0240</serialNum>
    <storageSystemId>HP__EVA__50014380013C0240</storageSystemId>
    <systemCapabilities>
      <capabilities>PROVISIONING</capabilities>
      <capabilities>MAPPING</capabilities>
      <capabilities>MULTIPLE_STORAGE_POOLS</capabilities>
      <capabilities>DIFF_SNAPSHOT</capabilities>
      <capabilities>CLONE</capabilities>
    </systemCapabilities>
    <protocolSupport>
      <capabilities>FC</capabilities>
    </protocolSupport>
    <csl__snapshotMethodInfoList>
      <csl__snapshotMethodInfo>
        <name>50014380013C0240</name>
        <displayName></displayName>
        <maxSnapshots>16</maxSnapshots>
        <supportedNodeTypes>
          <nodeType>STORAGE_VOLUME</nodeType>
        </supportedNodeTypes>
        <snapshotTypeList>
        </snapshotTypeList>
        <snapshotCapabilities>
        </snapshotCapabilities>
      </csl__snapshotMethodInfo>
      <csl__snapshotMethodInfo>
        <name>50014380013C0240</name>
        <displayName></displayName>
        <maxSnapshots>16</maxSnapshots>
        <supportedNodeTypes>
          <nodeType>STORAGE_VOLUME</nodeType>
        </supportedNodeTypes>
        <snapshotTypeList>
          <snapshotType>DIFF_SNAPSHOT</snapshotType>
        </snapshotTypeList>
        <snapshotCapabilities>
        </snapshotCapabilities>
      </csl__snapshotMethodInfo>
      <csl__snapshotMethodInfo>
        <name>50014380013C0240</name>
        <displayName></displayName>
        <maxSnapshots>16</maxSnapshots>
        <supportedNodeTypes>
          <nodeType>STORAGE_VOLUME</nodeType>
        </supportedNodeTypes>
        <snapshotTypeList>
          <snapshotType>CLONE</snapshotType>
        </snapshotTypeList>
        <snapshotCapabilities>
        </snapshotCapabilities>
      </csl__snapshotMethodInfo>
    </csl__snapshotMethodInfoList>
  </csl__storageSystemInfo>
</csl__storageSystemInfoList>
You can use grep to filter the sr-probe output to just the storage system IDs:

xe sr-probe type=cslg device-config:target=192.168.128.10 | grep storageSystemId

<storageSystemId>EMC__CLARIION__APM00074902515</storageSystemId>
<storageSystemId>HP__EVA__50014380013C0240</storageSystemId>
<storageSystemId>NETAPP__LUN__0AD4F00A</storageSystemId>
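If you want only the ID values without the surrounding XML tags, a sed expression can strip them. The sample variable below stands in for real sr-probe output:

```shell
# Strip the XML tags from sr-probe output to leave bare storage system IDs.
# probe_output is sample data standing in for `xe sr-probe ... | grep ...`.
probe_output='<storageSystemId>EMC__CLARIION__APM00074902515</storageSystemId>
<storageSystemId>HP__EVA__50014380013C0240</storageSystemId>'

printf '%s\n' "$probe_output" | \
  sed -n 's/.*<storageSystemId>\(.*\)<\/storageSystemId>.*/\1/p'
```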
3. Add the desired storage system ID to the sr-probe command to identify the storage pools available within the
specified storage system:

xe sr-probe type=cslg \
  device-config:target=192.168.128.10 \
  device-config:storageSystemId=HP__EVA__50014380013C0240
<?xml version="1.0" encoding="iso-8859-1"?>
<csl__storagePoolInfoList>
  <csl__storagePoolInfo>
    <displayName>Default Disk Group</displayName>
    <friendlyName>Default Disk Group</friendlyName>
    <storagePoolId>00010710B4080560B6AB08000080000000000400</storagePoolId>
    <parentStoragePoolId></parentStoragePoolId>
    <storageSystemId>HP__EVA__50014380013C0240</storageSystemId>
    <sizeInMB>1957099</sizeInMB>
    <freeSpaceInMB>1273067</freeSpaceInMB>
    <isDefault>No</isDefault>
    <status>0</status>
    <provisioningOptions>
      <supportedRaidTypes>
        <raidType>RAID0</raidType>
        <raidType>RAID1</raidType>
        <raidType>RAID5</raidType>
      </supportedRaidTypes>
      <supportedNodeTypes>
        <nodeType>STORAGE_VOLUME</nodeType>
      </supportedNodeTypes>
      <supportedProvisioningTypes>
      </supportedProvisioningTypes>
    </provisioningOptions>
  </csl__storagePoolInfo>
</csl__storagePoolInfoList>
You can use grep to filter the sr-probe output to just the storage pool IDs:

xe sr-probe type=cslg \
  device-config:target=192.168.128.10 \
  device-config:storageSystemId=HP__EVA__50014380013C0240 \
  | grep storagePoolId

<storagePoolId>00010710B4080560B6AB08000080000000000400</storagePoolId>
4. Create the SR, specifying the desired storage system and storage pool IDs:

xe sr-create type=cslg name-label=CSLG_EVA_1 shared=true \
  device-config:target=192.168.128.10 \
  device-config:storageSystemId=HP__EVA__50014380013C0240 \
  device-config:storagePoolId=00010710B4080560B6AB08000080000000000400
1. Unplug the PBD to detach the SR from the corresponding XenServer host:

xe pbd-unplug uuid=<pbd_uuid>

2. To destroy the SR, which deletes both the SR and corresponding PBD from the XenServer host database and
deletes the SR contents from the physical media:

xe sr-destroy uuid=<sr_uuid>

3. Or, to forget the SR, which removes the SR and corresponding PBD from the XenServer host database but
leaves the actual SR contents intact on the physical media:

xe sr-forget uuid=<sr_uuid>

Note
It might take some time for the software object corresponding to the SR to be garbage collected.
3.4.2. Introducing an SR
Introducing an SR that has been forgotten requires introducing the SR, creating a PBD, and manually plugging the PBD
into the appropriate XenServer hosts to activate the SR.
The following example introduces an SR of type lvmoiscsi.
1. Probe the existing SR to determine its UUID:

xe sr-probe type=lvmoiscsi device-config:target=<192.168.1.10> \
  device-config:targetIQN=<192.168.1.10:filer1> \
  device-config:SCSIid=<149455400000000000000000002000000b70200000f000000>
2. Introduce the existing SR UUID returned from the sr-probe command. The UUID of the new SR is returned:

xe sr-introduce content-type=user name-label=<"Example Shared LVM over iSCSI SR"> \
  shared=true uuid=<valid_sr_uuid> type=lvmoiscsi
3. Create a PBD to accompany the SR. The UUID of the new PBD is returned:

xe pbd-create type=lvmoiscsi host-uuid=<valid_uuid> sr-uuid=<valid_sr_uuid> \
  device-config:target=<192.168.0.1> \
  device-config:targetIQN=<192.168.1.10:filer1> \
  device-config:SCSIid=<149455400000000000000000002000000b70200000f000000>
4. Plug the PBD to attach the SR:

xe pbd-plug uuid=<pbd_uuid>

5. Verify the status of the PBD plug. If successful the currently-attached property will be true:

xe pbd-list sr-uuid=<sr_uuid>
Note
Steps 3 through 5 must be performed for each host in the resource pool, and can also be performed using the Repair
Storage Repository function in XenCenter.
3.4.3. Resizing an SR
If you have resized the LUN on which an iSCSI or HBA SR is based, use the following procedures to reflect the size change
in XenServer:

1. iSCSI SRs - unplug all PBDs on the host that reference LUNs on the same target. This is required to reset the
iSCSI connection to the target, which in turn will allow the change in LUN size to be recognized when the PBDs
are replugged.
2.

Note
In previous versions of XenServer explicit commands were required to resize the physical volume group of iSCSI and HBA
SRs. These commands are now issued as part of the PBD plug operation and are no longer required.
3. Set the SR to shared:

xe sr-param-set shared=true uuid=<local_fc_sr>

4. Within XenCenter the SR is moved from the host level to the pool level, indicating that it is now shared. The SR
will be marked with a red exclamation mark to show that it is not currently plugged on all hosts in the pool.
5. Select the SR and then select the Storage > Repair Storage Repository menu option.
6. Click Repair to create and plug a PBD for each host in the pool.
The XenCenter Copy VM function creates copies of all VDIs for a selected VM on the same or a different SR. The source
VM and VDIs are not affected by default. To move the VM to the selected SR rather than creating a copy, select
the Remove original VM option in the Copy Virtual Machine dialog box.
1. Within XenCenter select the VM and then select the VM > Copy VM menu option.
2. Use the vbd-list command to identify the UUIDs of the VM's VDIs:

xe vbd-list vm-uuid=<valid_vm_uuid>

Note
The vbd-list command displays both the VBD and VDI UUIDs. Be sure to record the VDI UUIDs rather
than the VBD UUIDs.
3. In XenCenter select the VM's Storage tab. For each VDI to be moved, select the VDI and click
the Detach button. This step can also be done using the vbd-destroy command.

Note
If you use the vbd-destroy command to detach the VDI UUIDs, be sure to first check if the VBD has
the parameter other-config:owner set to true. If so, set it to false. Issuing the vbd-destroy command with other-config:owner=true will also destroy the associated VDI.
4. Use the vdi-copy command to copy each of the VM's VDIs to be moved to the desired SR:

xe vdi-copy uuid=<valid_vdi_uuid> sr-uuid=<valid_sr_uuid>

5. Within XenCenter select the VM's Storage tab. Click the Attach button and select the VDIs from the new SR.
This step can also be done using the vbd-create command.
6. To delete the original VDIs, within XenCenter select the Storage tab of the original SR. The original VDIs will
be listed with an empty value for the VM field and can be deleted with the Delete button.
For general performance, the default disk scheduler noop is applied on all new SR types. The noop scheduler provides
the fairest performance for competing VMs accessing the same device. To apply disk QoS (see Section 3.5, Virtual disk
QoS settings) it is necessary to override the default setting and assign the cfq disk scheduler to the SR. The corresponding
PBD must be unplugged and re-plugged for the scheduler parameter to take effect. The disk scheduler can be adjusted
using the following command:

xe sr-param-set other-config:scheduler=noop|cfq|anticipatory|deadline \
  uuid=<valid_sr_uuid>
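On the host itself, the Linux kernel reports the scheduler in use for a block device in /sys/block/<device>/queue/scheduler, with the active one shown in square brackets. The sketch below parses that format from a sample string rather than reading sysfs directly:

```shell
# Parse the active I/O scheduler from the bracketed sysfs format.
# scheduler_line is sample data standing in for
# `cat /sys/block/<device>/queue/scheduler`.
scheduler_line="noop anticipatory deadline [cfq]"

active=$(printf '%s\n' "$scheduler_line" | sed -n 's/.*\[\(.*\)\].*/\1/p')
echo "active scheduler: $active"
```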
Note
This will not affect EqualLogic, NetApp or NFS storage.
Note
Remember to set the scheduler to cfq on the SR, and to ensure that the PBD has been re-plugged in order for the
scheduler change to take effect.
The first parameter is qos_algorithm_type. This parameter needs to be set to the value ionice, which is the only
type of QoS algorithm supported for virtual disks in this release.
The QoS parameters themselves are set with key/value pairs assigned to the qos_algorithm_param parameter. For
virtual disks, qos_algorithm_param takes a sched key, and depending on the value, also requires a class key.
Possible values of qos_algorithm_param:sched are:
sched=rt or sched=real-time sets the QoS scheduling parameter to real time priority, which requires a
class parameter to set a value
sched=idle sets the QoS scheduling parameter to idle priority, which requires no class parameter to set any
value
sched=<anything> sets the QoS scheduling parameter to best effort priority, which requires a class
parameter to set a value
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_type=ionice
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_params:sched=rt
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_params:class=5
xe sr-param-set uuid=<sr_uuid> other-config:scheduler=cfq
xe pbd-plug uuid=<pbd_uuid>
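The sched values above correspond to the standard Linux ionice scheduling classes (1 = realtime, 2 = best effort, 3 = idle). The helper below is an illustration of that correspondence, not part of the xe CLI:

```shell
# Illustrative mapping of qos_algorithm_params:sched values to Linux ionice
# scheduling classes. Not a XenServer command; for illustration only.
ionice_class_for_sched() {
    case "$1" in
        rt|real-time) echo 1 ;;  # realtime: a class (priority) value is required
        idle)         echo 3 ;;  # idle: no class value needed
        *)            echo 2 ;;  # anything else: best effort, class value required
    esac
}

ionice_class_for_sched rt
ionice_class_for_sched idle
ionice_class_for_sched weighted
```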
Chapter 4. Networking
Table of Contents
4.1. XenServer networking overview
4.1.1. Network objects
4.1.2. Networks
4.1.3. VLANs
4.1.4. NIC bonds
4.1.5. Initial networking configuration
4.2. Managing networking configuration
4.2.1. Creating networks in a standalone server
4.2.2. Creating networks in resource pools
4.2.3. Creating VLANs
4.2.4. Creating NIC bonds on a standalone host
4.2.5. Creating NIC bonds in resource pools
4.2.6. Configuring a dedicated storage NIC
4.2.7. Controlling Quality of Service (QoS)
4.2.8. Changing networking configuration options
4.2.9. NIC/PIF ordering in resource pools
4.3. Networking Troubleshooting
4.3.1. Diagnosing network corruption
4.3.2. Recovering from a bad network configuration
This chapter discusses how physical network interface cards (NICs) in XenServer hosts are used to enable networking within
Virtual Machines (VMs). XenServer supports up to 6 physical network interfaces (or up to 6 pairs of bonded network
interfaces) per XenServer host and up to 7 virtual network interfaces per VM.
Note
XenServer provides automated configuration and management of NICs using the xe command line interface (CLI).
Unlike previous XenServer versions, the host networking configuration files should not be edited directly in most cases;
where a CLI command is available, do not edit the underlying files.
If you are already familiar with XenServer networking concepts, you may want to skip ahead to one of the following
sections:
For procedures on how to create networks for standalone XenServer hosts, see Section 4.2.1, Creating networks
in a standalone server.
For procedures on how to create networks for XenServer hosts that are configured in a resource pool,
see Section 4.2.2, Creating networks in resource pools.
For procedures on how to create VLANs for XenServer hosts, either standalone or part of a resource pool,
see Section 4.2.3, Creating VLANs.
For procedures on how to create bonds for standalone XenServer hosts, see Section 4.2.4, Creating NIC bonds
on a standalone host.
For procedures on how to create bonds for XenServer hosts that are configured in a resource pool,
see Section 4.2.5, Creating NIC bonds in resource pools.
Note
Some networking options have different behaviors when used with standalone XenServer hosts compared to resource
pools. This chapter contains sections on general information that applies to both standalone hosts and pools, followed by
specific information and procedures for each.
Both XenCenter and the xe CLI allow configuration of networking options, control over which NIC is used for
management operations, and creation of advanced networking features such as virtual local area networks (VLANs) and
NIC bonds.
From XenCenter much of the complexity of XenServer networking is hidden. There is no mention of PIFs for XenServer
hosts nor VIFs for VMs.
4.1.2. Networks
Each XenServer host has one or more networks, which are virtual Ethernet switches. Networks without an association to a
PIF are considered internal, and can be used to provide connectivity only between VMs on a given XenServer host, with
no connection to the outside world. Networks with a PIF association are considered external, and provide a bridge
between VIFs and the PIF connected to the network, enabling connectivity to resources available through the PIF's NIC.
4.1.3. VLANs
Virtual Local Area Networks (VLANs), as defined by the IEEE 802.1Q standard, allow a single physical network to support
multiple logical networks. XenServer hosts can work with VLANs in multiple ways.
Note
All supported VLAN configurations are equally applicable to pools and standalone hosts, and bonded and non-bonded
configurations.
4.1.3.1. Using VLANs with host management interfaces
Switch ports configured to perform 802.1Q VLAN tagging/untagging, commonly referred to as ports with a native
VLAN or as access mode ports, can be used with XenServer management interfaces to place management traffic on a
desired VLAN. In this case the XenServer host is unaware of any VLAN configuration.
XenServer management interfaces cannot be assigned to a XenServer VLAN via a trunk port.
4.1.3.2. Using VLANs with virtual machines
Switch ports configured as 802.1Q VLAN trunk ports can be used in combination with the XenServer VLAN features to
connect guest virtual network interfaces (VIFs) to specific VLANs. In this case the XenServer host performs the VLAN
tagging/untagging functions for the guest, which is unaware of any VLAN configuration.
XenServer VLANs are represented by additional PIF objects representing VLAN interfaces corresponding to a specified
VLAN tag. XenServer networks can then be connected to the PIF representing the physical NIC to see all traffic on the
NIC, or to a PIF representing a VLAN to see only the traffic with the specified VLAN tag.
For procedures on how to create VLANs for XenServer hosts, either standalone or part of a resource pool,
see Section 4.2.3, Creating VLANs.
4.1.3.3. Using VLANs with dedicated storage NICs
Dedicated storage NICs can be configured to use native VLAN / access mode ports as described above for management
interfaces, or with trunk ports and XenServer VLANs as described above for virtual machines. To configure dedicated
storage NICs, see Section 4.2.6, Configuring a dedicated storage NIC.
4.1.3.4. Combining management interfaces and guest VLANs on a single host NIC
A single switch port can be configured with both trunk and native VLANs, allowing one host NIC to be used for a
management interface (on the native VLAN) and for connecting guest VIFs to specific VLAN IDs.
traffic (to connect to it with XenCenter for management, or to connect to shared network storage), one IP configuration
is required per bond. (Incidentally, this is true of unbonded PIFs as well, and is unchanged from XenServer 4.1.0.)
Gratuitous ARP packets are sent when assignment of traffic changes from one interface to another as a result of fail-over.
Re-balancing is provided by the existing ALB re-balance capabilities: the number of bytes going over each slave
(interface) is tracked over a given period. When a packet is to be sent that contains a new source MAC address it is
assigned to the slave interface with the lowest utilization. Traffic is re-balanced every 10 seconds.
Note
Bonding is set up with an Up Delay of 31000ms and a Down Delay of 200ms. The seemingly long Up Delay is purposeful
because of the time taken by some switches to actually start routing traffic. Without it, when a link comes back after
failing, the bond might rebalance traffic onto it before the switch is ready to pass traffic. If you want to move both
connections to a different switch, move one, then wait 31 seconds for it to be used again before moving the other.
2. Create the network with the network-create command, which returns the UUID of the newly created network:

xe network-create name-label=<mynetwork>

At this point the network is not connected to a PIF and therefore is internal.
into the pool-wide Network 0 network. The same will be true for hosts with eth1 NICs and Network 1, as well as
other NICs present in at least one XenServer host in the pool.
If one XenServer host has a different number of NICs than other hosts in the pool, complications can arise because not all
pool networks will be valid for all pool hosts. For example, if hosts host1 and host2 are in the same pool and host1 has four
NICs while host2 only has two, only the networks connected to PIFs corresponding to eth0 and eth1 will be valid on host2.
VMs on host1 with VIFs connected to networks corresponding to eth2 and eth3 will not be able to migrate to host host2.
All NICs of all XenServer hosts within a resource pool must be configured with the same MTU size.
2. Create a new network for use with the VLAN. The UUID of the new network is returned:

xe network-create name-label=network5

3. Use the pif-list command to find the UUID of the PIF corresponding to the physical NIC supporting the
desired VLAN tag. The UUIDs and device names of all PIFs are returned, including any existing VLANs:

xe pif-list

4. Create a VLAN object specifying the desired physical PIF and VLAN tag on all VMs to be connected to the new
VLAN. A new PIF will be created and plugged into the specified network. The UUID of the new PIF object is
returned:

xe vlan-create network-uuid=<network_uuid> pif-uuid=<pif_uuid> vlan=5

5. Attach VM VIFs to the new network. See Section 4.2.1, Creating networks in a standalone server for more
details.
Creating a bond on a dual-NIC host implies that the PIF/NIC currently in use as the management interface for the host
will be subsumed by the bond. The additional steps required to move the management interface to the bond PIF are
included.
Bonding two NICs together

1. Use XenCenter or the vm-shutdown command to shut down all VMs on the host, thereby forcing all VIFs to be
unplugged from their current networks. The existing VIFs will be invalid after the bond is enabled.

xe vm-shutdown uuid=<vm_uuid>

2. Use the network-create command to create a new network for use with the bonded NIC. The UUID of the
new network is returned:

xe network-create name-label=<bond0>

3. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond:

xe pif-list

4. Use the bond-create command to create the bond by specifying the newly created network UUID and the
UUIDs of the PIFs to be bonded, separated by commas. The UUID for the bond is returned:

xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>

Note
See Section 4.2.4.2, Controlling the MAC address of the bond for details on controlling the MAC address
used for the bond PIF.

5. Use the pif-list command to determine the UUID of the new bond PIF:

xe pif-list device=<bond0>

6. Use the pif-reconfigure-ip command to configure the desired management interface IP address settings
for the bond PIF. See Chapter 8, Command line interface for more detail on the options available for the pif-reconfigure-ip command.

xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP

7. Use the host-management-reconfigure command to move the management interface from the existing
physical PIF to the bond PIF. This step will activate the bond:

xe host-management-reconfigure pif-uuid=<bond_pif_uuid>

8. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF
previously used for the management interface. This step is not strictly necessary but might help reduce confusion
when reviewing the host networking configuration.

xe pif-reconfigure-ip uuid=<old_management_pif_uuid> mode=None

9. Move existing VMs to the bond network using the vif-destroy and vif-create commands. This step can
also be completed using XenCenter by editing the VM configuration and connecting the existing VIFs of a VM to
the bond network.
10. Restart the VMs shut down in step 1.
configuration manually on the master and each of the members of the pool. Adding a NIC bond to an existing pool after
VMs have been installed is also a disruptive operation, as all VMs in the pool must be shut down.
Citrix recommends using XenCenter to create NIC bonds. For details, refer to the XenCenter help.
This section describes using the xe CLI to create bonded NIC interfaces on XenServer hosts that comprise a resource pool.
See Section 4.2.4.1, Creating a NIC bond on a dual-NIC host for details on using the xe CLI to create NIC bonds on a
standalone XenServer host.
Warning
Do not attempt to create network bonds while HA is enabled. The process of bond creation will disturb the in-progress HA
heartbeating and cause hosts to self-fence (shut themselves down); subsequently they will likely fail to reboot properly and
will need the host-emergency-ha-disable command to recover.
4.2.5.1. Adding NIC bonds to new resource pools

1. Select the host you want to be the master. The master host belongs to an unnamed pool by default. To create a
resource pool with the CLI, rename the existing nameless pool:

xe pool-param-set name-label=<"New Pool"> uuid=<pool_uuid>

2. Create the bond on the master as follows:

a. Use the network-create command to create a new pool-wide network for use with the bonded
NICs. The UUID of the new network is returned:

xe network-create name-label=<network_name>

b. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond:

xe pif-list

c. Use the bond-create command to create the bond, specifying the network UUID created in step a
and the UUIDs of the PIFs to be bonded, separated by commas. The UUID for the bond is returned:

xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>

Note
See Section 4.2.4.2, Controlling the MAC address of the bond for details on controlling the MAC
address used for the bond PIF.

d. Use the pif-list command to determine the UUID of the new bond PIF:

xe pif-list network-uuid=<network_uuid>

e. Use the pif-reconfigure-ip command to configure the desired management interface IP address
settings for the bond PIF:

xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP

f. Use the host-management-reconfigure command to move the management interface from the
existing physical PIF to the bond PIF. This step activates the bond:

xe host-management-reconfigure pif-uuid=<bond_pif_uuid>

g. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF previously used for the management interface. This step is not strictly necessary but might help
reduce confusion when reviewing the host networking configuration.

xe pif-reconfigure-ip uuid=<old_management_pif_uuid> mode=None
3. Open a console on a host that you want to join to the pool and run the command:

xe pool-join master-address=<host1> master-username=root master-password=<password>

The network and bond information is automatically replicated to the new host. However, the management
interface is not automatically moved from the host NIC to the bonded NIC. Move the management interface on
the host to enable the bond as follows:

a. Use the host-list command to find the UUID of the host being configured:

xe host-list

b. Use the pif-list command to determine the UUID of the bond PIF on the new host. Include
the host-uuid parameter to list only the PIFs on the host being configured:

xe pif-list network-name-label=<network_name> host-uuid=<host_uuid>

c. Use the pif-reconfigure-ip command to configure the desired management interface IP address
settings for the bond PIF:

xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP

d. Use the host-management-reconfigure command to move the management interface from the
existing physical PIF to the bond PIF. This step activates the bond:

xe host-management-reconfigure pif-uuid=<bond_pif_uuid>

e. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF previously used for the management interface. This step is not strictly necessary but may help
reduce confusion when reviewing the host networking configuration. This command must be run directly on
the host server:

xe pif-reconfigure-ip uuid=<old_mgmt_pif_uuid> mode=None

4. For each additional host you want to join to the pool, repeat step 3 to move the management interface
on that host and to enable the bond.
Warning
Do not attempt to create network bonds while HA is enabled. The process of bond creation disturbs the in-progress HA
heartbeating and causes hosts to self-fence (shut themselves down); subsequently they will likely fail to reboot properly and
you will need to run the host-emergency-ha-disable command to recover them.
Note
If you are not using XenCenter for NIC bonding, the quickest way to create pool-wide NIC bonds is to create the bond
on the master, and then restart the other pool members. Alternatively you can use the service xapi restart command. This causes the bond and VLAN settings on the master to be inherited by each host. The
management interface of each host must, however, be manually reconfigured.
When adding a NIC bond to an existing pool, the bond must be manually created on each host in the pool. The steps
below can be used to add NIC bonds on both the pool master and other hosts with the following requirement:
add the bond to the pool master first, and then to the other hosts.

1. Use the network-create command to create a new pool-wide network for use with the bonded NICs. This
step should only be performed once per pool. The UUID of the new network is returned:

xe network-create name-label=<bond0>
2.
Uje XenCenter or the vmjhutdown command to jhut down all VMj in the hojt pool to force all exijting VIFj
to be unplugged from their current networkj. The exijting VIFj will be invalid after the bond ij enabled.
xevmjhutdownuuid=<vm_uuid>
3.
Uje the hojtlijt command to find the UUID of the hojt being configured:
xehojtlijt
4.
Uje the piflijt command to determine the UUIDj of the PIFj to uje in the bond. Include the hojt-
uuid parameter to lijt only the PIFj on the hojt being configured:
xepiflijthojtuuid=<hojt_uuid>
5.
Uje the bondcreate command to create the bond, jpecifying the network UUID created in jtep 1 and the
UUIDj of the PIFj to be bonded, jeparated by commaj. The UUID for the bond ij returned.
xebondcreatenetworkuuid=<network_uuid>pif
uuidj=<pif_uuid_1>,<pif_uuid_2>
Note
Jee Jection 4.2.4.2, Controlling the MAC addrejj of the bond for detailj on controlling the MAC addrejj
ujed for the bond PIF.
6.
Uje the piflijt command to determine the UUID of the new bond PIF. Include the hojt-
uuid parameter to lijt only the PIFj on the hojt being configured:
xepiflijtdevice=bond0hojtuuid=<hojt_uuid>
7.
Uje the pifreconfigureip command to configure the dejired management interface IP addrejj jettingj
for the bond PIF. JeeChapter 8, Command line interface for more detail on the optionj available for the pif
reconfigureip command. Thij command mujt be run directly on the hojt:
xepifreconfigureipuuid=<bond_pif_uuid>mode=DHCP
8.
Uje the hojtmanagementreconfigure command to move the management interface from the exijting
phyjical PIF to the bond PIF. Thij jtep will activate the bond. Thij command mujt be run directly on the hojt:
xehojtmanagementreconfigurepifuuid=<bond_pif_uuid>
9.
Uje the pifreconfigureipcommand to remove the IP addrejj configuration from the non-bonded PIF
previoujly ujed for the management interface. Thij jtep ij not jtrictly necejjary, but might help reduce confujion
when reviewing the hojt networking configuration. Thij command mujt be run directly on the hojt:
xepifreconfigureipuuid=<old_management_pif_uuid>mode=None
10.
Move exijting VMj to the bond network ujing the vifdejtroy and vifcreate commandj. Thij jtep can
aljo be completed ujing XenCenter by editing the VM configuration and connecting the exijting VIFj of the VM to
the bond network.
11.
12.
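The per-host command sequence above can be sketched as a single dry-run script. The `xe()` stub below only prints each command instead of executing it, and every UUID is a placeholder: on a real host you would remove the stub and substitute the values returned by network-create, bond-create, and pif-list.

```shell
#!/bin/sh
# Dry-run sketch of the bonding procedure above (steps 1-9).
# The xe() stub prints each command; all UUIDs are placeholders.
xe() { printf 'xe %s\n' "$*"; }

bond_management_nics() {
  xe network-create name-label=bond0                               # step 1: once per pool
  xe pif-list host-uuid='<host_uuid>'                              # step 4: find member PIFs
  xe bond-create network-uuid='<network_uuid>' pif-uuids='<pif_uuid_1>,<pif_uuid_2>'
  xe pif-list device=bond0 host-uuid='<host_uuid>'                 # step 6: find the bond PIF
  xe pif-reconfigure-ip uuid='<bond_pif_uuid>' mode=DHCP           # step 7: run on the host
  xe host-management-reconfigure pif-uuid='<bond_pif_uuid>'        # step 8: activates the bond
  xe pif-reconfigure-ip uuid='<old_management_pif_uuid>' mode=None # step 9: tidy the old PIF
}
bond_management_nics
```

Because steps 7-9 must run directly on the host console, a script like this would be executed locally on each host after the VMs are shut down.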
Note
Before dedicating a network interface as a storage interface for use with iSCSI or NFS SRs, ensure that the dedicated
interface uses a separate IP subnet which is not routable from the main management interface. If this is not enforced,
then storage traffic may be directed over the main management interface after a host reboot, due to the order in which
network interfaces are initialized.
To assign NIC functions using the xe CLI
1. Ensure that the PIF is on a separate subnet, or that routing is configured to suit your network topology, in order to
force the desired traffic over the selected PIF.
2. Set up an IP configuration for the PIF, adding appropriate values for the mode parameter and, if using static IP
addressing, the IP, netmask, gateway, and DNS parameters:
xe pif-reconfigure-ip mode=<DHCP|Static> uuid=<pif-uuid>
3. Set the PIF's disallow-unplug parameter to true, and set its management purpose to "Storage":
xe pif-param-set disallow-unplug=true uuid=<pif-uuid>
xe pif-param-set other-config:management_purpose="Storage" uuid=<pif-uuid>
If you want to use a storage interface that can also be routed from the management interface (bearing in mind that this
configuration is not recommended), then you have two options:
After a host reboot, ensure that the storage interface is correctly configured, and use the xe pbd-unplug
and xe pbd-plug commands to reinitialize the storage connections on the host. This will restart
the storage connection and route it over the correct interface.
Alternatively, you can use xe pif-forget to remove the interface from the XenServer database and
manually configure it in the control domain. This is an advanced option and requires you to be familiar with
how to manually configure Linux networking.
Citrix Essentials for XenServer allows an optional Quality of Service (QoS) value to be set on VM virtual network interfaces
(VIFs) using the CLI. The supported QoS algorithm type is rate limiting, specified as a maximum transfer rate for the VIF in
Kb per second.
For example, to limit a VIF to a maximum transfer rate of 100kb/s, use the vif-param-set command:
xe vif-param-set uuid=<vif_uuid> qos_algorithm_type=ratelimit
xe vif-param-set uuid=<vif_uuid> qos_algorithm_params:kbps=100
4.2.8.1. Hostname
The system hostname can be changed using the host-set-hostname-live CLI command:
xe host-set-hostname-live host-uuid=<host_uuid> host-name=example
The underlying control domain hostname changes dynamically to reflect the new hostname.
4.2.8.2. DNS servers
To add or remove DNS servers in the IP addressing configuration of a XenServer host, use the pif-reconfigure-ip
command. For example, for a PIF with a static IP:
pif-reconfigure-ip uuid=<pif_uuid> mode=static DNS=<new_dns_ip>
4.2.8.3. Changing IP address configuration for a standalone host
Network interface configuration can be changed using the xe CLI. The underlying network configuration scripts should
not be modified directly.
To modify the IP address configuration of a PIF, use the pif-reconfigure-ip CLI command. See Section 8.4.11.4,
pif-reconfigure-ip for details on the parameters of the pif-reconfigure-ip command.
Note
See Section 4.2.8.4, Changing IP address configuration in resource pools for details on changing host IP addresses in
resource pools.
4.2.8.4. Changing IP address configuration in resource pools
XenServer hosts in resource pools have a single management IP address used for management and communication to
and from other hosts in the pool. The steps required to change the IP address of a host's management interface are
different for the master and for other hosts.
Note
Caution should be used when changing the IP address of a server, and other networking parameters. Depending upon
the network topology and the change being made, connections to network storage may be lost. If this happens, the
storage must be replugged using the Repair Storage function in XenCenter, or the pbd-plug command using the CLI.
For this reason, it may be advisable to migrate VMs away from the server before changing its IP configuration.
Changing the IP address of a pool member host
1. Use the pif-reconfigure-ip CLI command to set the IP address as desired. See Chapter 8, Command line
interface for details on the parameters of the pif-reconfigure-ip command:
xe pif-reconfigure-ip uuid=<pif_uuid> mode=DHCP
2. Use the host-list CLI command to confirm that the member host has successfully reconnected to the master
host by checking that all the other XenServer hosts in the pool are visible:
xe host-list
Changing the IP address of the master XenServer host requires additional steps because each of the member hosts uses the
advertised IP address of the pool master for communication and will not know how to contact the master when its IP
address changes.
Whenever possible, use a dedicated IP address that is not likely to change for the lifetime of the pool for pool masters.
To change the IP address of a pool master host
1. Use the pif-reconfigure-ip CLI command to set the IP address as desired. See Chapter 8, Command line
interface for details on the parameters of the pif-reconfigure-ip command:
xe pif-reconfigure-ip uuid=<pif_uuid> mode=DHCP
2. When the IP address of the pool master host is changed, all member hosts will enter into an emergency mode
when they fail to contact the master host.
3. On the master XenServer host, use the pool-recover-slaves command to force the master to contact
each of the member hosts and inform them of the new master IP address:
xe pool-recover-slaves
Refer to Section 6.4.2, Master failures for more information on emergency mode.
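The two-command recovery sequence for the master can be sketched as a dry-run script. The `xe()` stub only prints each command, and `<pif_uuid>` is a placeholder for the master's management PIF:

```shell
#!/bin/sh
# Dry-run sketch of changing the pool master's management IP address.
# The xe() stub prints each command instead of executing it.
xe() { printf 'xe %s\n' "$*"; }

change_master_ip() {
  # 1. Reconfigure the master's management PIF; members drop into emergency mode.
  xe pif-reconfigure-ip uuid='<pif_uuid>' mode=DHCP
  # 2. Tell every member host the master's new address.
  xe pool-recover-slaves
}
change_master_ip
```

Both commands run on the master itself; the member hosts remain in emergency mode until pool-recover-slaves completes.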
4.2.8.5. Management interface
When XenServer is installed on a host with multiple NICs, one NIC is selected for use as the management interface. The
management interface is used for XenCenter connections to the host and for host-to-host communication.
To change the NIC used for the management interface
1. Use the pif-list command to determine which PIF corresponds to the NIC to be used as the management
interface. The UUID of each PIF is returned.
xe pif-list
2. Use the pif-param-list command to verify the IP addressing configuration for the PIF that will be used for
the management interface. If necessary, use the pif-reconfigure-ip command to configure IP addressing for
the PIF to be used. See Chapter 8, Command line interface for more detail on the options available for the
pif-reconfigure-ip command.
xe pif-param-list uuid=<pif_uuid>
3. Use the host-management-reconfigure CLI command to change the PIF used for the management
interface. If this host is part of a resource pool, this command must be issued on the member host console:
xe host-management-reconfigure pif-uuid=<pif_uuid>
Warning
Putting the management interface on a VLAN network is not supported.
4.2.8.6. Disabling management access
To disable remote access to the management console entirely, use the host-management-disable CLI command.
Warning
Once the management interface is disabled, you will have to log in on the physical host console to perform management
tasks, and external interfaces such as XenCenter will no longer work.
4.2.8.7. Adding a new phyjical NIC
Injtall a new phyjical NIC on a XenJerver hojt in the ujual manner. Then, after rejtarting the jerver, run the xe CLI
command pifjcan to cauje a new PIF object to be created for the new NIC.
xepiflijtparamj=uuid,device,MAC,currentlyattached,carrier,management,\
IPconfigurationmode
uuid(RO):1ef8209d5db5cf693fe60e8d24f8f518
device(RO):eth0
MAC(RO):00:19:bb:2d:7e:8a
currentlyattached(RO):true
management(RO):true
IPconfigurationmode(RO):DHCP
carrier(RO):true
uuid(RO):829fd4762bbb67bb139fd607c09e9110
device(RO):eth1
MAC(RO):00:19:bb:2d:7e:7a
currentlyattached(RO):falje
management(RO):falje
IPconfigurationmode(RO):None
carrier(RO):true
If the hojtj have already been joined in a pool, add the hojt-uuid parameter to the piflijt command to jcope
the rejultj to the PIFj on a given hojt.
4.2.9.2. Re-ordering NICs
It is not possible to directly rename a PIF, although you can use the pif-forget and pif-introduce commands to
achieve the same effect, with the following restrictions:
The XenServer host must be standalone and not joined to a resource pool.
Re-ordering a PIF configured as the management interface of the host requires additional steps, which are
included in the example below. Because the management interface must first be disabled, the commands
must be entered directly on the host console.
For the example configuration shown above, use the following steps to change the NIC ordering so
that eth0 corresponds to the device with a MAC address of 00:19:bb:2d:7e:7a:
1. Use XenCenter or the vm-shutdown command to shut down all VMs in the pool to force existing VIFs to be
unplugged from their networks.
xe vm-shutdown uuid=<vm_uuid>
2. Use the host-management-disable command to disable the management interface:
xe host-management-disable
3. Use the pif-forget command to remove the two incorrect PIF records:
xe pif-forget uuid=1ef8209d-5db5-cf69-3fe6-0e8d24f8f518
xe pif-forget uuid=829fd476-2bbb-67bb-139f-d607c09e9110
4. Use the pif-introduce command to re-introduce the devices with the desired naming:
xe pif-introduce device=eth0 host-uuid=<host_uuid> mac=00:19:bb:2d:7e:7a
xe pif-introduce device=eth1 host-uuid=<host_uuid> mac=00:19:bb:2d:7e:8a
5. Use the pif-list command again to verify the new configuration:
xe pif-list params=uuid,device,MAC
6. Use the pif-reconfigure-ip command to reset the management interface IP addressing configuration.
See Chapter 8, Command line interface for details on the parameters of the pif-reconfigure-ip command.
xe pif-reconfigure-ip uuid=<728d9e7f-62ed-a477-2c71-3974d75972eb> mode=dhcp
7. Use the host-management-reconfigure command to set the management interface to the desired PIF
and re-enable external management connectivity to the host:
xe host-management-reconfigure pif-uuid=<728d9e7f-62ed-a477-2c71-3974d75972eb>
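Because the whole re-ordering sequence must run on the host console with management access disabled, it helps to prepare it in advance. The dry-run sketch below only prints the commands; the UUIDs, MACs, and host UUID are the placeholders and example values from this section, not live data:

```shell
#!/bin/sh
# Dry-run sketch of the re-ordering steps above, swapping eth0 and eth1 on a
# standalone host. The xe() stub prints each command instead of executing it.
xe() { printf 'xe %s\n' "$*"; }

swap_nic_order() {
  xe host-management-disable                      # run on the host console
  xe pif-forget uuid='<old_eth0_pif_uuid>'
  xe pif-forget uuid='<old_eth1_pif_uuid>'
  xe pif-introduce device=eth0 host-uuid='<host_uuid>' mac=00:19:bb:2d:7e:7a
  xe pif-introduce device=eth1 host-uuid='<host_uuid>' mac=00:19:bb:2d:7e:8a
  xe pif-reconfigure-ip uuid='<new_management_pif_uuid>' mode=dhcp
  xe host-management-reconfigure pif-uuid='<new_management_pif_uuid>'
}
swap_nic_order
```

The final host-management-reconfigure call is what restores external management connectivity, so it must be the last step.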
Some models of network cards require firmware upgrades from the vendor to work reliably under load, or when certain
optimizations are turned on. If you are seeing corrupted traffic to VMs, then you should first try to obtain the latest
recommended firmware from your vendor and apply a BIOS update.
If the problem still persists, then you can use the CLI to disable receive / transmit offload optimizations on the physical
interface.
Warning
Disabling receive / transmit offload optimizations can result in a performance loss and / or increased CPU usage.
First, determine the UUID of the physical interface. You can filter on the device field as follows:
xe pif-list device=eth0
Next, set the following parameter on the PIF to disable TX offload:
xe pif-param-set uuid=<pif_uuid> other-config:ethtool-tx=off
Finally, re-plug the PIF or reboot the host for the change to take effect.
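One way to re-plug the PIF without a reboot is the pif-unplug / pif-plug pair, sketched below as a dry run. The `xe()` stub only prints the commands and `<pif_uuid>` is a placeholder; note that re-plugging the management PIF will briefly interrupt connectivity:

```shell
#!/bin/sh
# Dry-run sketch: disable TX offload on a PIF and re-plug it so the change
# takes effect. The xe() stub prints each command instead of executing it.
xe() { printf 'xe %s\n' "$*"; }

disable_tx_offload() {
  xe pif-list device=eth0                                   # find the PIF UUID
  xe pif-param-set uuid='<pif_uuid>' other-config:ethtool-tx=off
  xe pif-unplug uuid='<pif_uuid>'                           # re-plug the PIF for
  xe pif-plug uuid='<pif_uuid>'                             # the change to apply
}
disable_tx_offload
```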
To help you perform capacity planning, Workload Balancing provides historical reports about host and pool health,
optimization and virtual-machine performance, and virtual-machine motion history.
Because one data collector can monitor multiple resource pools, you do not need multiple data collectors to monitor
multiple pools.
The following table shows the advantages and disadvantages of a single-server deployment:
Advantages
Disadvantages
Deploying the data store on a dedicated server. If you deploy SQL Server on a dedicated server (instead of
collocating it on the same computer as the other Workload Balancing services), you can let it use more
memory.
Size
Example
Small
Medium
Two resource pools with 6 hosts and 8 virtual machines per pool
Large
Five resource pools with 16 hosts and 64 virtual machines per pool
Having multiple servers for Workload Balancing's services may be necessary in large environments. For example, having
multiple servers may reduce "bottlenecks." If you decide to deploy Workload Balancing's services on multiple computers,
all servers must be members of mutually trusted Active Directory domains.
Advantages
Disadvantages
All data collectors collect data from their own resource pools. One data collector, referred to as the master, also does the
following:
Checks for configuration changes and determines the relationships between resource pools and data collectors
Checks for new XenServer resource pools to monitor and assigns these pools to a data collector
Monitors the health of the other data collectors
If a data collector goes offline or you add a new resource pool, the master data collector rebalances the workload across
the data collectors. If the master data collector goes offline, another data collector assumes the role of the master.
5.2.4.3. Considering Large Environments
In large environments, consider the following:
When you install Workload Balancing on SQL Server Express, Workload Balancing limits the size of the metrics
data to 3.5GB. If the data grows beyond this size, Workload Balancing starts grooming the data, deleting
older data, automatically.
Citrix recommends putting the data store on one computer and the Workload Balancing services on another
computer.
For Workload Balancing data-store operations, memory utilization is the largest consideration.
Important
Citrix does not recommend changing the privileges or accounts under which the Workload Balancing services run.
5.2.5.1. Encryption Requirements
XenServer communicates with Workload Balancing using HTTPS. Consequently, you must create or install an SSL/TLS
certificate when you install Workload Balancing (or the Web Services Host, if it is on a separate server). You can either
use a certificate from a Trusted Authority or create a self-signed certificate using Workload Balancing Setup.
The self-signed certificate Workload Balancing Setup creates is not from a Trusted Authority. If you do not want to use this
self-signed certificate, prepare a certificate before you begin Setup and specify that certificate when prompted.
If desired, during Workload Balancing Setup, you can export the certificate so that you can import it into XenServer
after Setup.
Note
If you create a self-signed certificate during Workload Balancing Setup, Citrix recommends that you eventually replace
this certificate with one from a Trusted Authority.
5.2.5.2. Domain Considerations
When deploying Workload Balancing, your environment determines your domain and security requirements.
If your Workload Balancing services are on multiple computers, the computers must be part of a domain.
If your Workload Balancing components are in separate domains, you must configure trust relationships between
those domains.
5.2.5.3. SQL Server Authentication Requirements
When you install SQL Server or SQL Server Express, you must configure Windows authentication (also known as
Integrated Windows Authentication). Workload Balancing does not support SQL Server Authentication.
Typically, you install and configure Workload Balancing after you have created one or more XenServer resource pools
in your environment.
You install all Workload Balancing functions, such as the Workload Balancing data store, the Analysis Engine, and the
Web Service Host, from Setup.
You can install Workload Balancing in one of two ways:
Installation Wizard. Start the installation wizard from Setup.exe. Citrix suggests installing Workload Balancing
from the installation wizard because this method checks that your system meets the installation requirements.
Command Line. If you install Workload Balancing from the command line, the prerequisites are not checked.
For Msiexec properties, see Section 5.4, Windows Installer Commands for Workload Balancing.
When you install the Workload Balancing data store, Setup creates the database. You do not need to run Workload
Balancing Setup locally on the database server: Setup supports installing the data store across a network.
If you are installing Workload Balancing services as components on separate computers, you must install the database
component before the Workload Balancing services.
After installation, you must configure Workload Balancing before you can use it to optimize workloads. For information,
see Section 5.5, Initializing and Configuring Workload Balancing.
For information about system requirements, see Section 5.3.1, Workload Balancing System Requirements. For
installation instructions, see Section 5.3.5, Installing Workload Balancing.
When all Workload Balancing services are installed on the same server, Citrix recommends that the server have a
minimum of a dual-core processor.
5.3.1.4. Data Collection Manager
Operating System
Components
Hard Drive
1GB
Operating System
Components
Operating System
Components
Note
In this topic, the term SQL Server refers to both SQL Server and SQL Server Express unless the version is mentioned
explicitly.
Operating System. One of the following, as required by your SQL Server edition:
Windows Server 2008
Windows Server 2003, Service Pack 1 or higher
Windows Vista and Windows XP Professional (for SQL Server Express)
Database
Hard Drive
While some SQL Server editions may include the Backward Compatibility components with their installation
programs, their Setup program might not install them by default.
You can also obtain the Backward Compatibility components from the download page for the latest Microsoft
SQL Server 2008 Feature Pack.
Install the files in the sql folder in the following order:
SQLServer2005_BC.msi. Installs the SQL Server 2005 Backward Compatibility Components for 32-bit
computers.
Note
In configurations where the database and Web server are installed on separate servers, the operating system languages
must match on both computers.
Important
When you create this account in Windows, Citrix suggests enabling the Password never
expires option.
During Setup, you must specify the authorization type (a single user or group) and the user or group with
permissions to make requests of the Web Service Host service. For additional information, see Section 5.5.4,
Authorization for Workload Balancing.
SSL/TLS Certificate. XenServer and Workload Balancing communicate over HTTPS. Consequently, during
Workload Balancing Setup, you must provide either an SSL/TLS certificate from a Trusted Authority or
create a self-signed certificate.
Group Policy. If the server on which you are installing Workload Balancing is a member of a Group Policy
Organizational Unit, ensure that current or scheduled future policies do not prohibit Workload Balancing or
its services from running.
Note
In addition, review the applicable release notes for release-specific configuration information.
1. Install a SQL Server or SQL Server Express database as described in Workload Balancing Data Store
Requirements.
2. Have a login on the SQL Server database instance that has SQL Login creation privileges. For SQL Server
Authentication, the account needs sysadmin privileges.
3. Create an account for Workload Balancing, as described in Preinstallation Considerations, and have its name on
hand.
4. Configure all Workload Balancing servers to meet the system requirements described in Workload Balancing
System Requirements.
After Setup finishes installing Workload Balancing, you must configure Workload Balancing before it begins
gathering data and making recommendations.
5.3.5.1. To install Workload Balancing on a single server
The following procedure installs Workload Balancing and all of its services on one computer:
1. Launch the Workload Balancing Setup wizard from Autorun.exe, and select the Workload Balancing
installation option.
2. After the initial Welcome page appears, click Next.
3. In the Setup Type page, select Workload Balancing Services and Data Store, and click Next. This option lets
you install Workload Balancing, including the Web Services Host, Analysis Engine, and Data Collection Manager
services. After you click Next, Workload Balancing Setup verifies that your system has the correct prerequisites.
4. Accept the End-User License Agreement, and click Next.
5. In the Component Selection page, select the components you want to install:
Database. Creates and configures a database for the Workload Balancing data store.
Services.
Data Collection Manager. Installs the Data Collection Manager service, which collects data from the
virtual machines and their hosts and writes this data to the data store.
Analysis Engine. Installs the Analysis Engine service, which monitors resource pools and recommends
optimizations by evaluating the performance metrics the data collector gathered.
Web Service Host. Installs the service for the Web Service Host, which facilitates communications
between XenServer and the Analysis Engine. If you enable the Web Service Host component, Setup
prompts you for a security certificate. You can either use the self-signed certificate Workload Balancing
Setup provides or specify a certificate from a Trusted Authority.
6. In the Database Server page, in the SQL Server Selection section, select one of the following:
Enter the name of a database server. Lets you type the name of the database server that will host
the data store. Use this option to specify an instance name.
Note
If you installed SQL Express and specified an instance name, append the server name
with \yourinstancename. If you installed SQL Express without specifying an instance
name, append the server name with \sqlexpress.
Choose an existing database server. Lets you select the database server from a list of servers
Workload Balancing Setup detected on your network. Use the first option (Enter the name of a
database server) if you specified an instance name.
7. In the Install Using section, select one of the following methods of authentication:
Windows Authentication. This option uses your current credentials (that is, the Windows credentials
you used to log on to the computer on which you are installing Workload Balancing). To select this
option, your current Windows credentials must have been added as a login to the SQL Server
database server (instance).
SQL Server Authentication. To select this option, you must have configured SQL Server to support
Mixed Mode authentication.
Note
Citrix recommends clicking Test Connect to ensure Setup can use the credentials you
provided to contact the database server.
8. In the Database Information page, select Install a new Workload Balancing data store and type the name
you want to assign to the Workload Balancing database in SQL Server. The default database name
is WorkloadBalancing.
9. In the Web Service Host Account Information page, select HTTPS end point (selected by default). Edit the
port number, if necessary; the port is set to 8012 by default.
Note
If you are using Workload Balancing with XenServer, you must select HTTPS end points. XenServer can
only communicate with the Workload Balancing feature over SSL/TLS. If you change the port here, you
must also change it on XenServer using either the Configure Workload Balancing wizard or the xe
commands.
10. For the account (on the Workload Balancing server) that XenServer will use to connect to Workload
Balancing, select the authorization type, User or Group, and type one of the following:
User name. Enter the name of the account you created for XenServer (for example,
workloadbalancing_user).
Group name. Enter the group name for the account you created. Specifying a group name lets you
specify a group of users that have been granted permission to connect to the Web Service Host on the
Workload Balancing server. Specifying a group name lets more than one person in your organization
log on to Workload Balancing with their own credentials. (Otherwise, you will need to provide all users
with the same set of credentials to use for Workload Balancing.)
Specifying the authorization type lets Workload Balancing recognize the XenServer connection. For more
information, see Section 5.5.4, Authorization for Workload Balancing. You do not specify the password until you
configure Workload Balancing.
11. In the SSL/TLS Certificate page, select one of the following certificate options:
Select existing certificate from a Trusted Authority. Specifies a certificate you generated from a
Trusted Authority before Setup. Click Browse to navigate to the certificate.
Create a self-signed certificate with subject name. Setup creates a self-signed certificate for the
Workload Balancing server. Delete the certificate-chain text and enter a subject name.
Export this certificate for import into the certificate store on XenServer. If you want to import
the certificate into the Trusted Root Certification Authorities store on the computer running XenServer,
select this check box. Enter the full path and file name where you want the certificate saved.
12. Click Install.
1. From any server with network access to the database, launch the Workload Balancing Setup wizard from
Autorun.exe, and select the Workload Balancing installation option.
2. After the initial Welcome page appears, click Next.
3. In the Setup Type page, select Workload Balancing Database Only, and click Next. This option lets you install
the Workload Balancing data store only. After you click Next, Workload Balancing Setup verifies that your system
has the correct prerequisites.
4. Accept the End-User License Agreement, and click Next.
5. In the Component Selection page, accept the default installation and click Next. This option creates and
configures a database for the Workload Balancing data store.
6. In the Database Server page, in the SQL Server Selection section, select one of the following:
Enter the name of a database server. Lets you type the name of the database server that will host
the data store. Use this option to specify an instance name.
Note
If you installed SQL Express and specified an instance name, append the server name
with \yourinstancename. If you installed SQL Express without specifying an instance
name, append the server name with \sqlexpress.
Choose an existing database server. Lets you select the database server from a list of servers
Workload Balancing Setup detected on your network.
7. In the Install Using section, select one of the following methods of authentication:
Windows Authentication. This option uses your current credentials (that is, the Windows credentials
you used to log on to the computer on which you are installing Workload Balancing). To select this
option, your current Windows credentials must have been added as a login to the SQL Server
database server (instance).
SQL Server Authentication. To select this option, you must have configured SQL Server to support
Mixed Mode authentication.
Note
Citrix recommends clicking Test Connect to ensure Setup can use the credentials you
provided to contact the database server.
8. In the Database Information page, select Install a new Workload Balancing data store and type the name
you want to assign to the Workload Balancing database in SQL Server. The default database name
is WorkloadBalancing.
9. Click Install.
1. Launch the Workload Balancing Setup wizard from Autorun.exe, and select
the Workload Balancing installation option.
2. After the initial Welcome page appears, click Next.
3. In the Setup Type page, select Workload Balancing Server Services and Database. This option lets you install
Workload Balancing, including the Web Services Host, Analysis Engine, and Data Collection Manager
services. Workload Balancing Setup verifies that your system has the correct prerequisites.
4. Accept the End-User License Agreement, and click Next.
5. In the Component Selection page, select the services you want to install:
Services.
Data Collection Manager. Installs the Data Collection Manager service, which collects data from the
virtual machines and their hosts and writes this data to the data store.
Analysis Engine. Installs the Analysis Engine service, which monitors resource pools and recommends
optimizations by evaluating the performance metrics the data collector gathered.
Web Service Host. Installs the service for the Web Service Host, which facilitates communications
between XenServer and the Analysis Engine. If you enable the Web Service Host component, Setup
prompts you for a security certificate. You can either use the self-signed certificate Workload Balancing
Setup provides or specify a certificate from a Trusted Authority.
6. In the Database Server page, in the SQL Server Selection section, select one of the following:
Enter the name of a database server. Lets you type the name of the database server that is hosting
the data store.
Note
If you installed SQL Express and specified an instance name, append the server name
with \yourinstancename. If you installed SQL Express without specifying an instance
name, append the server name with \sqlexpress.
Choose an existing database server. Lets you select the database server from a list of servers
Workload Balancing Setup detected on your network.
Note
Citrix recommends clicking Test Connect to ensure Setup can use the credentials you provided to
contact the database server successfully.
7. In the Web Service Information page, select HTTPS end point (selected by default) and edit the port
number, if necessary. The port is set to 8012 by default.
Note
If you are using Workload Balancing with XenServer, you must select HTTPS end points. XenServer can
only communicate with the Workload Balancing feature over SSL/TLS. If you change the port here, you
must also change it on XenServer using either the Configure Workload Balancing wizard or the xe
commands.
8. For the account (on the Workload Balancing server) that XenServer will use to connect to Workload
Balancing, select the authorization type, User or Group, and type one of the following:
User name. Enter the name of the account you created for XenServer (for
example, workloadbalancing_user).
Group name. Enter the group name for the account you created. Specifying a group name lets
more than one person in your organization log on to Workload Balancing with their own credentials.
(Otherwise, you will need to provide all users with the same set of credentials to use for Workload
Balancing.)
Specifying the authorization type lets Workload Balancing recognize the XenServer connection. For
more information, see Section 5.5.4, Authorization for Workload Balancing. You do not specify the
password until you configure Workload Balancing.
9. In the SSL/TLS Certificate page, select one of the following certificate options:
Select existing certificate from a Trusted Authority. Specifies a certificate you generated from a
Trusted Authority before Setup. Click Browse to navigate to the certificate.
Create a self-signed certificate with subject name. Setup creates a self-signed certificate for the
Workload Balancing server. To change the name of the certificate Setup creates, type a different
name.
Export this certificate for import into the certificate store on XenServer. If you want to import
the certificate into the Trusted Root Certification Authorities store on the computer running XenServer,
select this check box. Enter the full path and file name where you want the certificate saved.
10. Click Install.
Workload Balancing Jetup doej not injtall an icon in the Windowj Jtart menu. Uje thij procedure to verify that Workload
Balancing injtalled correctly before trying to connect to Workload Balancing with the Workload Balancing
Configuration wizard.
1. Verify Windows Add or Remove Programs (Windows XP) lists Citrix Workload Balancing in its list of
currently installed programs.
2. Check for the following services in the Windows Services panel:
All of these services must be started and running before you start configuring Workload Balancing.
3. If Workload Balancing appears to be missing, check the installation log to see if it installed successfully:
If you used the Setup wizard, the log is at %Documents and Settings%\username\Local
Settings\Temp\msibootstrapper2CSM_MSI_Install.log (by default). On Windows Vista and Windows
Server 2008, this log is at %Users
%\username\AppData\Local\Temp\msibootstrapper2CSM_MSI_Install.log. Username is the name of
the user logged on during installation.
If you used the Setup properties (Msiexec), the log is at C:\log.txt (by default) or wherever you
specified for Setup to create it.
msiexec.exe /I C:\pathtomsi\workloadbalancingx64.msi /quiet
PREREQUISITES_PASSED="1"
DBNAME="WorkloadBalancing1"
DATABASESERVER="WLBDBSERVER\INSTANCENAME"
HTTPS_PORT="8012"
WEBSERVICE_USER_CB="0"
USERORGROUPACCOUNT="domain\WLBgroup"
CERT_CHOICE="0"
CERTNAMEPICKED="cn=wlbcert1"
EXPORTCERT=1
EXPORTCERT_FQFN="C:\Certificates\WLBCert.cer"
INSTALLDIR="C:\Program Files\Citrix\WLB"
ADDLOCAL="Database,Complete,Services,DataCollection,
Analysis_Engine,DWM_Web_Service" /l*v log.txt
There are two Workload Balancing Windows Installer packages: workloadbalancing.msi and workloadbalancingx64.msi.
If you are installing Workload Balancing on a 64-bit operating system, specify workloadbalancingx64.msi.
To see if Workload Balancing Setup succeeded, see Section 5.3.5.3.1, To verify your Workload Balancing installation.
Important
Workload Balancing Setup does not provide error messages if you are installing Workload Balancing using Windows
Installer commands and the system is missing prerequisites. Instead, installation fails.
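Because a command-line installation fails outright when prerequisites are missing, it can help to assemble and review the full Msiexec command line before running it. The following Python sketch builds such a command string from a property dictionary; the property names come from this section, while the specific values and the build_msiexec_cmd helper are illustrative assumptions, not part of the installer.

```python
# Assemble an unattended Workload Balancing install command line.
# Property names are from the Setup documentation; the values here
# are placeholders, not defaults.
def build_msiexec_cmd(msi_path, properties, log_path="log.txt"):
    parts = ["msiexec.exe", "/I", msi_path, "/quiet"]
    parts += ['{}="{}"'.format(k, v) for k, v in properties.items()]
    parts += ["/l*v", log_path]
    return " ".join(parts)

props = {
    "PREREQUISITES_PASSED": "1",  # required, or Setup fails
    "DATABASESERVER": r"WLBDBSERVER\INSTANCENAME",
    "DBNAME": "WorkloadBalancing1",
    "HTTPS_PORT": "8012",
    "WEBSERVICE_USER_CB": "0",
    "USERORGROUPACCOUNT": r"domain\WLBgroup",
}
cmd = build_msiexec_cmd(r"C:\pathtomsi\workloadbalancingx64.msi", props)
print(cmd)
```

Reviewing the printed command before running it makes it easier to spot a missing required property such as PREREQUISITES_PASSED.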
5.4.1. ADDLOCAL
5.4.1.1. Definition
Specifies one or more Workload Balancing features to install. The values of ADDLOCAL are Workload Balancing
components and services.
5.4.1.2. Possible values
Database. Installs the Workload Balancing data store.
Complete. Installs all Workload Balancing features and components.
Services. Installs all Workload Balancing services, including the Data Collection Manager, the Analysis Engine,
and the Web Service Host service.
DataCollection. Installs the Data Collection Manager service.
Analysis_Engine. Installs the Analysis Engine service.
DWM_Web_Service. Installs the Web Service Host service.
5.4.1.3. Default value
Blank
5.4.1.4. Remarks
Separate entries by commas.
The values must be installed locally.
You must install the data store on a shared or dedicated server before installing other services.
You can only install services standalone, without installing the database simultaneously, if you have a Workload
Balancing data store installed and specify it in the installation script using DBNAME and DATABASESERVER.
See Section 5.4.5, DBNAME and Section 5.4.4, DATABASESERVER for more information.
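A quick way to catch a typo in an ADDLOCAL value before running Setup is to check each comma-separated entry against the documented feature names. A minimal Python sketch; the parse_addlocal helper is hypothetical, not part of the installer.

```python
# Documented ADDLOCAL feature names for Workload Balancing Setup.
VALID_FEATURES = {
    "Database", "Complete", "Services",
    "DataCollection", "Analysis_Engine", "DWM_Web_Service",
}

def parse_addlocal(value):
    """Split a comma-separated ADDLOCAL string and reject unknown names."""
    features = [f.strip() for f in value.split(",") if f.strip()]
    unknown = [f for f in features if f not in VALID_FEATURES]
    if unknown:
        raise ValueError("Unknown ADDLOCAL features: " + ", ".join(unknown))
    return features

print(parse_addlocal("Database,Services,DataCollection"))
```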
5.4.2. CERT_CHOICE
5.4.2.1. Definition
Specifies for Setup to either create a certificate or use an existing certificate.
5.4.2.2. Possible values
0. Specifies for Setup to create a new certificate.
1. Specifies an existing certificate.
5.4.2.3. Default value
1
5.4.2.4. Remarks
You must also specify CERTNAMEPICKED. See Section 5.4.3, CERTNAMEPICKED for more information.
5.4.3. CERTNAMEPICKED
5.4.3.1. Definition
Specifies the subject name when you use Setup to create a self-signed SSL/TLS certificate. Alternatively, this specifies an
existing certificate.
5.4.3.2. Possible values
cn. Use to specify the subject name of the certificate to use or create.
5.4.3.3. Example
cn=wlb-kirkwood, where wlb-kirkwood is the name you are specifying as the name of the certificate to create
or the certificate you want to select.
5.4.3.4. Default value
Blank.
5.4.3.5. Remarks
You must specify this parameter with the CERT_CHOICE parameter. See Section 5.4.2, CERT_CHOICE for more
information.
5.4.4. DATABASESERVER
5.4.4.1. Definition
Specifies the database, and its instance name, where you want to install the data store. You can also use this property to
specify an existing database that you want to use or upgrade.
5.4.4.2. Possible values
User defined.
Note
If you specified an instance name when you installed SQL Server or SQL Express, append the server name
with \yourinstancename. If you installed SQL Express without specifying an instance name, append the server
name with \sqlexpress.
5.4.4.3. Default value
Local
5.4.4.4. Example
5.4.5. DBNAME
5.4.5.1. Definition
The name of the Workload Balancing database that Setup will create or upgrade during installation.
5.4.6. DBUSERNAME
5.4.6.1. Definition
Specifies the user name for the Windows or SQL Server account you are using for database authentication during Setup.
5.4.6.2. Possible values
User defined.
5.4.6.3. Default value
Blank
5.4.6.4. Remarks
This property is used with WINDOWS_AUTH (see Section 5.4.16, WINDOWS_AUTH)
and DBPASSWORD (see Section 5.4.7, DBPASSWORD).
Because you specify the server name and instance using Section 5.4.4, DATABASESERVER, do not qualify
the user name.
5.4.7. DBPASSWORD
5.4.7.1. Definition
Specifies the password for the Windows or SQL Server account you are using for database authentication during Setup.
5.4.7.2. Possible values
User defined.
5.4.7.3. Default value
Blank.
5.4.7.4. Remarks
Use this property with the parameters documented in Section 5.4.16, WINDOWS_AUTH and Section 5.4.6,
DBUSERNAME.
5.4.8. EXPORTCERT
5.4.8.1. Definition
Set this value to export an SSL/TLS certificate from the server on which you are installing Workload Balancing. Exporting
the certificate lets you import it into the certificate stores of computers running XenServer.
5.4.8.2. Possible values
0. Does not export the certificate.
1. Exports the certificate and saves it to the location of your choice with the file name you specify using
EXPORTCERT_FQFN.
5.4.8.3. Default value
0
5.4.8.4. Remarks
Use with Section 5.4.9, EXPORTCERT_FQFN, which specifies the file name and path.
Setup does not require this property to run successfully. (That is, you do not have to export the certificate.)
This property lets you export self-signed certificates that you create during Setup as well as certificates that you
created using a Trusted Authority.
5.4.9. EXPORTCERT_FQFN
5.4.9.1. Definition
Set to specify the path (location) and the file name you want Setup to use when exporting the certificate.
5.4.9.2. Possible values
The fully qualified path and file name to which to export the certificate. For example, C:\Certificates\WLBCert.cer.
5.4.10. HTTPS_PORT
5.4.10.1. Definition
Use this property to change the default port over which Workload Balancing (the Web Service Host service)
communicates with XenServer.
Specify this property when you are running Setup on the computer that will host the Web Service Host service. This may
be either the Workload Balancing computer, in a one-server deployment, or the computer hosting the services.
5.4.10.2. Possible values
User defined.
5.4.10.3. Default value
8012
5.4.10.4. Remarks
If you set a value other than the default for this property, you must also change the value of this port in
XenServer, which you can do with the Configure Workload Balancing wizard. The port number value
specified during Setup and in the Configure Workload Balancing wizard must match.
5.4.11. INSTALLDIR
5.4.11.1. Definition
Specifies the installation directory, that is, the location where the Workload Balancing software is installed.
5.4.11.2. Possible values
User configurable
5.4.11.3. Default value
C:\Program Files\Citrix
5.4.12. PREREQUISITES_PASSED
5.4.12.1. Definition
You must set this property for Setup to continue. When enabled (PREREQUISITES_PASSED = 1), Setup skips checking
preinstallation requirements, such as memory or operating system configurations, and lets you perform a command-line
installation of the server.
5.4.12.2. Possible values
1. Indicates for Setup to not check for preinstallation requirements on the computer on which you are running
Setup. You must set this property to 1 or Setup fails.
5.4.12.3. Default value
0
5.4.12.4. Remarks
This is a required value.
5.4.13. RECOVERYMODEL
5.4.13.1. Definition
Specifies the SQL Server database recovery model.
5.4.13.2. Possible values
SIMPLE. Specifies the SQL Server Simple Recovery model. Lets you recover the database from the end of any
backup. Requires the least administration and consumes the lowest amount of disk space.
FULL. Specifies the Full Recovery model. Lets you recover the database from any point in time. However, this
model consumes the largest amount of disk space for its logs.
BULK_LOGGED. Specifies the Bulk-Logged Recovery model. Lets you recover the database from the end of
any backup. This model consumes less logging space than the Full Recovery model. However, this model
provides more protection for data than the Simple Recovery model.
5.4.13.3. Default value
SIMPLE
5.4.13.4. Remarks
For more information about SQL Server recovery models, see Microsoft's MSDN Web site and search for "Selecting a
Recovery Model."
5.4.14. USERORGROUPACCOUNT
5.4.14.1. Definition
Specifies the account or group name that corresponds with the account XenServer will use when it connects to Workload
Balancing. Specifying the name lets Workload Balancing recognize the connection.
5.4.14.2. Possible values
User name. Specify the name of the account you created for XenServer (for
example, workloadbalancing_user).
Group name. Specify the group name for the account you created. Specifying a group name lets more than
one person in your organization log on to Workload Balancing with their own credentials. (Otherwise, you
will have to provide all users with the same set of credentials to use for Workload Balancing.)
5.4.14.3. Default value
Blank.
5.4.14.4. Remarks
This is a required parameter. You must use this parameter with Section 5.4.15, WEBSERVICE_USER_CB.
To specify this parameter, you must create an account on the Workload Balancing server before running Setup.
For more information, see Section 5.5.4, Authorization for Workload Balancing.
This property does not require specifying another property for the password. You do not specify the password
until you configure Workload Balancing.
5.4.15. WEBSERVICE_USER_CB
5.4.15.1. Definition
Specifies the authorization type, user account or group name, for the account you created for XenServer before Setup.
For more information, see Section 5.5.4, Authorization for Workload Balancing.
5.4.15.2. Possible values
0. Specifies that the value you supply with USERORGROUPACCOUNT corresponds with a group.
1. Specifies that the value you supply with USERORGROUPACCOUNT corresponds with a user
account.
5.4.15.3. Default value
0
5.4.15.4. Remarks
This is a required property. You must use this parameter with Section 5.4.14, USERORGROUPACCOUNT.
5.4.16. WINDOWS_AUTH
5.4.16.1. Definition
Lets you select the authentication mode, either Windows or SQL Server, when connecting to the database server during
Setup. For more information about database authentication during Setup, see SQL Server Database Authentication
Requirements.
5.4.16.2. Possible values
0. SQL Server authentication
1. Windows authentication
5.4.16.3. Default value
1
5.4.16.4. Remarks
If you are logged into the server on which you are installing Workload Balancing with Windows credentials that
have an account on the database server, you do not need to set this property.
If you specify WINDOWS_AUTH, you must also specify DBPASSWORD if you want to specify an account other
than the one with which you are logged onto the server on which you are running Setup.
The account you specify must be a login on the SQL Server database with sysadmin privileges.
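The interaction of WINDOWS_AUTH, DBUSERNAME, and DBPASSWORD can be pre-checked before invoking Setup. The sketch below encodes the rule that SQL Server authentication (WINDOWS_AUTH="0") requires both a user name and a password; the check_db_auth helper is an illustrative assumption, not installer code.

```python
# Check database-authentication properties for an unattended install.
# With SQL Server authentication (WINDOWS_AUTH="0"), DBUSERNAME and
# DBPASSWORD must both be supplied; with Windows authentication they
# are optional.
def check_db_auth(props):
    """Return the list of missing property names (empty if consistent)."""
    if props.get("WINDOWS_AUTH", "1") == "0":
        return [k for k in ("DBUSERNAME", "DBPASSWORD") if not props.get(k)]
    return []

print(check_db_auth({"WINDOWS_AUTH": "0", "DBUSERNAME": "sa"}))
```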
Important
Following initial configuration, Citrix strongly recommends you evaluate your performance thresholds as described
in Section 5.9.3.1, Evaluating the Effectiveness of Your Optimization Thresholds. It is critical to set Workload Balancing
to the correct thresholds for your environment or its recommendations might not be appropriate.
You can use the Configure Workload Balancing wizard in XenCenter or the XE commands to initialize Workload
Balancing or modify the configuration settings.
1. Specify the Workload Balancing server you want the resource pool to use and its port number.
2. For more information, see Section 5.5.4, Authorization for Workload Balancing.
3. Change the optimization mode, if desired, from Maximum Performance, the default setting, to Maximize
Density. For information about the placement strategies, see Section 5.5.6, Changing the Placement Strategy.
4. Modify performance thresholds, if desired. You can modify the default utilization values and the critical
thresholds for resources. For information about the performance thresholds, see Section 5.5.7, Changing the
Performance Thresholds and Metric Weighting.
5. Modify metric weighting, if desired. You can modify the importance Workload Balancing assigns to metrics
when it evaluates resource usage. For information about metric weighting, see Section 5.5.7.2, Metric
Weighting Factors.
a. In the WLB server name box, type the IP address or NetBIOS name of the Workload Balancing
server. You can also enter a fully qualified domain name (FQDN).
b. (Optional.) Edit the port number if you want XenServer to connect to Workload Balancing using a
different port. Entering a new port number here sets a different communications port on the Workload
Balancing server. By default, XenServer connects to Workload Balancing (specifically the Web Service Host
service) on port 8012.
Note
Do not edit this port number unless you have changed it during Workload Balancing Setup. The
port number value specified during Setup and in the Configure Workload Balancing wizard
must match.
c. Enter the user name (for example, workloadbalancing_user) and password the computers running
XenServer will use to connect to the Workload Balancing server. This must be the account or group that was
configured during the installation of the Workload Balancing server. For information, see Section 5.5.4,
Authorization for Workload Balancing.
d. Enter the user name and password for the pool you are configuring (typically the password for the pool
master). Workload Balancing will use these credentials to connect to the computers running XenServer in
that pool. To use the credentials with which you are currently logged into XenServer, select the Use the
current XenCenter credentials check box.
If you want to allow placement recommendations that allow more virtual CPUs than a host's physical
CPUs, select the Overcommit CPU check box. For example, by default, if your resource pool has
eight physical CPUs and you have eight virtual machines, XenServer only lets you have one virtual
CPU for each physical CPU. Unless you select Overcommit CPU, XenServer will not let you add a
ninth virtual machine. In general, Citrix does not recommend enabling this option since it can degrade
performance.
If you want to change the number of weeks this historical data should be stored for this resource pool,
type a new value in the Weeks box. This option is not available if the data store is on SQL Server
Express.
If you want to modify advanced settings for thresholds and change the priority given to specific
resources, click Next and continue with this procedure. If you do not want to configure additional
settings, click Finish.
In the Critical Thresholds page, accept or enter a new value in the Critical Thresholds boxes. Workload Balancing
uses these thresholds when making virtual-machine placement and pool-optimization recommendations. Workload
Balancing strives to keep resource utilization on a host below the critical values set. For information about adjusting
these thresholds, see Critical Thresholds.
In the Metric Weighting page, if desired, adjust the sliders beside the individual resources. Moving the slider
towards Less Important indicates that ensuring virtual machines always have the highest amount of this resource
available is not as vital on this resource pool. For information about adjusting metric weighting, see Metric
Weighting Factors.
Click Finish.
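The Overcommit CPU behavior described above reduces to simple arithmetic: without overcommit, the pool's total virtual CPUs may not exceed its physical CPUs. A minimal sketch of that rule follows; it is illustrative only, since XenServer enforces this check itself.

```python
# Without Overcommit CPU, total assigned virtual CPUs may not exceed
# the host's physical CPUs; with it, the cap is lifted.
def can_add_vm(physical_cpus, assigned_vcpus, new_vm_vcpus, overcommit=False):
    """Return True if a VM with new_vm_vcpus virtual CPUs may be added."""
    if overcommit:
        return True
    return assigned_vcpus + new_vm_vcpus <= physical_cpus

# Eight physical CPUs, eight 1-vCPU virtual machines already placed:
print(can_add_vm(8, 8, 1))                   # ninth VM is refused
print(can_add_vm(8, 8, 1, overcommit=True))  # allowed with Overcommit CPU
```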
If you want to allow placement recommendations that allow more virtual CPUs than a host's physical
CPUs, select the Overcommit CPU check box. For example, by default, if your resource pool has
eight physical CPUs and you have eight virtual machines, XenServer only lets you have one virtual
CPU for each physical CPU. Unless you select Overcommit CPU, XenServer will not let you add a
ninth virtual machine. In general, Citrix does not recommend enabling this option since it can degrade
performance.
If you want to change the number of weeks this historical data should be stored for this resource pool,
type a new value in the Weeks box. This option is not available if the data store is on SQL Server
Express.
6. If you want to modify advanced settings for thresholds and change the priority given to specific
resources, click Next and continue with this procedure. If you do not want to configure additional
settings, click Finish.
7. In the Critical Thresholds page, accept or enter a new value in the Critical Thresholds boxes. Workload
Balancing uses these thresholds when making virtual-machine placement and pool-optimization recommendations.
Workload Balancing strives to keep resource utilization on a host below the critical values set. For information
about adjusting these thresholds, see Section 5.5.7.1, Critical Thresholds.
8. In the Metric Weighting page, if desired, adjust the sliders beside the individual resources. Moving the slider
towards Less Important indicates that ensuring virtual machines always have the highest amount of this resource
available is not as vital on this resource pool. For information about adjusting metric weighting, see Section 5.5.7.2,
Metric Weighting Factors.
9. Click Finish.
When you are configuring a XenServer resource pool to use Workload Balancing, you must specify credentials for two
accounts:
User Account for Workload Balancing to Connect to XenServer. Workload Balancing uses a XenServer
user account to connect to XenServer. You provide Workload Balancing with this account's credentials when
you run the Configure Workload Balancing wizard. Typically, you specify the credentials for the pool
(that is, the pool master's credentials).
User Account for XenServer to Connect to Workload Balancing. XenServer communicates with the Web
Service Host using the user account you created before Setup. During Workload Balancing Setup, you
specified the authorization type (a single user or group) and the user or group with permissions to make
requests from the Web Service Host service. During configuration, you must provide XenServer with this
account's credentials when you run the Configure Workload Balancing wizard.
Note
These paths and file names are for 32-bit default installations. Use the values that apply to your installation. For example,
paths for 64-bit edition files might be in the %Program Files (x86)% folder.
Note
To prevent data from appearing artificially high, Workload Balancing evaluates the daily averages for a resource and
smooths utilization spikes.
5.5.7.1. Critical Thresholds
When evaluating utilization, Workload Balancing compares its daily average to four thresholds: low, medium, high, and
critical. After you specify (or accept the default) critical threshold, Workload Balancing sets the other thresholds relative to
the critical threshold on a pool.
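Since the Low, Medium, and High thresholds are derived from the Critical value you set, changing the Critical threshold shifts the whole range. The sketch below illustrates that relationship; the fractions used are assumptions for illustration only, not the proportions Workload Balancing actually applies.

```python
# Derive low/medium/high thresholds as fractions of the critical value.
# The fractions are illustrative assumptions, not documented Workload
# Balancing behavior; only the "relative to critical" relationship is.
def derive_thresholds(critical, fractions=(0.25, 0.50, 0.75)):
    low, medium, high = (critical * f for f in fractions)
    return {"low": low, "medium": medium, "high": high, "critical": critical}

print(derive_thresholds(90))
```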
5.5.7.2. Metric Weighting Factors
Workload Balancing lets you indicate whether a resource's utilization is significant enough to warrant or prevent relocating a
workload. For example, if you set memory as a Less Important factor in placement recommendations, Workload
Balancing may still recommend placing virtual machines you are relocating on a server with high memory utilization.
The effect of the weighting varies according to the placement strategy you selected. For example, if you
selected Maximum Performance and you set Network Writes towards Less Important, then even if the Network Writes on a
server exceed the critical threshold you set, Workload Balancing still makes a recommendation to place a virtual
machine's workload on that server, but does so with the goal of ensuring performance for the other resources.
If you selected Maximum Density as your placement recommendation and you specify Network Writes as Less
Important, Workload Balancing will still recommend placing workloads on that host if the Network Writes exceed the
critical threshold you set. However, the workloads are placed in the densest possible way.
5.5.7.3. Editing Resource Settings
For each resource pool, you can edit a resource's critical performance threshold and modify the importance or "weight"
Workload Balancing gives to a resource.
Citrix recommends using most of the defaults in the Configure Workload Balancing wizard initially. However, you
might need to change the network and disk thresholds to align them with the hardware in your environment.
After Workload Balancing is enabled for a while, Citrix recommends evaluating your performance thresholds and
determining if you need to edit them. For example, consider if you are:
Getting optimization recommendations when they are not yet required. If this is the case, try adjusting the
thresholds until Workload Balancing begins providing suitable optimization recommendations.
Not getting recommendations when you think your network has insufficient bandwidth. If this is the case, try
lowering the network critical thresholds until Workload Balancing begins providing optimization
recommendations.
Before you edit your thresholds, you might find it useful to generate a host health history report for each physical host in
the pool. See Section 5.9.6.1, Host Health History for more information.
Placement strategy you select (that is, the placement optimization mode), as described in Section 5.5.6,
Changing the Placement Strategy
Performance metrics for resources such as a physical host's CPU, memory, network, and disk utilization
The optimization recommendations display the name of the virtual machine that Workload Balancing recommends
relocating, the host it currently resides on, and the host Workload Balancing recommends as the machine's new location.
The optimization recommendations also display the reason Workload Balancing recommends moving the virtual
machine (for example, "CPU" to improve CPU utilization).
After you accept an optimization recommendation, XenServer relocates all virtual machines listed as recommended for
optimization.
Tip
You can find out the optimization mode for a resource pool by selecting the pool in XenCenter and checking
the Configuration section of the WLB tab.
1. In the Resources pane of XenCenter, select the resource pool for which you want to display recommendations.
2. In the Properties pane, click the WLB tab. If there are any recommended optimizations for any virtual
machines on the selected resource pool, they display on the WLB tab.
3. To accept the recommendations, click Apply Recommendations. XenServer begins moving all virtual
machines listed in the Optimization Recommendations section to their recommended servers. After you
click Apply Recommendations, XenCenter automatically displays the Logs tab so you can see the progress of the
virtual machine migration.
1. In the Resources pane of XenCenter, select the virtual machine you want to start.
2. From the VM menu, select Start on Server and then select one of the following:
Optimal Server. The optimal server is the physical host that is best suited to the resource demands of
the virtual machine you are starting. Workload Balancing determines the optimal server based on its
historical records of performance metrics and your placement strategy. The optimal server is the server
with the most stars.
One of the servers with star ratings listed under the Optimal Server command. Five stars indicates the
most-recommended (optimal) server and five empty stars indicates the least-recommended server.
1. In the Resources pane of XenCenter, select the suspended virtual machine you want to resume.
2. From the VM menu, select Resume on Server and then select one of the following:
Optimal Server. The optimal server is the physical host that is best suited to the resource demands of
the virtual machine you are resuming. Workload Balancing determines the optimal server based on its
historical records of performance metrics and your placement strategy. The optimal server is the server
with the most stars.
One of the servers with star ratings listed under the Optimal Server command. Five stars indicates the
most-recommended (optimal) server and five empty stars indicates the least-recommended server.
Note
When you take a server offline for maintenance and Workload Balancing is enabled, the words "Workload Balancing"
appear in the upper-right corner of the Enter Maintenance Mode dialog box.
1. In the Resources pane of XenCenter, select the physical host that you want to take offline. From
the Server menu, select Enter Maintenance Mode.
2. In the Enter Maintenance Mode dialog box, click Enter maintenance mode. The virtual machines running
on the server are automatically migrated to the optimal host based on Workload Balancing's performance data,
your placement strategy, and performance thresholds.
To take the server out of maintenance mode, right-click the server and select Exit Maintenance Mode. When you
remove a server from maintenance mode, XenServer automatically restores that server's original virtual machines to that
server.
5.9.1. Introduction
Workload Balancing provides reporting on three types of objects: physical hosts, resource pools, and virtual machines. At a
high level, Workload Balancing provides two types of reports:
Historical reports that display information by date
"Roll up" style reports
Workload Balancing provides some reports for auditing purposes, so you can determine, for example, the number of
times a virtual machine moved.
Section 5.9.6.5, Virtual Machine Motion History. Provides information about how many times virtual
machines moved on a resource pool, including the name of the virtual machine that moved, the number of
times it moved, and the physical hosts affected.
Section 5.9.6.6, Virtual Machine Performance History. Displays key performance metrics for all virtual
machines that operated on a host during the specified timeframe.
2. From the Workload Reports screen, select a report from the Select a Report list box.
3. Select the Start Date and the End Date for the reporting period. Depending on the report you select, you
might need to specify a host in the Host list box.
4. Click Run Report. The report displays in the report window.
2. Page Setup. Page Setup also lets you control the margins and paper size.
3.
4.
5. Print Layout.
6. Click Print.
5.9.5.3. Toolbar Buttons
The following toolbar buttons in the Workload Reports window become available after you generate a report. To display
the name of a toolbar button, hold your mouse over the toolbar icon.
Table 5.2. Report Toolbar Buttons
Document Map. Lets you display a document map that helps you navigate
through long reports.
Page Forward/Back. Lets you move one page ahead or back in the report.
Back to Parent Report. Lets you return to the parent report when working with
drill-through reports.
Stop Rendering. Cancels the report generation.
Refresh. Lets you refresh the report display.
Print. Lets you print a report and specify general printing options, such as the
printer, the number of pages, and the number of copies.
Print Layout. Lets you display a preview of the report before you print it.
Page Setup. Lets you specify printing options such as the paper size, page
orientation, and margins.
Export. Lets you export the report as an Acrobat (.PDF) file or as an Excel file
with a .XLS extension.
Find. Lets you search for a word in a report, such as the name of a virtual
machine.
This report displays the performance of resources (CPU, memory, network reads, and network writes) on a specific host in
relation to threshold values.
The colored lines (red, green, yellow) represent your threshold values. You can use this report with the Pool Health report
for a host to determine how a particular host's performance might be affecting overall pool health. When you are editing
the performance thresholds, you can use this report for insight into host performance.
You can display resource utilization as a daily or hourly average. The hourly average lets you see the busiest hours of the
day, averaged, for the time period.
To view report data grouped by hour, expand + Click to view report data grouped by hour for the time
period under the Host Health History title bar.
Workload Balancing displays the average for each hour for the time period you set. The data point is based on a
utilization average for that hour for all days in the time period. For example, in a report for May 1, 2009 to May 15,
2009, the Average CPU Usage data point represents the resource utilization of all fifteen days at 12:00 hours combined
together as an average. That is, if CPU utilization was 82% at 12PM on May 1st, 88% at 12PM on May 2nd, and 75%
on all other days, the average displayed for 12PM is 76.3%.
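You can verify that data point directly: the 12:00 value is simply the mean of the fifteen 12:00 samples.

```python
# Reproduce the 12PM data point from the example above: 82% on May 1st,
# 88% on May 2nd, and 75% on each of the remaining thirteen days.
samples = [82, 88] + [75] * 13
average = sum(samples) / len(samples)
print(round(average, 1))  # 76.3
```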
Note
Workload Balancing smooths spikes and peaks so data does not appear artificially high.
5.9.6.2. Optimization Performance History
The optimization performance report displays optimization events (that is, when you optimized a resource pool) against
that pool's average resource usage. Specifically, it displays resource usage for CPU, memory, network reads, and network
writes.
The dotted line represents the average usage across the pool over the period of days you select. A blue bar indicates the
day on which you optimized the pool.
This report can help you determine if Workload Balancing is working successfully in your environment. You can use this
report to see what led up to optimization events (that is, the resource usage before Workload Balancing recommended
optimizing).
This report displays average resource usage for the day; it does not display the peak utilization, such as when the system is
stressed. You can also use this report to see how a resource pool is performing if Workload Balancing is not making
optimization recommendations.
In general, resource usage should decline or be steady after an optimization event. If you do not see improved resource
usage after optimization, consider readjusting threshold values. Also, consider whether the resource pool has too
many virtual machines and whether new virtual machines were added or removed during the timeframe you
specified.
5.9.6.3. Pool Health
The pool health report displays the percentage of time a resource pool and its hosts spent in four different threshold
ranges: Critical, High, Medium, and Low. You can use the Pool Health report to evaluate the effectiveness of your
performance thresholds.
A few points about interpreting this report:
Resource utilization in the Average Medium Threshold (blue) is the optimum resource utilization regardless of
the placement strategy you selected. Likewise, the blue section on the pie chart indicates the amount of time
that host used resources optimally.
Resource utilization in the Average Low Threshold Percent (green) is not necessarily positive. Whether Low
resource utilization is positive depends on your placement strategy. For example, if your placement strategy is
Maximum Density and most of the time your resource usage was green, Workload Balancing might not be
fitting the maximum number of virtual machines possible on that host or pool. If this is the case, you should
adjust your performance threshold values until the majority of your resource utilization falls into the Average
Medium (blue) threshold range.
Resource utilization in the Average Critical Threshold Percent (red) indicates the amount of time average
resource utilization met or exceeded the Critical threshold value.
If you double-click on a pie chart for a host's resource usage, XenCenter displays the Host Health History report for that
resource (for example, CPU) on that host. Clicking the Back to Parent Report toolbar button returns you to the Pool
Health history report.
If you find the majority of your report results are not in the Average Medium Threshold range, you probably need to
adjust the Critical threshold for this pool. While Workload Balancing provides default threshold settings, these defaults are
not effective in all environments. If you do not have the thresholds adjusted to the correct level for your environment,
Workload Balancing's optimization and placement recommendations might not be appropriate. For more information,
see Section 5.5.7, Changing the Performance Thresholds and Metric Weighting.
Note
The High, Medium, and Low threjhold rangej are bajed on the Critical threjhold value you jet when you initialized
Workload Balancing.
5.9.6.4. Pool Health History
This report provides a line graph of resource utilization on all physical hosts in a pool over time. It lets you see the trend of
resource utilization - if it tends to be increasing in relation to your thresholds (Critical, High, Medium, and Low). You can
evaluate the effectiveness of your performance thresholds by monitoring trends of the data points in this report.
Workload Balancing extrapolates the threshold ranges from the values you set for the Critical thresholds when you
initialized Workload Balancing. Although similar to the Pool Health report, the Pool Health History report displays the
average utilization for a resource on a specific date rather than the amount of time overall the resource spent in a
threshold.
With the exception of the Average Free Memory graph, the data points should never average above the Critical
threshold line (red). For the Average Free Memory graph, the data points should never average below the Critical
threshold line (which is at the bottom of the graph). Because this graph displays free memory, the Critical threshold is a
low value, unlike the other resources.
A few points about interpreting this report:
When the Average Usage line in the chart approaches the Average Medium Threshold (blue) line, it indicates
the pool's resource utilization is optimum regardless of the placement strategy configured.
Resource utilization approaching the Average Low Threshold (green) is not necessarily positive. Whether Low
resource utilization is positive depends on your placement strategy. For example, if your placement strategy is
Maximum Density and most days the Average Usage line is at or below the green line, Workload Balancing
might not be placing virtual machines as densely as possible on that pool. If this is the case, you should adjust
the pool's Critical threshold values until the majority of its resource utilization falls into the Average Medium
(blue) threshold range.
When the Average Usage line intersects with the Average Critical Threshold Percent (red), this indicates the
days when the average resource utilization met or exceeded the Critical threshold value for that resource.
If you find the data points in the majority of your graphs are not in the Average Medium Threshold range, but you are
satisfied with the performance of this pool, you might need to adjust the Critical threshold for this pool. For more
information, see Section 5.5.7, Changing the Performance Thresholds and Metric Weighting.
5.9.6.5. Virtual Machine Motion History
This line graph displays the number of times virtual machines moved on a resource pool over a period of time. It indicates
if a move resulted from an optimization recommendation and to which host the virtual machine moved. This report also
indicates the reason for the optimization. You can use this report to audit the number of moves on a pool.
Some points about interpreting this report:
The numbers on the left side of the chart correspond with the number of moves possible, which is based on how
many virtual machines are in a resource pool.
You can look at details of the moves on a specific date by expanding the + sign in the Date section of the
report.
5.9.6.6. Virtual Machine Performance History
This report displays performance data for each virtual machine on a specific host for a time period you specify. Workload
Balancing bases the performance data on the amount of virtual resources allocated for the virtual machine. For
example, if the Average CPU Usage for your virtual machine is 67%, this means that your virtual machine was using, on
average, 67% of its virtual CPU for the period you specified.
The initial view of the report displays an average value for resource utilization over the period you specified.
Expanding the + sign displays line graphs for individual resources. You can use these graphs to see trends in resource
utilization over time.
This report displays data for CPU Usage, Free Memory, Network Reads/Writes, and Disk Reads/Writes.
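The "average over the period" figure described above is a simple mean of the interval samples. A minimal sketch of that arithmetic, using hypothetical sample values rather than real Workload Balancing data:

```python
# Minimal sketch: how an average-utilization figure such as "67% CPU" can be
# derived from periodic samples. The sample values below are hypothetical,
# not taken from a real Workload Balancing database.

def average_utilization(samples):
    """Return the mean of percentage samples, rounded to a whole percent."""
    return round(sum(samples) / len(samples))

cpu_samples = [55, 72, 80, 61]  # % of allocated virtual CPU per interval
print(average_utilization(cpu_samples))  # 67
```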
1.
In the Resource pane of XenCenter, select the resource pool for which you want to disable Workload
Balancing.
2.
In the WLB tab, click Disable WLB. A dialog box appears asking if you want to disable Workload Balancing for
the pool.
3.
Click Yes to disable Workload Balancing for the pool. Important: If you want to disable Workload Balancing
permanently for this resource pool, select the Remove all resource pool information from the Workload
Balancing Server check box.
XenServer disables Workload Balancing for the resource pool, either temporarily or permanently depending on your
selections.
If you disabled Workload Balancing temporarily on a resource pool, to reenable Workload Balancing,
click Enable WLB in the WLB tab.
If you disabled Workload Balancing permanently on a resource pool, to reenable it, you must reinitialize it. For
information, see To initialize Workload Balancing.
You must disable Workload Balancing permanently for that resource pool before pointing the pool to another data collector. After disabling Workload
Balancing, you can re-initialize the pool and specify the name of the new Workload Balancing server.
To use a different Workload Balancing server
1.
On the resource pool you want to point to a different Workload Balancing server, disable Workload
Balancing permanently. Doing so deletes the resource pool's information from the data store and stops
data collection. For instructions, see Section 5.10.1, Disabling Workload Balancing on a Resource Pool.
2.
In the Resource pane of XenCenter, select the resource pool for which you want to reenable Workload
Balancing.
3.
In the WLB tab, click Initialize WLB. The Configure Workload Balancing wizard appears.
4.
Reinitialize the resource pool and specify the new server's credentials in the Configure Workload Balancing
wizard. You must provide the same information as you do when you initially configure a resource pool for use with
Workload Balancing. For information, see Section 5.5.2, To initialize Workload Balancing.
Windows Server 2003 and Windows XP: %Documents and Settings%\All Users\Application
Data\Citrix\Workload Balancing\Data\LogFile.log
Tip
When troubleshooting installations using installation logs, note that the log file is overwritten each time you install. You might want
to manually copy the installation logs to a separate directory so that you can compare them.
For common installation and Msiexec errors, try searching the Citrix Knowledge Center and the Internet.
To verify that you installed Workload Balancing successfully, see Section 5.3.5.3.1, To verify your Workload Balancing
installation.
You can enter a computer name in the WLB server name box, but it must be a fully qualified domain name
(FQDN). For example, yourcomputername.yourdomain.net. If you are having trouble entering a
computer name, try using the Workload Balancing server's IP address instead.
On XenServer
Verifying the IP address or NetBIOS name of the Workload Balancing server you entered in the Configure
Workload Balancing wizard is correct.
Verifying the user or group name you entered during Setup matches the credentials you created on the
Workload Balancing server. To check what user or group name you entered, open the install log (search for
log.txt) and search for userorgroupaccount.
6.1. Backups
6.2. Full metadata backup and disaster recovery (DR)
6.2.1. DR and metadata backup overview
6.2.2. Backup and restore using xsconsole
6.2.3. Moving SRs between hosts and Pools
6.2.4. Using Portable SRs for Manual Multi-Site Disaster Recovery
6.3. VM Snapshots
6.3.1. Regular Snapshots
6.3.2. Quiesced Snapshots
6.3.3. Taking a VM snapshot
6.3.4. VM Rollback
6.4. Coping with machine failures
6.4.1. Member failures
6.4.2. Master failures
6.4.3. Pool failures
6.4.4. Coping with Failure due to Configuration Errors
6.4.5. Physical Machine failure
This chapter presents the functionality designed to give you the best chance to recover your XenServer from a
catastrophic failure of hardware or software, from lightweight metadata backups to full VM backups and portable SRs.
6.1. Backups
Citrix recommends that you frequently perform as many of the following backup procedures as possible to recover from
possible server and/or software failure.
To back up pool metadata
1.
Run the command:
xe pool-dump-database file-name=<backup>
2.
Run the command:
xe pool-restore-database file-name=<backup> dry-run=true
This command checks that the target machine has an appropriate number of appropriately named NICs, which is
required for the backup to succeed.
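The backup and dry-run check above can be driven from a script. The sketch below only constructs the xe argument lists; the file name "pool-backup" is illustrative, and actually running the commands requires the xe CLI on a XenServer host:

```python
import subprocess  # needed only if you choose to run the commands on a host

# Build (but do not run) the xe invocations for a pool metadata backup and a
# dry-run restore check, mirroring the two steps above.

def pool_dump_cmd(filename):
    return ["xe", "pool-dump-database", f"file-name={filename}"]

def pool_restore_dry_run_cmd(filename):
    return ["xe", "pool-restore-database", f"file-name={filename}", "dry-run=true"]

backup = "pool-backup"  # illustrative file name
print(pool_dump_cmd(backup))
# On a host: subprocess.run(pool_dump_cmd(backup), check=True)
```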
To back up host configuration and software
Run the command:
xe host-backup host=<host> file-name=<hostbackup>
Note
To back up a VM
1.
2.
xe vm-export vm=<vm_uuid> filename=<backup>
Note
This backup also backs up all of the VM's data. When importing a VM, you can specify the storage mechanism to use for
the backed-up data.
Warning
Because this process backs up all of the VM data, it can take some time to complete.
To back up VM metadata only
Run the command:
xe vm-export vm=<vm_uuid> filename=<backup> --metadata
The source and destination hosts must have the same CPU type and networking configuration. The destination
host must have a network of the same name as the one of the source host.
The SR media itself, such as a LUN for iSCSI and Fibre Channel SRs, must be able to be moved, re-mapped, or
replicated between the source and destination hosts
If using tiered storage, where a VM has VDIs on multiple SRs, all required SRs must be moved to the destination
host or pool
Any configuration data required to connect the SR on the destination host or pool, such as the target IP address,
target IQN, and LUN SCSI ID for iSCSI SRs, and the LUN SCSI ID for Fibre Channel SRs, must be maintained
manually
The backup metadata option must be configured for the desired SR
Note
When moving portable SRs between pools, the source and destination pools are not required to have the same number of
hosts. Moving portable SRs between pools and standalone hosts is also supported, provided the above constraints are met.
Portable SRs work by creating a dedicated metadata VDI within the specified SR. The metadata VDI is used to store
copies of the pool or host database as well as the metadata describing the configuration of each VM. As a result the SR
becomes fully self-contained, or portable, allowing it to be detached from one host and attached to another as a new SR.
Once the SR is attached, a restore process is used to recreate all of the VMs on the SR from the metadata VDI. For disaster
recovery, the metadata backup can be scheduled to run regularly to ensure the metadata SR is current.
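One way to run the metadata backup regularly, as suggested above, is a cron job in the control domain. This is a hypothetical sketch: the script path is the usual dom0 location but should be confirmed on your host, and any flags beyond -c (create the backup VDI, as documented below) should be checked with xe-backup-metadata -h.

```shell
# Hypothetical dom0 crontab entry: refresh the metadata backup nightly at 01:00.
# Verify the script path and flags with "xe-backup-metadata -h" before use.
0 1 * * *  /opt/xensource/bin/xe-backup-metadata -c
```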
The metadata backup and restore feature works at the command-line level and the same functionality is also supported
in xsconsole. It is not currently available through XenCenter.
Trigger an immediate metadata backup to the SR of your choice. This will create a backup VDI if necessary,
attach it to the host, and back up all the metadata to that SR. Use this option if you have made some
changes which you want to see reflected in the backup immediately.
Perform a metadata restoration operation. This will prompt you to choose an SR to restore from, and then the
option of restoring only VM records associated with that SR, or all the VM records found (potentially from
other SRs which were present at the time of the backup). There is also a dry run option to see which VMs
would be imported, but not actually perform the operation.
For automating this via scripting, there are some commands in the control domain which provide an interface to metadata
backup and restore at a lower level than the menu options:
xe-backup-metadata provides an interface to create the backup VDIs (with the -c flag), and also to attach
the metadata backup and examine its contents.
xe-restore-metadata can be used to probe for a backup VDI on a newly attached SR, and also selectively
reimport VM metadata to recreate the associations between VMs and their disks.
Full usage information for both scripts can be obtained by running them in the control domain using the -h flag. One
particularly useful invocation mode is xe-backup-metadata -d, which mounts the backup VDI into dom0 and drops
into a sub-shell with the backup directory so it can be examined.
2.
On the source host or pool, in xsconsole, select the Backup, Restore, and Update menu option, select
the Backup Virtual Machine Metadata option, and then select the desired SR.
3.
In XenCenter, select the source host or pool and shut down all running VMs with VDIs on the SR to be moved.
In the tree view select the SR to be moved and select Storage > Detach Storage Repository. The Detach
Storage Repository menu option will not be displayed if there are running VMs with VDIs on the selected SR.
After being detached the SR will be displayed in a grayed-out state.
Warning
Do not complete this step unless you have created a backup VDI in step 1.
4.
Select Storage > Forget Storage Repository to remove the SR record from the host or pool.
5.
Select the destination host in the tree view and select Storage > New Storage Repository.
6.
Create a new SR with the appropriate parameters required to reconnect the existing SR to the destination host.
In the case of moving a SR between pools or hosts within a site, the parameters may be identical to the source pool.
7.
Every time a new SR is created, the storage is checked to see if it contains an existing SR. If so, an option is
presented allowing re-attachment of the existing SR. If this option is not displayed, the parameters specified during
SR creation are not correct.
8.
Select Reattach.
9.
Select the new SR in the tree view and then select the Storage tab to view the existing VDIs present on the SR.
10.
In xsconsole on the destination host, select the Backup, Restore, and Update menu option, select the Restore
Virtual Machine Metadata option, and select the newly re-attached SR.
11.
The VDIs on the selected SR are inspected to find the metadata VDI. Once found, select the metadata backup
you want to use.
12.
Note
Use the All VM Metadata option when moving multiple SRs between hosts or pools, or when using tiered
storage where VMs to be restored have VDIs on multiple SRs. When using this option, ensure all required
SRs have been reattached to the destination host prior to running the restore.
13.
The VMs are restored in the destination pool in a shutdown state and are available for use.
Any storage layer configuration required to enable the mirror or replica LUN in the DR site is performed.
2.
3.
4.
Any adjustments to VM configuration required by differences in the DR site, such as IP addressing, are
performed.
5.
6.
6.3. VM Snapshots
XenServer provides a convenient snapshotting mechanism that can take a snapshot of a VM's storage and metadata at a
given time. Where necessary, IO is temporarily halted while the snapshot is being taken to ensure that a self-consistent disk
image can be captured.
Snapshot operations result in a snapshot VM that is similar to a template. The VM snapshot contains all the storage
information and VM configuration, including attached VIFs, allowing them to be exported and restored for backup
purposes.
The snapshotting operation is a 2-step process:
Capturing metadata as a template.
Creating a VDI snapshot of the disk(s).
Two types of VM snapshots are supported: regular and quiesced.
Note
Using EqualLogic or NetApp storage requires a Citrix Essentials for XenServer license. To learn more about Citrix
Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.
Note
Do not forget to install the Xen VSS provider in the Windows guest in order to support VSS. This is done using
the install-XenProvider.cmd script provided with the Windows PV drivers. More details can be found in
the Virtual Machine Installation Guide in the Windows section.
In general, a VM can only access VDI snapshots (not VDI clones) of itself using the VSS interface. There is a flag that can
be set by the XenServer administrator, by adding an attribute of snapmanager=true to the VM's other-config map, that allows that VM to import snapshots of VDIs from other VMs.
Warning
This opens a security vulnerability and should be used with care. This feature allows an administrator to attach VSS
snapshots, using an in-guest transportable snapshot ID as generated by the VSS layer, to another VM for the purposes of
backup.
VSS quiesce timeout: the Microsoft VSS quiesce period is set to a non-configurable value of 10 seconds, and it is quite
probable that a snapshot may not be able to complete in time. If, for example, the XAPI daemon has queued additional
blocking tasks such as an SR scan, the VSS snapshot may time out and fail. The operation should be retried if this happens.
Note
The more VBDs attached to a VM, the more likely it is that this timeout may be reached. Citrix recommends attaching
no more than 2 VBDs to a VM to avoid reaching the timeout. However, there is a workaround to this problem. The
probability of taking a successful VSS-based snapshot of a VM with more than 2 VBDs can be increased manifold if all the
VDIs for the VM are hosted on different SRs.
VSS snapshot all the disks attached to a VM: in order to store all data available at the time of a VSS snapshot, the XAPI
manager will snapshot all disks and the VM metadata associated with a VM that can be snapshotted using the XenServer
storage manager API. If the VSS layer requests a snapshot of only a subset of the disks, a full VM snapshot will not be taken.
vm-snapshot-with-quiesce produces bootable snapshot VM images: to achieve this end, the XenServer VSS hardware
provider makes snapshot volumes writable, including the snapshot of the boot volume.
VSS snap of volumes hosted on dynamic disks in the Windows Guest: the vm-snapshot-with-quiesce CLI and the
XenServer VSS hardware provider do not support snapshots of volumes hosted on dynamic disks on the Windows VM.
xe vm-snapshot vm=<vm_name> new-name-label=<vm_snapshot_name>
xe vm-snapshot-with-quiesce vm=<vm_name> new-name-label=<vm_snapshot_name>
6.3.4. VM Rollback
Note
Restoring a VM will not preserve the original VM UUID or MAC address.
1.
2.
3.
a.
xe vm-list
b.
xe vm-shutdown uuid=<vm_uuid>
c.
xe vm-destroy uuid=<vm_uuid>
4.
xe vm-install new-name-label=<vm_name_label> template=<template_name>
5.
xe vm-start name-label=<vm_name>
Shut down the host and instruct the master to forget about the member node using the xe host-forget CLI
command. Once the member has been forgotten, all the VMs which were running there will be marked as
offline and can be restarted on other XenServer hosts. Note it is very important to ensure that the XenServer
host is actually offline, otherwise VM data corruption might occur. Be careful not to split your pool into
multiple pools of a single host by using xe host-forget, since this could result in them all mapping the
same shared storage and corrupting VM data.
Warning
If you are going to use the forgotten host as a XenServer host again, perform a fresh
installation of the XenServer software.
When a member XenServer host fails, there may be VMs still registered in the running state. If you are sure that the
member XenServer host is definitely down, and that the VMs have not been brought up on another XenServer host in
the pool, use the xe vm-reset-powerstate CLI command to set the power state of the VMs to halted.
See Section 8.4.23.24, vm-reset-powerstate for more details.
Warning
Incorrect use of this command can lead to data corruption. Only use this command if absolutely necessary.
The members realize that communication has been lost and each tries to reconnect for sixty seconds.
Each member then puts itself into emergency mode, whereby the member XenServer hosts will now accept
only the pool-emergency commands (xe pool-emergency-reset-master and xe pool-emergency-transition-to-master).
If the master comes back up at this point, it re-establishes communication with its members, the members leave
emergency mode, and operation returns to normal.
If the master is really dead, choose one of the members and run the command xe pool-emergency-transition-to-master on it. Once it has become the master, run the command xe pool-recover-slaves and the members
will now point to the new master.
If you repair or replace the server that was the original master, you can simply bring it up, install the XenServer host
software, and add it to the pool. Since the XenServer hosts in the pool are enforced to be homogeneous, there is no real
need to make the replaced server the master.
When a member XenServer host is transitioned to being a master, you should also check that the default pool storage
repository is set to an appropriate value. This can be done using the xe pool-param-list command and verifying
that the default-SR parameter is pointing to a valid storage repository.
2.
For the host nominated as the master, restore the pool database from your backup using the xe pool-restore-database command (see Section 8.4.12.10, pool-restore-database).
3.
Connect to the master host using XenCenter and ensure that all your shared storage and VMs are available
again.
4.
Perform a pool join operation on the remaining freshly installed member hosts, and start up your VMs on the
appropriate hosts.
xe host-restore host=<host> file-name=<hostbackup>
2.
Warning
Any VMs which were running on a previous member (or the previous host) which has failed will still be marked
as Running in the database. This is for safety -- simultaneously starting a VM on two different hosts would lead to severe
disk corruption. If you are sure that the machines (and VMs) are offline you can reset the VM power state to Halted:
xe vm-reset-powerstate vm=<vm_uuid> --force
2.
xe pool-emergency-transition-to-master
3.
xe pool-recover-slaves
If the commands succeed, restart the VMs.
xe pool-restore-database file-name=<backup>
Warning
This command will only succeed if the target machine has an appropriate number of appropriately
named NICs.
2.
If the target machine has a different view of the storage (for example, a block-mirror with a different IP address)
than the original machine, modify the storage configuration using the pbd-destroy command and then
the pbd-create command to recreate storage configurations. See Section 8.4.10, PBD commands for
documentation of these commands.
3.
If you have created a new storage configuration, use pbd-plug or the Storage > Repair Storage
Repository menu item in XenCenter to use the new configuration.
4.
xe vm-import filename=<backup> --metadata
2.
xe vm-import filename=<backup> --metadata --force
This command will attempt to restore the VM metadata on a 'best effort' basis.
3.
Note
Full monitoring and alerting functionality is only available with a Citrix Essentials for XenServer license. To learn more
about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.
7.1. Alerts
XenServer generates alerts for the following events.
Configurable Alerts:
New XenServer patches available
New XenServer version available
New XenCenter version available
Alerts generated by XenCenter:
Alert
Description
XenCenter old
the XenServer expects a newer version but can still connect to the current
version
XenCenter out of
date
XenServer out of date XenServer is an old version that the current XenCenter cannot connect to
License expired alert your XenServer license has expired
Duplicate IQN alert XenServer uses iSCSI storage, and there are duplicate host IQNs
vm_shutdown
vm_started
vm_suspended
<config>
<variable>
<name value="cpu_usage"/>
<alarm_trigger_level value="LEVEL"/>
</variable>
<variable>
<name value="network_usage"/>
<alarm_trigger_level value="LEVEL"/>
</variable>
</config>
Valid VM Elements
name
what to call the variable (no default). If the name value is one of cpu_usage, network_usage,
or disk_usage, the rrd_regex and alarm_trigger_sense parameters are not required as defaults
for these values will be used.
alarm_priority
the priority of the messages generated (default 5)
alarm_trigger_level
level of value that triggers an alarm (no default)
alarm_trigger_sense
alarm_trigger_period
number of seconds that values above or below the alarm threshold can be received before an alarm is sent
(default 60)
alarm_auto_inhibit_period
number of seconds this alarm is disabled after an alarm is sent (default 3600)
consolidation_fn
how to combine variables from rrd_updates into one value (default sum - other choice is average)
rrd_regex
regular expression to match the names of variables returned by the xe vm-data-source-list
uuid=<vm uuid> command that should be used to compute the statistical value. This parameter has defaults for
the named variables cpu_usage and network_usage. If specified, the values of all items returned by xe
vm-data-source-list whose names match the specified regular expression will be consolidated using the
method specified as the consolidation_fn.
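The rrd_regex/consolidation_fn behavior described above can be sketched as follows. This is an illustration, not XenServer's implementation; the data-source names and values are hypothetical:

```python
import re

# Sketch of the consolidation described above: select the data sources whose
# names match rrd_regex, then combine their values with consolidation_fn
# (sum, the default, or average). Names and values here are hypothetical.

def consolidate(datasources, rrd_regex, fn="sum"):
    values = [v for name, v in datasources.items() if re.match(rrd_regex, name)]
    total = sum(values)
    return total if fn == "sum" else total / len(values)

ds = {"vif_0_tx": 10.0, "vif_0_rx": 4.0, "cpu0": 0.5}
print(consolidate(ds, r"vif_\d+_[rt]x"))             # 14.0
print(consolidate(ds, r"vif_\d+_[rt]x", "average"))  # 7.0
```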
pool:other-config:mail-destination=<joe.bloggs@domain.tld>
pool:other-config:ssmtp-mailhub=<smtp.domain.tld[:port]>
You can also specify the minimum value of the priority field in the message before the email will be sent:
pool:other-config:mail-min-priority=<level>
The default priority level is 5.
Note
Some SMTP servers only forward mails with addresses that use FQDNs. If you find that emails are not being forwarded, it
may be for this reason, in which case you can set the server hostname to the FQDN so this is used when connecting to
your mail server.
XenCenter supports the creation of tags and custom fields, which allows for organization and quick searching of VMs,
storage and so on. See the XenCenter online help for more information.
2.
3.
For each PBD and SR, list the VBDs that reference VDIs on the SR.
4.
For all active VBDs that are attached to VMs on the host, calculate the combined throughput.
For iSCSI and NFS storage, check your network statistics to determine if there is a throughput bottleneck at the array, or
whether the PBD is saturated.
rpm -ivh xe-cli-5.5.0-24648c.i386.rpm
Basic help is available for CLI commands on-host by typing:
xe help command
A list of the most commonly-used xe commands is displayed if you type:
xe help
or a list of all xe commands is displayed if you type:
xe help --all
xe vm-list
Example: On the remote XenServer host:
xe vm-list -user <username> -password <password> -server <hostname>
Shorthand syntax is also available for remote connection arguments:
-u
username
-pw
password
-pwf
password file
-p
port
-s
server
xe vm-list -u <myuser> -pw <mypassword> -s <hostname>
Arguments are also taken from the environment variable XE_EXTRA_ARGS, in the form of comma-separated
key/value pairs. For example, in order to enter commands on one XenServer host that are run on a remote XenServer
host, you could do the following:
export XE_EXTRA_ARGS="server=jeffbeck,port=443,username=root,password=pass"
and thereafter you would not need to specify the remote XenServer host parameters in each xe command you execute.
Using the XE_EXTRA_ARGS environment variable also enables tab completion of xe commands when issued against a
remote XenServer host, which is disabled by default.
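The XE_EXTRA_ARGS format is plain comma-separated key=value pairs, so a wrapper script can parse it easily. A minimal sketch (the example value is the one from the text above):

```python
# Sketch: parsing the XE_EXTRA_ARGS format (comma-separated key=value pairs)
# into a dict, as a client-side wrapper script might do before invoking xe.

def parse_extra_args(value):
    pairs = (item.split("=", 1) for item in value.split(",") if item)
    return {k: v for k, v in pairs}

env = "server=jeffbeck,port=443,username=root,password=pass"
print(parse_extra_args(env)["server"])  # jeffbeck
```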
xe vm-l
and then press the TAB key, the rest of the command will be displayed when it is unambiguous. If more than one
command begins with vm-l, hitting TAB a second time will list the possibilities. This is particularly useful when specifying
object UUIDs in commands.
Note
When executing commands on a remote XenServer host, tab completion does not normally work. However, if you put
the server, username, and password in an environment variable called XE_EXTRA_ARGS on the machine from which
you are entering the commands, tab completion is enabled. See Section 8.1, Basic xe syntax for details.
<class>-param-add
<class>-param-remove
<class>-param-clear
where <class> is one of:
bond
console
host
host-crashdump
host-cpu
network
patch
pbd
pif
pool
sm
sr
task
template
vbd
vdi
vif
vlan
vm
Note that not every value of <class> has the full set of <class>-param commands; some have just a subset.
user-version ( RW): 1
is-control-domain ( RO): false
The first parameter, user-version, is writable and has the value 1. The second, is-control-domain, is read-only and has a value of false.
The two other types of parameters are multi-valued. A set parameter contains a list of values. A map parameter is a set of
key/value pairs. As an example, look at the following excerpt of some sample output of the xe vm-param-list on a
specified VM:
platform (MRW): acpi: true; apic: true; pae: true; nx: false
allowed-operations (SRO): pause; clean_shutdown; clean_reboot; \
hard_shutdown; hard_reboot; suspend
The platform parameter has a list of items that represent key/value pairs. The key names are followed by a colon
character (:). Each key/value pair is separated from the next by a semicolon character (;). The M preceding the RW
indicates that this is a map parameter and is readable and writable. The allowed-operations parameter has a list
that makes up a set of items. The S preceding the RO indicates that this is a set parameter and is readable but not
writable.
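A script consuming xe output can split these map and set notations mechanically. A minimal sketch, using the example values from the excerpt above:

```python
# Sketch: parsing the map and set notations shown above. A map value is
# "key: value" pairs separated by semicolons; a set value is items
# separated by semicolons.

def parse_map(value):
    entries = (item.split(":", 1) for item in value.split(";") if item.strip())
    return {k.strip(): v.strip() for k, v in entries}

def parse_set(value):
    return [item.strip() for item in value.split(";") if item.strip()]

platform = parse_map("acpi: true; apic: true; pae: true; nx: false")
ops = parse_set("pause; clean_shutdown; clean_reboot; hard_shutdown")
print(platform["nx"], len(ops))  # false 4
```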
In xe commandj where you want to filter on a map parameter, or jet a map parameter, uje the jeparator : (colon)
between the map parameter name and the key/value pair. For example, to jet the value of the foo key of
the other-config parameter of a VM to baa, the command would be
xevmparamjetuuid=<VMuuid>otherconfig:foo=baa
Note
In previouj releajej the jeparator - (dajh) waj ujed in jpecifying map parameterj. Thij jyntax jtill workj but ij deprecated.
<clajj>paramlijt uuid=<uuid>
Lijtj all of the parameterj and their ajjociated valuej. Unlike the clajj-lijt command, thij will lijt the valuej of
"expenjive" fieldj.
xevmlijtparamj=namelabel,otherconfig
Alternatively, to lijt all of the parameterj, uje the jyntax:
xevmlijtparamj=all
Note that jome parameterj that are expenjive to calculate will not be jhown by the lijt command. Theje parameterj will
be jhown aj, for example:
allowedVBDdevicej(JRO):<expenjivefield>
xevmlijtHVMbootpolicy="BIOJorder"powerjtate=halted
will only lijt thoje VMj for which both the field power-jtate haj the value halted, and for which the field HVMboot-policy haj the value BIOJ order.
It is also possible to filter the list based on the value of keys in maps, or on the existence of values in a set. The syntax for
the first of these is map-name:key=value, and for the second is set-name:contains=value.
For scripting, a useful technique is passing --minimal on the command line, causing xe to print only the first field in a
comma-separated list. For example, the command xe vm-list --minimal on a XenServer host with three VMs
installed gives the three UUIDs of the VMs, for example:
a85d6717-7264-d00e-069b-3b1d19d56ad9,aaa3eec5-9499-bcf3-4c03-af10baea96b7,\
42c044de-df69-4b30-89d9-2c199564581d
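The comma-separated --minimal output is convenient to consume from a script. A minimal sketch, assuming the sample string below stands in for real xe vm-list --minimal output (a live script would capture the command's output instead):

```shell
#!/bin/sh
# Stand-in for: uuids=$(xe vm-list --minimal)
uuids="a85d6717-7264-d00e-069b-3b1d19d56ad9,aaa3eec5-9499-bcf3-4c03-af10baea96b7,42c044de-df69-4b30-89d9-2c199564581d"

# Split the comma-separated list into one UUID per line and act on each in turn.
echo "$uuids" | tr ',' '\n' | while read -r uuid; do
    echo "would operate on VM $uuid"
done
```

The same pattern works for any class whose list command accepts --minimal.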
Parameter Name    Description    Type
uuid                             read only
master                           read only
members
8.4.1.1. bond-create
8.4.1.2. bond-destroy
bond-destroy uuid=<bond_uuid>
Delete a bonded interface specified by its UUID from the XenServer host.
8.4.2. CD commands
Commands for working with physical CD/DVD drives on XenServer hosts.
CD parameters
CDs have the following parameters:
uuid (read only)
name-label (read/write)
name-description (read/write)
allowed-operations (read only set parameter)
current-operations (read only set parameter): a list of the operations that are currently in progress on this CD
sr-uuid (read only)
sr-name-label (read only)
vbd-uuids (read only set parameter)
virtual-size (read only)
physical-utilisation (read only)
type (read only)
sharable (read only)
read-only (read only)
storage-lock (read only)
parent (read only)
missing (read only)
other-config (read/write map parameter)
location (read only)
managed (read only)
xenstore-data (read only map parameter)
sm-config (read only map parameter)
is-a-snapshot (read only)
snapshot_of (read only)
snapshots (read only): the UUID(s) of any snapshots that have been taken of this CD
snapshot_time (read only)
8.4.2.1. cd-list
Console parameters:
uuid (read only)
vm-uuid (read only)
vm-name-label (read only)
protocol (read only)
location (read only)
other-config (read/write map parameter)
Class name    Description
pool
vm            A Virtual Machine
host          A physical host
network       A virtual network
vif
pif
sr            A storage repository
vdi
vbd
pbd
8.4.4.1. event-wait
Several of the commands listed here have a common mechanism for selecting one or more XenServer hosts on which to
perform the operation. The simplest is by supplying the argument host=<uuid_or_name_label>. XenServer hosts
can also be specified by filtering the full list of hosts on the values of fields. For example, specifying enabled=true will
select all XenServer hosts whose enabled field is equal to true. Where multiple XenServer hosts match, and
the operation can be performed on multiple XenServer hosts, the option --multiple must be specified to perform the
operation. The full list of parameters that can be matched is described at the beginning of this section, and can be
obtained by running the command xe host-list params=all. If no parameters to select XenServer hosts are
given, the operation is performed on all XenServer hosts.
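The selector mechanism is just a field filter over the host list. A hypothetical sketch of what enabled=true selects, using stubbed host records in place of real xe host-list output (host1 through host3 and the colon-separated format are sample data, not xe output syntax):

```shell
#!/bin/sh
# Stand-in records in the form <name>:<enabled>; a live script would
# query the pool with: xe host-list params=name-label,enabled
hosts="host1:true
host2:false
host3:true"

# Keep only hosts whose enabled field is true, mirroring the
# enabled=true selector described above.
echo "$hosts" | awk -F: '$2 == "true" { print $1 }'
```

With more than one match, an operation such as host-disable would additionally require --multiple on the real command line.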
Host parameters
XenServer hosts have the following parameters:
uuid (read only): the unique identifier/object reference for the XenServer host
name-label (read/write)
name-description (read only)
enabled (read only): false if disabled, which prevents any new VMs from starting on the host and prepares the XenServer host to be shut down or rebooted; true if the host is currently enabled
API-version-major (read only)
logging (read/write map parameter): logging configuration
suspend-image-sr-uuid (read/write)
crash-dump-sr-uuid (read/write): the unique identifier/object reference for the SR where crash dumps are stored
capabilities
other-config
hostname (read only)
address (read only)
supported-bootloaders
memory-total (read only): total amount of physical RAM on the XenServer host, in bytes
memory-free (read only)
host-metrics-live (read only)
allowed-operations: lists the operations allowed in this state. This list is advisory only and the server state may have changed by the time this field is read by a client.
current-operations
patches
blobs (read only)
memory-free-computed
ha-statefiles
ha-network-peers (read only): the UUIDs of all hosts that could host the VMs on this host in case of failure
external-auth-type (read only)
external-auth-service-name (read only)
external-auth-configuration (read only map parameter)
XenServer hosts contain some other objects that also have parameter lists.
CPUs on XenServer hosts have the following parameters:
uuid (read only)
number (read only): the number of the physical CPU core within the XenServer host
vendor (read only): the vendor string for the CPU name, for example, "GenuineIntel"
speed (read only)
modelname (read only): the vendor string for the CPU model, for example, "Intel(R) Xeon(TM) CPU 3.00GHz"
stepping (read only)
flags (read only): the flags of the physical CPU (a decoded version of the features field)
utilisation (read only)
host-uuid (read only)
model (read only)
family (read only)
Crash dumps on XenServer hosts have the following parameters:
uuid (read only)
host (read only)
timestamp (read only): timestamp of the date and time that the crashdump occurred, in the form yyyymmdd-hhmmss-ABC, where ABC is the timezone indicator, for example, GMT
size (read only)
8.4.5.1. host-backup
Caution
While the xe host-backup command will work if executed on the local host (that is, without a specific hostname
specified), do not use it this way. Doing so would fill up the control domain partition with the backup file. The command
should only be used from a remote off-host machine where you have space to hold the backup file.
8.4.5.2. host-bugreport-upload
8.4.5.3. host-crashdump-destroy
host-crashdump-destroy uuid=<crashdump_uuid>
Delete a host crashdump specified by its UUID from the XenServer host.
8.4.5.4. host-crashdump-upload
host-crashdump-upload uuid=<crashdump_uuid>
[url=<destination_url>]
[http-proxy=<http_proxy_name>]
Upload a crashdump to the Citrix Support ftp site or other location. If the optional parameters are not used, no proxy server is
identified and the destination will be the default Citrix Support ftp site. Optional parameters are http-proxy: use the
specified http proxy, and url: upload to this destination URL.
8.4.5.5. host-disable
host-disable [<host-selector>=<host_selector_value>...]
Disables the specified XenServer hosts, which prevents any new VMs from starting on them. This prepares the XenServer
hosts to be shut down or rebooted.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.
8.4.5.6. host-dmesg
host-dmesg [<host-selector>=<host_selector_value>...]
Get a Xen dmesg (the output of the kernel ring buffer) from the specified XenServer hosts.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.
8.4.5.7. host-emergency-management-reconfigure
host-emergency-management-reconfigure interface=<uuid_of_management_interface_pif>
Reconfigure the management interface of this XenServer host. Use this command only if the XenServer host is in
emergency mode, meaning that it is a member of a resource pool whose master has disappeared from the network and
could not be contacted after some number of retries.
8.4.5.8. host-enable
host-enable [<host-selector>=<host_selector_value>...]
Enables the specified XenServer hosts, which allows new VMs to be started on them.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.
8.4.5.9. host-evacuate
host-evacuate [<host-selector>=<host_selector_value>...]
Live migrates all running VMs to other suitable hosts in the pool. The host must first be disabled using the host-disable command.
If the evacuated host is the pool master, then another host must be selected to be the pool master. To change the pool
master with HA disabled, use the pool-designate-new-master command. See Section 8.4.12.1, pool-designate-new-master for details. With HA enabled, your only option is to shut down the server, which will cause HA to
elect a new master at random. See Section 8.4.5.22, host-shutdown.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.
8.4.5.10. host-forget
host-forget uuid=<XenServer_host_UUID>
The xapi agent forgets about the specified XenServer host without contacting it explicitly.
Use the --force parameter to avoid being prompted to confirm that you really want to perform this operation.
Warning
Do not use this command if HA is enabled on the pool. Disable HA first, then enable it again after you have forgotten the host.
Tip
This command is useful if the XenServer host to "forget" is dead; however, if the XenServer host is live and part of the
pool, you should use xe pool-eject instead.
8.4.5.11. host-get-system-status
host-get-system-status filename=<name_for_status_file>
[entries=<comma_separated_list>] [output=<tar.bz2 | zip>] [<host-selector>=<host_selector_value>...]
Download system status information into the specified file. The optional parameter entries is a comma-separated list of
system status entries, taken from the capabilities XML fragment returned by the host-get-system-status-capabilities command. See Section 8.4.5.12, host-get-system-status-capabilities for details. If not specified, all system
status information is saved in the file. The parameter output may be tar.bz2 (the default) or zip; if this parameter is not
specified, the file is saved in tar.bz2 form.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above).
8.4.5.12. host-get-system-status-capabilities
host-get-system-status-capabilities [<host-selector>=<host_selector_value>...]
Get system status capabilities for the specified host(s). The capabilities are returned as an XML fragment that looks
something like this:
<?xml version="1.0" ?>
<system-status-capabilities>
  <capability content-type="text/plain" default-checked="yes" key="xenserver-logs" \
    max-size="150425200" max-time="-1" min-size="150425200" min-time="-1" \
    pii="maybe"/>
  <capability content-type="text/plain" default-checked="yes" \
    key="xenserver-install" max-size="51200" max-time="-1" min-size="10240" \
    min-time="-1" pii="maybe"/>
  ...
</system-status-capabilities>
Each capability entity has a number of attributes.
Attribute    Description
key          A unique identifier for the capability.
content-type Can be either text/plain or application/data. Indicates whether a UI can render the entries for human consumption.
default-checked  Can be either yes or no. Indicates whether a UI should select this entry by default.
min-size, max-size  Indicates an approximate range for the size, in bytes, of this entry. -1 indicates that the size is unimportant.
min-time, max-time  Indicate an approximate range for the time, in seconds, taken to collect this entry. -1 indicates the time is unimportant.
pii          Personally identifiable information.
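From a script, the capability keys can be pulled out of that XML fragment and fed back in through the entries parameter of host-get-system-status. A minimal sketch using a sample fragment (a real script would capture the output of xe host-get-system-status-capabilities instead):

```shell
#!/bin/sh
# Sample capabilities fragment; in practice this comes from
# xe host-get-system-status-capabilities.
xml='<system-status-capabilities>
<capability content-type="text/plain" default-checked="yes" key="xenserver-logs" pii="maybe"/>
<capability content-type="text/plain" default-checked="yes" key="xenserver-install" pii="maybe"/>
</system-status-capabilities>'

# Extract the key attribute of each capability and join them with commas,
# ready for use as entries=<comma_separated_list>.
entries=$(printf '%s\n' "$xml" | grep -o 'key="[^"]*"' | sed 's/key="//;s/"$//' | paste -sd, -)
echo "$entries"
```

The resulting string can then be passed as entries= to host-get-system-status.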
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above).
8.4.5.13. host-is-in-emergency-mode
host-is-in-emergency-mode
Returns true if the host the CLI is talking to is currently in emergency mode, false otherwise. This CLI command
works directly on slave hosts even with no master host present.
8.4.5.14. host-license-add
8.4.5.15. host-license-view
host-license-view [host-uuid=<XenServer_host_UUID>]
Displays the contents of the XenServer host license.
8.4.5.16. host-logs-download
Caution
While the xe host-logs-download command will work if executed on the local host (that is, without a specific
hostname specified), do not use it this way. Doing so will clutter the control domain partition with the copy of the logs. The
command should only be used from a remote off-host machine where you have space to hold the copy of the logs.
8.4.5.17. host-management-disable
host-management-disable
Disables the host agent listening on an external management network interface and disconnects all connected API clients
(such as XenCenter). Operates directly on the XenServer host the CLI is connected to, and is not forwarded to the
pool master if applied to a member XenServer host.
Warning
Be extremely careful when using this CLI command off-host, since once it is run it will not be possible to connect to the
control domain remotely over the network to re-enable it.
8.4.5.18. host-management-reconfigure
Warning
Be careful when using this CLI command off-host and ensure you have network connectivity on the new interface (by
using xe pif-reconfigure to set one up first). Otherwise, subsequent CLI commands will not be able to reach the
XenServer host.
8.4.5.19. host-reboot
host-reboot [<host-selector>=<host_selector_value>...]
Reboot the specified XenServer hosts. The specified XenServer hosts must be disabled first using the xe host-disable command, otherwise a HOST_IN_USE error message is displayed.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.
If the specified XenServer hosts are members of a pool, the loss of connectivity on shutdown will be handled and the pool
will recover when the XenServer hosts return. If you shut down a pool member, other members and the master will
continue to function. If you shut down the master, the pool will be out of action until the master is rebooted and back on
line (at which point the members will reconnect and synchronize with the master) or until you make one of the members
into the master.
8.4.5.20. host-restore
8.4.5.22. host-shutdown
host-shutdown [<host-selector>=<host_selector_value>...]
Shut down the specified XenServer hosts. The specified XenServer hosts must be disabled first using the xe host-disable command, otherwise a HOST_IN_USE error message is displayed.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.
If the specified XenServer hosts are members of a pool, the loss of connectivity on shutdown will be handled and the pool
will recover when the XenServer hosts return. If you shut down a pool member, other members and the master will
continue to function. If you shut down the master, the pool will be out of action until the master is rebooted and back on
line, at which point the members will reconnect and synchronize with the master, or until one of the members is made
into the master. If HA is enabled for the pool, one of the members will be made into a master automatically. If HA is
disabled, you must manually designate the desired server as master with the pool-designate-new-master command. See Section 8.4.12.1, pool-designate-new-master.
8.4.5.23. host-syslog-reconfigure
host-syslog-reconfigure [<host-selector>=<host_selector_value>...]
Reconfigure the syslog daemon on the specified XenServer hosts. This command applies the configuration
information defined in the host logging parameter.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.
8.4.6.1. log-get-keys
log-get-keys
List the keys of all of the logging subsystems.
8.4.6.2. log-reopen
log-reopen
Reopen all loggers. Use this command for rotating log files.
8.4.6.3. log-set-output
Commands for working with messages. Messages are created to notify users of significant events, and are displayed in
XenCenter as system alerts.
Message parameters
uuid (read only)
name (read only)
priority (read only)
class (read only)
obj-uuid (read only)
timestamp (read only)
body (read only)
8.4.7.1. message-create
message-list
Lists all messages, or messages that match the specified standard selectable parameters.
Network parameters
uuid (read only)
name-label (read/write)
name-description (read/write)
VIF-uuids (read only set parameter)
PIF-uuids (read only set parameter): a list of unique identifiers of the PIFs (physical network interfaces) that are attached from XenServer hosts to this network
bridge (read only): name of the bridge corresponding to this network on the local XenServer host
blobs (read only)
8.4.8.1. network-create
8.4.8.2. network-destroy
network-destroy uuid=<network_uuid>
Destroys an existing network.
Patch parameters
uuid (read only)
host-uuid (read only)
name-label (read only)
name-description (read only)
applied (read only)
size (read only)
8.4.9.1. patch-apply
patch-apply uuid=<patch_file_uuid>
Apply the specified patch file.
8.4.9.2. patch-clean
patch-clean uuid=<patch_file_uuid>
Delete the specified patch file from the XenServer host.
8.4.9.3. patch-pool-apply
patch-pool-apply uuid=<patch_uuid>
Apply the specified patch to all XenServer hosts in the pool.
8.4.9.4. patch-precheck
8.4.9.5. patch-upload
patch-upload file-name=<patch_filename>
Upload a specified patch file to the XenServer host. This prepares a patch to be applied. On success, the UUID of the
uploaded patch is printed out. If the patch has previously been uploaded, a PATCH_ALREADY_EXISTS error is
returned instead and the patch is not uploaded again.
PBD parameters
uuid (read only)
sr-uuid (read only)
device-config (read only map parameter)
currently-attached (read only)
host-uuid (read only)
host (read only)
other-config (read/write map parameter)
8.4.10.1. pbd-create
pbd-create host-uuid=<uuid_of_host>
sr-uuid=<uuid_of_sr>
[device-config:key=<corresponding_value>...]
Create a new PBD on a XenServer host. The read-only device-config parameter can only be set on creation.
To add a mapping of 'path' -> '/tmp', the command line should contain the argument device-config:path=/tmp
For a full list of supported device-config key/value pairs on each SR type see Chapter 3, Storage.
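The device-config:key=value arguments are ordinary positional arguments, so a script can assemble them before the call. A sketch with placeholder UUIDs and the 'path' key from the example above (real keys depend on the SR type, per Chapter 3, Storage); here the final command is printed rather than executed:

```shell
#!/bin/sh
# Placeholder identifiers; a live script would look these up first,
# for example with: xe host-list --minimal / xe sr-list --minimal
host_uuid="<uuid_of_host>"
sr_uuid="<uuid_of_sr>"

# Assemble the pbd-create argument list, including device-config pairs.
set -- pbd-create "host-uuid=$host_uuid" "sr-uuid=$sr_uuid" \
       "device-config:path=/tmp"

# On a live system this would be: xe "$@"
# Here we just print the command that would run.
echo "xe $*"
```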
8.4.10.2. pbd-destroy
pbd-destroy uuid=<uuid_of_pbd>
Destroy the specified PBD.
8.4.10.3. pbd-plug
pbd-plug uuid=<uuid_of_pbd>
Attempts to plug in the PBD to the XenServer host. If this succeeds, the referenced SR (and the VDIs contained within)
should then become visible to the XenServer host.
8.4.10.4. pbd-unplug
pbd-unplug uuid=<uuid_of_pbd>
Attempt to unplug the PBD from the XenServer host.
PIF parameters
uuid (read only)
device (read only)
MAC (read only)
other-config (read/write map parameter)
physical (read only)
MTU (read only)
VLAN (read only): VLAN tag for all traffic passing through this interface; -1 indicates no VLAN tag is assigned
bond-master-of (read only): the UUID of the bond this PIF is the master of (if any)
bond-slave-of (read only): the UUID of the bond this PIF is the slave of (if any)
management (read only)
network-uuid (read only)
network-name-label (read only)
host-uuid (read only)
host-name-label (read only): the name of the XenServer host to which this PIF is connected
IP (read only)
netmask (read only)
gateway (read only)
io_read_kbs (read only)
io_write_kbs (read only)
carrier (read only)
vendor-id (read only)
vendor-name (read only)
device-id (read only)
device-name (read only)
speed (read only)
duplex (read only)
pci-bus-path (read only)
other-config:ethtool-speed (read/write)
other-config:ethtool-autoneg (read/write)
other-config:ethtool-duplex (read/write)
other-config:ethtool-rx (read/write)
other-config:ethtool-tx (read/write)
other-config:ethtool-sg (read/write)
other-config:bond-downdelay (read/write): number of milliseconds to wait after the link is lost before really considering the link to have gone. This allows for transient link lossage.
other-config:bond-updelay (read/write): number of milliseconds to wait after the link comes up before really considering it up. Allows for links flapping up. Default is 31s to allow time for switches to begin forwarding traffic.
disallow-unplug (read/write)
Note
Changes made to the other-config fields of a PIF will only take effect after a reboot. Alternately, use the xe pif-unplug and xe pif-plug commands to cause the PIF configuration to be rewritten.
8.4.11.1. pif-forget
pif-forget uuid=<uuid_of_pif>
Destroy the specified PIF object on a particular host.
8.4.11.2. pif-introduce
8.4.11.3. pif-plug
pif-plug uuid=<uuid_of_pif>
Attempt to bring up the specified physical interface.
8.4.11.4. pif-reconfigure-ip
8.4.11.5. pif-unplug
pif-unplug uuid=<uuid_of_pif>
Attempt to bring down the specified physical interface.
Pool parameters
uuid (read only)
name-label (read/write)
name-description (read/write)
master (read only)
default-SR (read/write)
crash-dump-SR (read/write)
ha-enabled (read only)
ha-configuration (read only)
ha-statefiles (read only): lists the UUIDs of the VDIs being used by HA to determine storage health
ha-plan-exists-for (read only)
ha-allow-overcommit (read/write): True if the pool is allowed to be overcommitted, False otherwise
blobs (read only)
wlb-url (read only)
wlb-username (read only)
wlb-enabled (read/write)
wlb-verify-cert (read/write)
8.4.12.1. pool-designate-new-master
8.4.12.2. pool-dump-database
pool-dump-database file-name=<filename_to_dump_database_into_(on_client)>
Download a copy of the entire pool database and dump it into a file on the client.
8.4.12.3. pool-eject
8.4.12.5. pool-emergency-transition-to-master
pool-emergency-transition-to-master
Instruct a member XenServer host to become the pool master. This command is only accepted by the XenServer host if it
has transitioned to emergency mode, meaning it is a member of a pool whose master has disappeared from the network
and could not be contacted after some number of retries.
Note that this command may cause the password of the host to reset if it has been modified since joining the pool
(see Section 8.4.18, User commands).
8.4.12.6. pool-ha-enable
pool-ha-enable heartbeat-sr-uuids=<SR_UUID_of_the_Heartbeat_SR>
Enable High Availability on the resource pool, using the specified SR UUID as the central storage heartbeat repository.
8.4.12.7. pool-ha-disable
pool-ha-disable
Disables the High Availability functionality on the resource pool.
8.4.12.8. pool-join
8.4.12.9. pool-recover-slaves
pool-recover-slaves
Instruct the pool master to try and reset the master address of all members currently running in emergency mode. This is
typically used after pool-emergency-transition-to-master has been used to set one of the members as the new
master.
8.4.12.10. pool-restore-database
8.4.12.11. pool-sync-database
pool-sync-database
Force the pool database to be synchronized across all hosts in the resource pool. This is not necessary in normal operation
since the database is regularly automatically replicated, but can be useful for ensuring changes are rapidly replicated
after performing a significant set of CLI operations.
8.4.13. SM commands
Commands for controlling SM (Storage Manager) plugins.
SM parameters:
uuid (read only)
name-label (read only)
name-description (read only)
type (read only)
vendor (read only)
copyright (read only)
required-api-version (read only)
configuration (read only)
capabilities (read only)
driver-filename (read only)
8.4.14. SR commands
Commands for controlling SRs (storage repositories).
The SR objects can be listed with the standard object listing command (xe sr-list), and the parameters manipulated
with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
SR parameters
SRs have the following parameters:
uuid (read only)
name-label (read/write)
name-description (read/write)
allowed-operations (read only set parameter)
current-operations (read only set parameter)
VDIs (read only set parameter)
PBDs (read only set parameter)
physical-utilisation (read only): physical space currently utilised on this SR, in bytes. Note that for sparse disk formats, physical utilisation may be less than virtual allocation
type (read only)
content-type (read only): the type of the SR's content. Used to distinguish ISO libraries from other SRs. For storage repositories that store a library of ISOs, the content-type must be set to iso. In other cases, Citrix recommends that this be set either to empty, or the string user.
shared (read/write)
other-config (read/write map parameter)
host (read only)
virtual-allocation (read only)
sm-config (read only map parameter): SM dependent data
blobs (read only)
8.4.14.1. sr-create
8.4.14.2. sr-destroy
sr-destroy uuid=<sr_uuid>
Destroys the specified SR on the XenServer host.
8.4.14.3. sr-forget
sr-forget uuid=<sr_uuid>
The xapi agent forgets about a specified SR on the XenServer host, meaning that the SR is detached and you cannot
access VDIs on it, but it remains intact on the source media (the data is not lost).
8.4.14.4. sr-introduce
sr-introduce name-label=<name>
physical-size=<physical_size>
type=<type>
content-type=<content_type>
uuid=<sr_uuid>
Just places an SR record into the database. The device-config parameters are specified by device-config:<parameter_key>=<parameter_value>, for example:
xe sr-introduce device-config:<device>=</dev/sdb1>
Note
This command is never used in normal operation. It is an advanced operation which might be useful if an SR needs to be
reconfigured as shared after it was created, or to help recover from various failure scenarios.
8.4.14.5. sr-probe
8.4.14.6. sr-scan
sr-scan uuid=<sr_uuid>
Force an SR scan, syncing the xapi database with VDIs present in the underlying storage substrate.
The task objects can be listed with the standard object listing command (xe task-list), and the parameters
manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
Task parameters
Tasks have the following parameters:
uuid (read only)
name-label (read only)
name-description (read only)
resident-on (read only): the unique identifier/object reference of the host on which the task is running
status (read only)
progress (read only): if the Task is still pending, this field contains the estimated percentage complete, from 0. to 1. If the Task has completed, successfully or unsuccessfully, this should be 1.
type (read only): if the Task has successfully completed, this parameter contains the type of the encoded result, that is, the name of the class whose reference is in the result field; otherwise, this parameter's value is undefined
result (read only): if the Task has completed successfully, this field contains the result value, either Void or an object reference; otherwise, this parameter's value is undefined
error_info (read only): if the Task has failed, this parameter contains the set of associated error strings; otherwise, this parameter's value is undefined
created (read only)
finished (read only)
subtask_of (read only)
subtasks (read only)
8.4.15.1. task-cancel
task-cancel [uuid=<task_uuid>]
Direct the specified Task to cancel and return.
Template parameters
Templates have the following parameters:
uuid (read only)
name-label (read/write)
name-description (read/write): the description string of the template
user-version (read/write): string for creators of VMs and templates to put version information
is-a-template (read/write): true if this is a template. Template VMs can never be started; they are used only for cloning other VMs
power-state (read only): current power state; always halted for a template
memory-dynamic-max (read/write): dynamic maximum memory in bytes. Currently unused, but if changed the following constraint must be obeyed: memory_static_max >= memory_dynamic_max >= memory_dynamic_min >= memory_static_min.
memory-dynamic-min (read/write): dynamic minimum memory in bytes. Currently unused, but if changed the same constraints for memory-dynamic-max must be obeyed.
memory-static-max (read/write): statically-set (absolute) maximum memory in bytes. This is the main value used to determine the amount of memory assigned to a VM.
memory-static-min (read/write): statically-set (absolute) minimum memory in bytes. This represents the absolute minimum memory, and memory-static-min must be less than memory-static-max. This value is currently unused in normal operation, but the previous constraint must be obeyed.
suspend-VDI-uuid (read only): the VDI that a suspend image is stored on (has no meaning for a template)
VCPUs-params (read/write map parameter): configuration parameters for the selected VCPU policy.
You can tune a VCPU's pinning with
xe vm-param-set uuid=<vm_uuid> VCPUs-params:mask=1,2,3
A VM created from this template will then run on physical CPUs 1, 2, and 3 only.
You can also tune the VCPU priority (xen scheduling) with the cap and weight parameters; for example
xe vm-param-set uuid=<vm_uuid> VCPUs-params:weight=512
xe vm-param-set uuid=<vm_uuid> VCPUs-params:cap=100
A VM based on this template with a weight of 512 will get twice as much CPU as a domain with a weight of 256 on a contended XenServer host. Legal weights range from 1 to 65535 and the default is 256.
The cap optionally fixes the maximum amount of CPU a VM based on this template will be able to consume, even if the XenServer host has idle CPU cycles. The cap is expressed in percentage of one physical CPU: 100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs, etc. The default, 0, means there is no upper cap.
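The two controls compose: weight sets a relative share under contention, while cap sets an absolute ceiling. A small sketch of the relative-share arithmetic, using the 512-versus-256 example from the text:

```shell
#!/bin/sh
# Relative CPU share under contention is proportional to weight.
weight_a=512
weight_b=256

# VM A gets weight_a/weight_b times the CPU of VM B when both are
# runnable on a contended host.
ratio=$((weight_a / weight_b))
echo "weight $weight_a gets ${ratio}x the CPU of weight $weight_b"

# A cap of 100 means at most one full physical CPU, regardless of weight.
cap=100
echo "cap=$cap limits the VM to $((cap / 100)) physical CPU(s)"
```

With the cap at its default of 0, only the weight ratio matters.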
VCPUs-max (read/write): maximum number of VCPUs
VCPUs-at-startup (read/write): boot number of VCPUs
actions-after-crash (read/write): action to take if a VM based on this template crashes
console-uuids (read only set parameter): virtual console devices
platform (read/write map parameter): platform-specific configuration
allowed-operations (read only set parameter)
current-operations (read only set parameter)
allowed-VBD-devices (read only set parameter)
PV-legacy-args (read/write)
PV-bootloader (read/write)
PV-bootloader-args (read/write)
last-boot-CPU-flags (read only): describes the CPU flags on which a VM based on this template was last booted; not populated for a template
resident-on (read only): the XenServer host on which a VM based on this template is currently resident; appears as <not in database> for a template
affinity (read/write): a XenServer host which a VM based on this template has preference for running on; used by the xe vm-start command to decide where to run the VM
other-config (read/write map parameter): list of key/value pairs that specify additional configuration parameters for the template
Parameter (Type): Description

start-time (read only): timestamp of the date and time that the metrics for a VM based on this template were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT); set to 1 Jan 1970 Z (beginning of Unix/POSIX epoch) for a template
install-time (read only): timestamp of the date and time that the metrics for a VM based on this template were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT); set to 1 Jan 1970 Z (beginning of Unix/POSIX epoch) for a template
memory-actual (read only): the actual memory being used by a VM based on this template; 0 for a template
VCPUs-number (read only): the number of virtual CPUs assigned to a VM based on this template; 0 for a template
VCPUs-utilisation (read only map parameter): list of virtual CPUs and their weight
os-version (read only map parameter): the version of the operating system for a VM based on this template; appears as <not in database> for a template
PV-drivers-version (read only map parameter): the versions of the paravirtualized drivers for a VM based on this template; appears as <not in database> for a template
PV-drivers-up-to-date (read only): flag for latest version of the paravirtualized drivers for a VM based on this template; appears as <not in database> for a template
memory (read only map parameter): memory metrics reported by the agent on a VM based on this template; appears as <not in database> for a template
disks (read only map parameter): disk metrics reported by the agent on a VM based on this template; appears as <not in database> for a template
networks (read only map parameter): network metrics reported by the agent on a VM based on this template; appears as <not in database> for a template
other (read only map parameter): other metrics reported by the agent on a VM based on this template; appears as <not in database> for a template
guest-metrics-last-updated (read only): timestamp when the last write to these fields was performed by the in-guest agent, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT)
actions-after-shutdown (read/write): action to take after the VM has shut down
actions-after-reboot (read/write): action to take after the VM has rebooted
possible-hosts (read only): list of hosts that could potentially host the VM
HVM-shadow-multiplier (read/write): multiplier applied to the amount of shadow that will be made available to the guest
dom-id (read only): domain ID (if available, -1 otherwise)
recommendations (read only): XML specification of recommended values and ranges for properties of this VM
xenstore-data (read/write map parameter): data to be inserted into the xenstore tree (/local/domain/<domid>/vm-data) after the VM is created
is-a-snapshot (read only): True if this template is a VM snapshot
snapshot_of (read only): the UUID of the VM that this template is a snapshot of
snapshots (read only): the UUID(s) of any snapshots that have been taken of this template
snapshot_time (read only): the timestamp of the most recent VM snapshot taken
memory-target (read only): the target amount of memory set for this template
blocked-operations (read/write map parameter): lists the operations that cannot be performed on this template
last-boot-record (read only): record of the last boot parameters for this template, in XML format
ha-always-run (read/write)
ha-restart-priority (read/write)
blobs (read only)
live (read only)
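The yyyymmddThh:mm:ss z timestamp form used by the metrics fields above can be reproduced with standard tools; this sketch uses date(1) and is illustrative only, not part of the xe CLI.

```shell
# Print the current time in the guide's metrics timestamp form,
# yyyymmddThh:mm:ss z, using Z (the military designator for UTC).
date -u '+%Y%m%dT%H:%M:%S Z'
```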
8.4.17.1. update-upload
update-upload file-name=<name_of_upload_file>
Streams a new software image to an OEM edition XenServer host. You must then restart the host for this to take effect.
A VBD is a software object that connects a VM to the VDI, which represents the contents of the virtual disk. The VBD has the attributes which tie the VDI to the VM (is it bootable, its read/write metrics, and so on), while the VDI has the information on the physical attributes of the virtual disk (which type of SR, whether the disk is shareable, whether the media is read/write or read only, and so on).
The VBD objects can be listed with the standard object listing command (xe vbd-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
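The listing and parameter commands mentioned above can be sketched as follows. The xe() stub and the placeholder UUID are assumptions for illustration, so the snippet can be read (and run) without a XenServer host; on a real host, remove the stub.

```shell
# Sketch of the standard listing and param commands applied to VBDs.
# The xe() stub only echoes each command instead of executing it.
xe() { echo "xe $*"; }

VBD_UUID="0e3a"   # placeholder UUID for illustration

xe vbd-list                                        # list all VBD objects
xe vbd-param-get uuid="$VBD_UUID" param-name=mode  # read a single parameter
xe vbd-param-list uuid="$VBD_UUID"                 # dump all parameters
```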
VBD parameters
VBDs have the following parameters:
Parameter (Type): Description

uuid (read only)
vm-uuid (read only)
vm-name-label (read only)
vdi-uuid (read only): the unique identifier/object reference for the VDI this VBD is mapped to
vdi-name-label (read only)
empty (read only)
device (read only)
userdevice (read/write)
bootable (read/write)
mode (read/write)
type (read/write): how the VBD appears to the VM, for example disk or CD
currently-attached (read only): True if the device is currently attached
storage-lock (read only): True if a storage-level lock was acquired
status-code (read only)
status-detail (read only)
qos_algorithm_type (read/write)
qos_algorithm_params (read/write map parameter)
io_read_kbs (read only)
io_write_kbs (read only)
allowed-operations (read only set parameter)
current-operations (read only set parameter)
unpluggable (read/write)
attachable (read only): True if the device can be attached
other-config (read/write map parameter): additional configuration
8.4.19.2. vbd-destroy
vbd-destroy uuid=<uuid_of_vbd>
Destroy the specified VBD.
If the VBD has its other-config:owner parameter set to true, the associated VDI will also be destroyed.
8.4.19.3. vbd-eject
vbd-eject uuid=<uuid_of_vbd>
Remove the media from the drive represented by a VBD. This command only works if the media is of a removable type (a physical CD or an ISO); otherwise an error message VBD_NOT_REMOVABLE_MEDIA is returned.
8.4.19.5. vbd-plug
vbd-plug uuid=<uuid_of_vbd>
Attempt to attach the VBD while the VM is in the running state.
8.4.19.6. vbd-unplug
vbd-unplug uuid=<uuid_of_vbd>
Attempts to detach the VBD from the VM while it is in the running state.
A VDI is a software object that represents the contents of the virtual disk seen by a VM, as opposed to the VBD, which is a connector object that ties a VM to the VDI. The VDI has the information on the physical attributes of the virtual disk (which type of SR, whether the disk is shareable, whether the media is read/write or read only, and so on), while the VBD has the attributes which tie the VDI to the VM (is it bootable, its read/write metrics, and so on).
The VDI objects can be listed with the standard object listing command (xe vdi-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
VDI parameters
VDIs have the following parameters:
Parameter (Type): Description

uuid (read only)
name-label (read/write)
name-description (read/write)
allowed-operations (read only set parameter)
current-operations (read only set parameter)
sr-uuid (read only)
vbd-uuids (read only set parameter)
crashdump-uuids (read only set parameter)
virtual-size (read only): size of disk as presented to the VM, in bytes. Note that, depending on the storage backend type, the size may not be respected exactly
physical-utilisation (read only): physical space currently taken up on the SR, in bytes
type (read only)
sharable (read only)
read-only (read only)
storage-lock (read only)
parent (read only)
missing (read only)
other-config (read/write map parameter)
location (read only): location information
managed (read only)
xenstore-data (read only map parameter)
sm-config (read only map parameter): SM dependent data
is-a-snapshot (read only): True if this VDI is a snapshot
snapshot_of (read only)
snapshots (read only)
snapshot_time (read only): the timestamp of the snapshot operation that created this VDI
8.4.20.3. vdi-create
vdi-create sr-uuid=<uuid_of_the_sr_where_you_want_to_create_the_vdi>
name-label=<name_for_the_vdi>
type=<system | user | suspend | crashdump>
virtual-size=<size_of_virtual_disk>
sm-config-*=<storage_specific_configuration_data>
Create a VDI.
The virtual-size parameter can be specified in bytes or using the IEC standard suffixes KiB (2^10 bytes), MiB (2^20 bytes), GiB (2^30 bytes), and TiB (2^40 bytes).
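How those IEC suffixes map to byte counts can be sketched with a small helper; this is illustrative shell arithmetic only, not how xe itself parses the value.

```shell
# Convert a virtual-size argument with an IEC suffix to a byte count.
to_bytes() {
    num=${1%[KMGT]iB}                 # strip any trailing KiB/MiB/GiB/TiB
    case ${1#"$num"} in
        KiB) echo $(( num * 1024 )) ;;
        MiB) echo $(( num * 1024 * 1024 )) ;;
        GiB) echo $(( num * 1024 * 1024 * 1024 )) ;;
        TiB) echo $(( num * 1024 * 1024 * 1024 * 1024 )) ;;
        *)   echo "$num" ;;           # no suffix: already a byte count
    esac
}

to_bytes 512MiB   # 536870912
to_bytes 8GiB     # 8589934592
```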
Note
SR types that support sparse allocation of disks (such as Local VHD and NFS) do not enforce virtual allocation of disks. Users should therefore take great care when over-allocating virtual disk space on an SR. If an over-allocated SR does become full, disk space must be made available either on the SR target substrate or by deleting unused VDIs in the SR.
Note
Some SR types might round up the virtual-size value to make it divisible by a configured block size.
8.4.20.4. vdi-destroy
vdi-destroy uuid=<uuid_of_vdi>
Destroy the specified VDI.
Note
In the case of Local VHD and NFS SR types, disk space is not immediately released on vdi-destroy, but periodically during a storage repository scan operation. Users that need to force deleted disk space to be made available should call sr-scan manually.
8.4.20.5. vdi-forget
vdi-forget uuid=<uuid_of_vdi>
Unconditionally removes a VDI record from the database without touching the storage backend. In normal operation, you should be using vdi-destroy instead.
8.4.20.7. vdi-introduce
vdi-introduce uuid=<uuid_of_vdi>
sr-uuid=<uuid_of_sr_to_import_into>
name-label=<name_of_the_new_vdi>
type=<system | user | suspend | crashdump>
location=<device_location_(varies_by_storage_type)>
[name-description=<description_of_vdi>]
[sharable=<yes | no>]
[read-only=<yes | no>]
[other-config=<map_to_store_misc_user_specific_data>]
[xenstore-data=<map_of_additional_xenstore_keys>]
[sm-config=<storage_specific_configuration_data>]
Create a VDI object representing an existing storage device, without actually modifying or creating any storage. This command is primarily used internally to automatically introduce hot-plugged storage devices.
8.4.20.8. vdi-resize
VIF parameters
VIFs have the following parameters:

Parameter (Type): Description

uuid (read only)
vm-uuid (read only)
vm-name-label (read only)
allowed-operations (read only set parameter)
current-operations (read only set parameter)
device (read only)
MAC (read only)
MTU (read only)
currently-attached (read only)
qos_algorithm_type (read/write)
qos_algorithm_params (read/write map parameter)
other-config (read/write map parameter)
other-config:ethtool-rx (read/write)
other-config:ethtool-tx (read/write)
other-config:ethtool-sg (read/write)
other-config:ethtool-tso (read/write)
network-uuid (read only): the virtual network to which this VIF is connected
network-name-label (read only): the virtual network to which this VIF is connected
io_read_kbs (read only): average read rate in kB/s for this VIF
io_write_kbs (read only): average write rate in kB/s for this VIF
8.4.21.2. vif-destroy
vif-destroy uuid=<uuid_of_vif>
Destroy a VIF.
8.4.21.3. vif-plug
vif-plug uuid=<uuid_of_vif>
Attempt to attach the VIF while the VM is in the running state.
8.4.21.4. vif-unplug
vif-unplug uuid=<uuid_of_vif>
Attempts to detach the VIF from the VM while it is running.
vlan-destroy uuid=<uuid_of_pif_mapped_to_vlan>
Destroy a VLAN. Requires the UUID of the PIF that represents the VLAN.
8.4.23. VM commands
Commands for controlling VMs and their attributes.
VM selectors
Several of the commands listed here have a common mechanism for selecting one or more VMs on which to perform the operation. The simplest way is by supplying the argument vm=<name_or_uuid>. VMs can also be specified by filtering the full list of VMs on the values of fields. For example, specifying power-state=halted will select all VMs whose power-state parameter is equal to halted. Where multiple VMs are matching, the option --multiple must be specified to perform the operation. The full list of parameters that can be matched is described at the beginning of this section, and can be obtained by the command xe vm-list params=all. If no parameters to select VMs are given, the operation will be performed on all VMs.
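The filter semantics can be illustrated with a canned listing. The VM names and UUIDs below are fabricated, and the grep merely stands in for the server-side matching that the xe selectors perform.

```shell
# Toy illustration of selector filtering on power-state=halted.
sample_vm_list() {
cat <<'EOF'
uuid=aaa name-label=web1 power-state=running
uuid=bbb name-label=db1 power-state=halted
uuid=ccc name-label=web2 power-state=halted
EOF
}

sample_vm_list | grep 'power-state=halted'   # keeps only the halted VMs
```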
The VM objects can be listed with the standard object listing command (xe vm-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
VM parameters
VMs have the following parameters:
Note
All writeable VM parameter values can be changed while the VM is running, but the new values are not applied dynamically and will not take effect until the VM is rebooted.
Parameter (Type): Description

uuid (read only): the unique identifier/object reference for the VM
name-label (read/write): the name of the VM
name-description (read/write)
user-version (read/write)
is-a-template (read/write): False unless this VM is a template
VCPUs-params (read/write map parameter): configuration parameters for the selected VCPU policy (for example, VCPUs-params:cap=100)
HVM-shadow-multiplier (read/write): floating point value which controls the amount of shadow memory overhead to grant the VM. Defaults to 1.0 (the minimum value), and should only be changed by advanced users
PV-kernel (read/write): path to the kernel
PV-ramdisk (read/write): path to the initrd
PV-args (read/write): string of kernel command line arguments
PV-legacy-args (read/write): string of arguments to make legacy VMs boot
PV-bootloader (read/write): name of or path to bootloader
PV-bootloader-args (read/write): string of miscellaneous arguments for the bootloader
last-boot-CPU-flags (read only)
resident-on (read only)
affinity (read/write)
other-config (read/write map parameter)
start-time (read only): timestamp of the date and time that the metrics for the VM were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT)
install-time (read only): timestamp of the date and time that the metrics for the VM were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT)
memory-actual (read only): the actual memory being used by a VM
VCPUs-number (read only): the number of virtual CPUs assigned to the VM. For a paravirtualized Linux VM, this number can differ from VCPUs-max and can be changed without rebooting the VM using the vm-vcpu-hotplug command. See Section 8.4.23.30, vm-vcpu-hotplug. Windows VMs always run with the number of vCPUs set to VCPUs-max and must be rebooted to change this value
os-version (read only map parameter)
PV-drivers-up-to-date (read only): flag for latest version of the paravirtualized drivers for the VM
memory (read only map parameter): memory metrics reported by the agent on the VM
disks (read only map parameter)
networks (read only map parameter)
other (read only map parameter)
guest-metrics-last-updated (read only): timestamp when the last write to these fields was performed by the in-guest agent, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT)
actions-after-shutdown (read/write): action to take after the VM has shut down
actions-after-reboot (read/write): action to take after the VM has rebooted
possible-hosts (read only): potential hosts of this VM
dom-id (read only): domain ID (if available, -1 otherwise)
recommendations (read only): XML specification of recommended values and ranges for properties of this VM
xenstore-data (read/write map parameter)
is-a-snapshot (read only): True if this VM is a snapshot
snapshot_of (read only): the UUID of the VM this is a snapshot of
snapshots (read only): the UUID(s) of all snapshots of this VM
snapshot_time (read only): the timestamp of the snapshot operation that created this VM snapshot
memory-target (read only): the target amount of memory set for this VM
blocked-operations (read/write map parameter): lists the operations that cannot be performed on this VM
last-boot-record (read only): record of the last boot parameters for this VM, in XML format
ha-always-run (read/write)
ha-restart-priority (read/write)
blobs (read only)
live (read only)
8.4.23.2. vm-cd-eject
vm-cd-eject [<vm-selector>=<vm_selector_value>...]
Eject a CD from the virtual CD drive. This command will only work if there is one and only one CD attached to the VM. When there are two or more CDs, please use the command xe vbd-eject and specify the UUID of the VBD.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.6. vm-clone
vm-clone new-name-label=<name_for_clone>
[new-name-description=<description_for_clone>] [<vm-selector>=<vm_selector_value>...]
Clone an existing VM, using storage-level fast disk clone operation where available. Specify the name and the optional description for the resulting cloned VM using the new-name-label and new-name-description arguments.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.7. vm-compute-maximum-memory
vm-compute-maximum-memory total=<amount_of_available_physical_ram_in_bytes>
[approximate=<add overhead memory for additional vCPUs? true | false>]
[<vm_selector>=<vm_selector_value>...]
Calculate the maximum amount of static memory which can be allocated to an existing VM, using the total amount of physical RAM as an upper bound. The optional parameter approximate reserves sufficient extra memory in the calculation to account for adding extra vCPUs into the VM at a later date.
For example:
xe vm-compute-maximum-memory vm=testvm total=`xe host-list params=memory-free --minimal`
uses the value of the memory-free parameter returned by the xe host-list command to set the maximum memory of the VM named testvm.
The VM or VMs on which this operation will be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.8. vm-copy
8.4.23.14. vm-destroy
vm-destroy uuid=<uuid_of_vm>
Destroy the specified VM. This leaves the storage associated with the VM intact. To delete storage as well, use xe vm-uninstall.
8.4.23.18. vm-export
vm-export filename=<export_filename>
[metadata=<true | false>]
[<vm-selector>=<vm_selector_value>...]
Export the specified VMs (including disk images) to a file on the local machine. Specify the filename to export the VM into using the filename parameter. By convention, the filename should have a .xva extension.
If the metadata parameter is true, then the disks are not exported, and only the VM metadata is written to the output file. This is intended to be used when the underlying storage is transferred through other mechanisms, and permits the VM information to be recreated (see Section 8.4.23.19, vm-import).
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.19. vm-import
vm-import filename=<export_filename>
[metadata=<true | false>]
[preserve=<true | false>]
[sr-uuid=<destination_sr_uuid>]
Import a VM from a previously-exported file. If preserve is set to true, the MAC address of the original VM will be preserved. The sr-uuid determines the destination SR to import the VM into; if not specified, the default SR is used.
The filename parameter can also point to an XVA-format VM, which is the legacy export format from XenServer 3.2 and is used by some third-party vendors to provide virtual appliances. This format uses a directory to store the VM data, so set filename to the root directory of the XVA export and not an actual file. Subsequent exports of the imported legacy guest will automatically be upgraded to the new filename-based format, which stores much more data about the configuration of the VM.
Note
The older directory-based XVA format does not fully preserve all the VM attributes. In particular, imported VMs will not have any virtual network interfaces attached by default. If networking is required, create one using vif-create and vif-plug.
If the metadata parameter is true, then a previously exported set of metadata can be imported without their associated disk blocks. Metadata-only import will fail if any VDIs cannot be found (named by SR and VDI.location) unless the --force option is specified, in which case the import will proceed regardless. If disks can be mirrored or moved out-of-band then metadata import/export represents a fast way of moving VMs between disjoint pools (e.g. as part of a disaster recovery plan).
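The metadata-only move described above can be sketched as follows. The xe() stub merely echoes each command so the sequence can be read without two real pools, and the VM name and filename are placeholders.

```shell
# Hedged sketch of a metadata-only export/import between pools.
xe() { echo "xe $*"; }   # stub: print commands instead of running them

xe vm-export vm=appvm filename=appvm-meta.xva metadata=true  # metadata only, no disks
# ...mirror or move the disk images out-of-band here...
xe vm-import filename=appvm-meta.xva metadata=true --force   # recreate the VM records
```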
Note
Multiple VM imports will be performed faster in serial than in parallel.
8.4.23.20. vm-install
vm-install new-name-label=<name>
[ template-uuid=<uuid_of_desired_template> | template=<uuid_or_name_of_desired_template> ]
[ sr-uuid=<sr_uuid> | sr-name-label=<name_of_sr> ]
Install a VM from a template. Specify the template name using either the template-uuid or template argument. Specify an SR other than the default SR using either the sr-uuid or sr-name-label argument.
8.4.23.21. vm-memory-shadow-multiplier-set
vm-memory-shadow-multiplier-set [<vm-selector>=<vm_selector_value>...]
[multiplier=<float_memory_multiplier>]
Set the shadow memory multiplier for the specified VM.
This is an advanced option which modifies the amount of shadow memory assigned to a hardware-assisted VM. In some specialized application workloads, such as Citrix XenApp, extra shadow memory is required to achieve full performance.
This memory is considered to be an overhead. It is separated from the normal memory calculations for accounting memory to a VM. When this command is invoked, the amount of free XenServer host memory will decrease according to the multiplier, and the HVM_shadow_multiplier field will be updated with the actual value which Xen has assigned to the VM. If there is not enough XenServer host memory free, an error will be returned.
The VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors for more information).
8.4.23.22. vm-migrate
8.4.23.24. vm-reset-powerstate
8.4.23.28. vm-suspend
vm-suspend [<vm-selector>=<vm_selector_value>...]
Suspend the specified VM.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.31. vm-vif-list
vm-vif-list [<vm-selector>=<vm_selector_value>...]
Lists the VIFs from the specified VMs.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism (see VM selectors). Note that the selectors operate on the VM records when filtering, and not on the VIF values. Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.24.1. pool-initialize-wlb
pool-initialize-wlb wlb_url=<wlb_server_address> \
wlb_username=<wlb_server_username> \
wlb_password=<wlb_server_password> \
xenserver_username=<pool_master_username> \
xenserver_password=<pool_master_password>
Starts the workload balancing service on a pool.
8.4.24.2. pool-param-set other-config
Use the pool-param-set other-config command to specify the timeout when communicating with the WLB server. All requests are serialized, and the timeout covers the time from a request being queued to its response being completed. In other words, slow calls cause subsequent ones to be slow. Defaults to 30 seconds if unspecified or unparseable.
xe pool-param-set other-config:wlb_timeout=<0.01> \
uuid=<315688af-5741-cc4d-9046-3b9cea716f69>
8.4.24.3. host-retrieve-wlb-evacuate-recommendations
host-retrieve-wlb-evacuate-recommendations uuid=<host_uuid>
Returns the evacuation recommendations for a host, and a reference to the UUID of the recommendations object.
8.4.24.4. vm-retrieve-wlb-recommendations
Returns the workload balancing recommendations for the selected VM. The simplest way to select the VM on which the operation is to be performed is by supplying the argument vm=<name_or_uuid>. VMs can also be specified by filtering the full list of VMs on the values of fields. For example, specifying power-state=halted selects all VMs whose power-state is halted. Where multiple VMs are matching, specify the option --multiple to perform the operation. The full list of fields that can be matched can be obtained by the command xe vm-list params=all. If no parameters to select VMs are given, the operation will be performed on all VMs.
8.4.24.5. pool-deconfigure-wlb
Permanently deletes all workload balancing configuration.
8.4.24.6. pool-retrieve-wlb-configuration
Prints all workload balancing configuration to standard out.
8.4.24.7. pool-retrieve-wlb-recommendations
Prints all workload balancing recommendations to standard out.
8.4.24.8. pool-retrieve-wlb-report
Gets a WLB report of the specified type and saves it to the specified file. The available reports are:
pool_health
host_health_history
optimization_performance_history
pool_health_history
vm_movement_history
vm_performance_history
Example usage for each report type is shown below. The utcoffset parameter specifies the number of hours ahead of or behind UTC for your time zone. The start and end parameters specify the number of hours to report about. For example, specifying start=-3 and end=0 will cause WLB to report on the last 3 hours' activity.
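The hour offsets can be turned into a concrete UTC window with date(1); this sketch (GNU date assumed) only illustrates what start=-3 end=0 means and is not part of the WLB commands.

```shell
# Print the UTC window described by a start/end pair of hour offsets.
report_window() {
    date -u -d "$1 hours" '+from %Y-%m-%d %H:00 UTC'
    date -u -d "$2 hours" '+to %Y-%m-%d %H:00 UTC'
}

report_window -3 0   # the last 3 hours' activity
```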
xe pool-retrieve-wlb-report report=pool_health \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</pool_health.txt>
xe pool-retrieve-wlb-report report=host_health_history \
hostid=<e26685cd-1789-4f90-8e47-a4fd0509b4a4> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</host_health_history.txt>
xe pool-retrieve-wlb-report report=optimization_performance_history \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</optimization_performance_history.txt>
xe pool-retrieve-wlb-report report=pool_health_history \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</pool_health_history.txt>
xe pool-retrieve-wlb-report report=vm_movement_history \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<-5> \
start=<-5> \
end=<0> \
filename=</vm_movement_history.txt>
xe pool-retrieve-wlb-report report=vm_performance_history \
hostid=<e26685cd-1789-4f90-8e47-a4fd0509b4a4> \
utcoffset=<-5> \
start=<-3> \
end=<0> \
filename=</vm_performance_history.txt>
Chapter 9. Troubleshooting
Table of Contents
9.1. XenServer host logs
9.1.1. Sending host log messages to a central server
9.2. XenCenter logs
9.3. Troubleshooting connections between XenCenter and the XenServer host
If you experience odd behavior, application crashes, or have other issues with a XenServer host, this chapter is meant to help you solve the problem if possible and, failing that, describes where the application logs are located and other information that can help your Citrix Solution Provider and Citrix track and resolve the issue.
Troubleshooting of installation issues is covered in the XenServer Installation Guide. Troubleshooting of Virtual Machine issues is covered in the XenServer Virtual Machine Installation Guide.
Important
We recommend that you follow the troubleshooting information in this chapter solely under the guidance of your Citrix Solution Provider or Citrix Support.
Citrix provides two forms of support: you can receive free self-help support via the Support site, or you may purchase our Support Services and directly submit requests by filing an online Support Case. Our free web-based resources include product documentation, a Knowledge Base, and discussion forums.
Caution
It is possible that sensitive information might be written into the XenServer host logs.
By default, the server logs report only errors and warnings. If you need to see more detailed information, you can enable more verbose logging. To do so, use the host-loglevel-set command:
host-loglevel-set log-level=level
where level can be 0, 1, 2, 3, or 4, where 0 is the most verbose and 4 is the least verbose.
Log files greater than 5 MB are rotated, keeping 4 revisions. The logrotate command is run hourly.
1. Set the syslog_destination parameter to the hostname or IP address of the remote server where you want the logs to be written:
xe host-param-set uuid=<xenserver_host_uuid> logging:syslog_destination=<hostname>
2. Issue the command:
xe host-syslog-reconfigure uuid=<xenserver_host_uuid>
to enforce the change. (You can also execute this command remotely by specifying the host parameter.)
XenCenter log files are stored in:
%userprofile%\AppData\Citrix\XenCenter\logs\XenCenter.log
If XenCenter is installed on Windows Vista, the path is
%userprofile%\AppData\Roaming\Citrix\XenCenter\logs\XenCenter.log
To quickly locate the XenCenter log files, for example, when you want to open or email the log file, click on View Application Log Files in the XenCenter Help menu.
© 1999-2008 Citrix Systems, Inc. All rights reserved.