How many iSCSI targets will ESX support? 8 for ESX 3.0.1 (64 for 3.5).
What is the difference between using the VI Client to connect to VirtualCenter and connecting directly to the ESX server itself?
When you connect to VirtualCenter, you manage the ESX server via vpxa (the agent on the ESX server). Vpxa then passes those requests to hostd (the management service on the ESX server).
When you connect to the ESX server directly, you connect to hostd and bypass vpxa. You can extend this to a troubleshooting case where connecting to ESX shows one thing and connecting to VirtualCenter shows another.
In that case the problem is most likely that hostd and vpxa are out of sync; "service vmware-vpxa restart"
should take care of it.
The default partitions in VMware ESX are / (5 GB), /boot (100 MB), swap (544 MB), /vmkcore (100 MB), and /vmfs.
Standard: production-oriented features; all the add-on licenses can be configured with this
edition.
Unlimited maximum number of VMs; SAN, iSCSI, NAS, and vSMP support; VCB add-on at additional cost.
VMware virtual machine files are .vmdk (virtual disk), .vmx (configuration), .nvram (BIOS file), and log files.
The ESX Server hypervisor offers basic partitioning of server resources; however, it also acts as the
foundation for virtual infrastructure software, enabling VMotion, DRS, and so forth, which are keys to the dynamic,
automated datacenter.
Host agent: on each managed host, software that collects, communicates, and executes the actions
received through the VI Client. It is installed as part of the ESX Server installation.
VirtualCenter agent: on each managed host, software that collects, communicates, and executes
the actions received from the VirtualCenter server. The VirtualCenter agent is installed the first time
any host is added to the VirtualCenter inventory.
ESX Server installation requirements: 1500 MHz Intel or AMD CPU, 1 GB memory minimum (up to 256 GB), 4 GB hard drive space.
The configuration file that manages the mapping of service console file systems to mount points is
/etc/fstab
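As an illustration only (the devices and sizes below are hypothetical, not taken from a real ESX host), a service console /etc/fstab maps each partition to its mount point one line at a time:

```
# <device>    <mount point>  <fs type>  <options>  <dump> <fsck>
/dev/sda1     /boot          ext3       defaults   1 2
/dev/sda2     /              ext3       defaults   1 1
/dev/sda3     swap           swap       defaults   0 0
```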
----
ESX mount points at the time of installation
--------------
/dev/cciss/c0d0 is treated as local SCSI storage
----
The VI Client provides direct access to an ESX server for configuration and virtual machine management.
The VI Client is also used to access VirtualCenter, which provides management, configuration, and monitoring
of all ESX servers and their virtual machines within the virtual infrastructure environment. However, when
using the VI Client to connect directly to the ESX server, no management of VirtualCenter features is
possible.
E.g., you cannot configure and administer VMware DRS or VMware HA.
VMware license mode: default 60-day trial. After 60 days you can create VMs, but you cannot power
on VMs.
---
The license types are Foundation, Standard, and Enterprise.
Foundation license: VMFS, Virtual SMP, VirtualCenter agent, VMware Update Manager, VCB.
VI Standard license: Foundation license + HA feature.
Enterprise: Foundation license + Standard license + VMotion, Storage VMotion, and VMware
DRS.
By default the first service console network connection is always named Service Console. It is always on
vSwitch0, and the switch always connects to vmnic0.
--------
To gather VMware diagnostic information, run the script vm-support from the service console.
If you generate the diagnostic information, it is stored in a vmware-virtualcenter-
support-date@time folder.
The folder contains viclient-support, which holds VI Client log files.
Another file is esx-support-date@time.tgz, a compressed archive containing ESX server
diagnostic information.
----
You cannot have two virtual switches mapped to one physical NIC.
You can map two or more physical NICs to one virtual switch.
A virtual switch is used to give the service console access to a management LAN.
A virtual switch can have 1016 ports, with 8 ports reserved for management purposes, for a total of 1024.
Multiple service console connections can be created only if they are configured on different networks. In
addition, only a single service console gateway IP address can be defined.
---------------
A VMkernel port allows the use of iSCSI and NAS-based networks. A VMkernel port is required for VMotion.
It requires a network label, an optional VLAN ID, and IP settings.
Multiple VMkernel connections can be configured only if they are on different
networks, and only a single VMkernel gateway IP address can be defined.
----------
A network label
VLAN ID (optional)
-------
esxcfg-nics -l
-------------
Security
Traffic shaping
NIC teaming
-----------
Network security policy modes
Promiscuous mode: when set to Reject, placing a guest adapter in promiscuous mode has no effect on which
frames are received by the adapter.
MAC address changes: when set to Reject, if the guest attempts to change the MAC address assigned to
the virtual NIC, it stops receiving frames.
Forged transmits: when set to Reject, drops any frames the guest sends where the source
address field contains a MAC address other than the assigned virtual NIC MAC address (default:
Accept).
---------
Storage paths are named vmhba<adapter>:<target>:<LUN>:<partition>, e.g. vmhba0:1:1:3
-----------
iSCSI uses CHAP authentication.
The VMware license port is 27000.
After changes are made at the command line, restart the hostd daemon (service mgmt-vmware restart) for the changes to take effect.
----------------
Log files should be used only when you are having trouble with a virtual machine.
VMDK files – VMDK files are the actual virtual hard drive for the virtual guest operating system (virtual
machine / VM). You can create either dynamic or fixed virtual disks. With dynamic disks, the disk starts
small and grows as the disk inside the guest OS grows. With fixed disks, the virtual disk and guest OS
disk start out at the same (large) size. For more information on monolithic vs. split disks, see the
comparison from sanbarrow.com.
VMEM – A VMEM file is a backup of the virtual machine’s paging file. It will only appear if the virtual
machine is running, or if it has crashed.
VMSN & VMSD files – these files are used for VMware snapshots. A VMSN file is used to store the
exact state of the virtual machine when the snapshot was taken. Using this snapshot, you can then
restore your machine to the same state as when the snapshot was taken. A VMSD file stores
information about snapshots (metadata). You’ll notice that the names of these files match the names
of the snapshots.
NVRAM files – these files are the BIOS for the virtual machine. The VM must know how many hard
drives it has and other common BIOS settings. The NVRAM file is where that BIOS information is
stored.
VMX files – a VMX file is the primary configuration file for a virtual machine. When you create a new
virtual machine and answer questions about the operating system, disk sizes, and networking, those
answers are stored in this file. A VMX file is actually a simple text file that can be edited with Notepad,
e.g. the "Windows XP Professional.vmx" file for that VM.
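Because a VMX file is just `key = "value"` text, it is easy to inspect programmatically. A minimal sketch (the sample keys below are illustrative examples of common VMX entries, not an authoritative schema):

```python
def parse_vmx(text):
    """Parse simple key = "value" lines from .vmx text into a dict."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip().strip('"')
    return config

sample = '''
displayName = "Windows XP Professional"
memsize = "512"
scsi0:0.fileName = "Windows XP Professional.vmdk"
'''
cfg = parse_vmx(sample)
print(cfg["displayName"])  # Windows XP Professional
print(cfg["memsize"])      # 512
```

This is only a reader for the simple one-line entries; a real VMX file may contain entries this sketch does not handle.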
-------------
We can create a VM in several ways:
1. From scratch
2. Deploy from template
3. Clone
4. P2V
5. From an ISO file
6. From an existing .vmx file
Maximum vCPUs per VM is 4 (Virtual SMP).
A bitmap file is created, and users' ongoing changes are tracked in the bitmap file.
-----------------------------------
DRS
DRS will balance the workload across the resources you presented to the cluster. It is an essential
component of any successful ESX implementation.
With VMware ESX 3.x and VirtualCenter 2.x, it's possible to configure VirtualCenter to manage the
access to the resources automatically, partially, or manually by an administrator.
This option is particularly useful for setting an ESX server into maintenance mode. Maintenance mode
is a good environment to perform tasks such as scanning for new storage area network (SAN) disks,
reconfiguring the host operating system's networking or shutting down the server for maintenance.
Since virtual machines can't be run during maintenance mode, the virtual machines need to be
relocated to other host servers. Commonly, administrators will configure the ESX cluster to fully
automate the rules for the DRS settings. This allows VirtualCenter to take action based on workload
statistics, available resources, and available host servers.
An important point to keep in mind is that DRS works in conjunction with any established resource
pools defined in the VirtualCenter configuration. Poor resource pool configuration (such as using
unlimited options) can cause DRS to make unnecessary performance adjustments. If you truly need to
use unlimited resources within a resource pool, the best practice is isolation: a separate ESX cluster
with a limited number of ESX hosts that share a single resource pool, where the virtual machines that
require unlimited resources are allowed to operate.
Sharing unlimited-setting resource pools with limited-setting resource pools within the same cluster
can cause DRS to make unnecessary performance adjustments. DRS can compensate for this
scenario, but it may do so by bypassing any resource provisioning and planning previously
established.
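The balancing idea behind DRS can be pictured as greedy placement: repeatedly move a VM from the most loaded host to the least loaded one while the move narrows the spread. This toy model (host names, VM names, and loads are invented for illustration) ignores resource pools, affinity rules, and migration cost:

```python
def rebalance(hosts, threshold=10):
    """hosts: dict host -> {vm: load}. Greedily move VMs from the busiest
    host to the idlest one until the load spread is within threshold."""
    moves = []
    def total(h):
        return sum(hosts[h].values())
    while True:
        busiest = max(hosts, key=total)
        idlest = min(hosts, key=total)
        spread = total(busiest) - total(idlest)
        if spread <= threshold or not hosts[busiest]:
            break
        # only moves that strictly narrow the spread are candidates
        candidates = [(vm, l) for vm, l in hosts[busiest].items() if 2 * l < spread]
        if not candidates:
            break
        vm, load = max(candidates, key=lambda p: p[1])  # biggest helpful VM
        del hosts[busiest][vm]
        hosts[idlest][vm] = load
        moves.append((vm, busiest, idlest))
    return moves

hosts = {"esx-a": {"vm1": 40, "vm2": 30, "vm3": 20}, "esx-b": {"vm4": 10}}
print(rebalance(hosts))  # [('vm2', 'esx-a', 'esx-b')]
```

Real DRS evaluates many more signals (entitlements, reservations, history), but the loop above captures the core "move load downhill until balanced" behavior.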
----------------------
The basic concept of VMotion is that ESX will move a virtual machine while it is running to another
ESX host with the move being transparent to the virtual machine.
Not all virtual machines can be moved. Certain situations, such as optical image binding to an image
file, prevent a virtual machine from migrating. With VMotion enabled, an active virtual machine can be
moved automatically or manually from one ESX host to another. An automatic situation would be as
described earlier when a DRS cluster is configured for full automation. When the cluster goes into
maintenance mode, the virtual machines are moved to another ESX host by VMotion. Should the DRS
cluster be configured for all manual operations, the migration via VMotion is approved within the
Virtual Infrastructure Client, then VMotion proceeds with the moves.
VMware ESX 3.5 introduces the highly anticipated Storage VMotion. Should your shared storage need
to be brought offline for maintenance, Storage VMotion can migrate an active virtual machine to
another storage location. This migration will take longer, as the geometry of the virtual machine's
storage is copied to the new storage location. Because this is not a storage solution, the traffic is
managed through the VMotion network interface.
Points to consider
One might assume that with the combined use of DRS and VMotion, all bases are covered. Well,
not entirely. There are a few considerations that you need to be aware of so that you know what DRS
and VMotion can and cannot do for you.
VMotion does not give an absolute zero gap of connectivity during a migration. In my experiences the
drop in connectivity via ping is usually limited to one ping from a client or a minuscule increase in ping
time on the actual virtual machine. Most situations will not notice the change and reconnect over the
network during a VMotion migration. There also is a slight increase in memory usage and on larger
virtual machines this may cause a warning light on RAM usage that usually clears independently.
Some virtual machines may fail to migrate, whether by an automatic VMotion task or when invoked manually.
This is generally caused by obsolete virtual machines, CD-ROM binding or other reasons that may not
be intuitive. In one migration failure I experienced recently, the Virtual Infrastructure Client did not
provide any information other than that the operation timed out. The VirtualCenter server had no
information related to the migration task in the local logs; the only trace was in the database.
Identification of your risks is the most important pre-implementation task you can do with DRS and
VMotion. So what can you do to identify your risks? Here are a couple of easy tasks:
Schedule VMotion for all systems to keep them moving across hosts.
Regularly put ESX hosts in and then exit maintenance mode.
Do not leave mounted CD-ROM media on virtual machines (datastore/ISO file or host device options).
Keep virtual machines up to date with VMware tools and virtual machine versioning.
Monitor the VPX_EVENT table in your ESX database for the EVENT_TYPE =
vim.event.VmFailedMigrateEvent
All in all, DRS and VMotion are solid technologies. Anomalies can happen, and the risks should be
identified and put into your regular monitoring for visibility.
Now that VMotion is enabled on two or more hosts, when should it be used? There are two primary
reasons to use VMotion: to balance the load on the physical ESX servers and eliminate the need to
take a service offline in order to perform maintenance on the server.
VI3 balances its load by using a new feature called DRS. DRS is included in the VI3 Enterprise edition
along with VMotion. This is because DRS uses VMotion to balance the load of an ESX cluster in real
time between all of the servers involved in the cluster. For information on how to configure DRS see
page 95 of the VMware VI3 Resource Management Guide. Once DRS is properly configured it will
constantly be evaluating how best to distribute the load of running VMs amongst all of the host servers
involved in the DRS-enabled cluster. If DRS decides that a particular VM would be better suited to run
on a different host then it will utilize VMotion to seamlessly migrate the VM over to the other host.
While DRS migrates VMs here and there with VMotion, it is also possible to migrate all of the VMs off
of one host server (resources permitting) and onto another. This is accomplished by putting a server
into "maintenance mode." When a server is put into maintenance mode, VMotion will be used to
migrate all of the running VMs off it onto another server. This way it is possible to bring the first server
offline to perform physical maintenance on it without impacting the services that it provides.
As stated above, VMotion is the process that VMware has invented to migrate, or move, a virtual
machine that is powered on from one host server to another host server without the VM incurring
downtime. This is known as a "hot-migration." How does this hot-migration technology that VMware
has dubbed VMotion work? Well, as with everything, in a series of steps:
1. A request is made that VM-A should be migrated (VMotioned) from ESX-A to ESX-B.
2. VM-A's memory is pre-copied from ESX-A to ESX-B while ongoing changes are written to a memory
bitmap on ESX-A.
3. VM-A is quiesced on ESX-A and the memory bitmap is copied to ESX-B.
4. VM-A is started on ESX-B, and all access to VM-A is now directed to the copy running on ESX-B.
5. The rest of VM-A's memory is copied from ESX-A on demand, when applications on VM-A on ESX-B
attempt to access memory that has not yet been transferred.
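The pre-copy phase of the steps above can be simulated: copy all pages once, then repeatedly re-copy only the pages dirtied during the previous pass, until the dirty set is small enough to stop the VM and transfer the remainder. The page counts and the dirtying model here are invented for illustration:

```python
def precopy_migration(num_pages, dirty_fn, stop_threshold=8, max_rounds=10):
    """Simulate VMotion-style iterative pre-copy.
    dirty_fn(round) -> set of page numbers dirtied during that copy round."""
    transferred = 0
    to_copy = set(range(num_pages))     # first pass copies everything
    for rnd in range(max_rounds):
        transferred += len(to_copy)
        to_copy = dirty_fn(rnd)         # pages dirtied while copying
        if len(to_copy) <= stop_threshold:
            break
    # final stop-and-copy: VM is suspended, remaining dirty pages moved
    transferred += len(to_copy)
    return transferred, len(to_copy)

# toy workload: each round the guest dirties half as many pages as before
total, final = precopy_migration(1024, lambda r: set(range(1024 >> (r + 1))))
print(total, final)  # 2040 8
```

The key property is that the stop-and-copy set (8 pages here) is tiny compared with total memory, which is why the switchover pause is imperceptible to users.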
For VMotion to be possible, several requirements must be met:
The VM cannot be connected to a CD-ROM or floppy drive that is using an ISO or floppy image stored
on a drive that is local to the host server.
The VM's affinity must not be set, i.e., binding it to physical CPU(s).
The VM must not be clustered with another VM (using a cluster service like the Microsoft Cluster
Service (MSCS)).
The two ESX servers involved must both be using (the same!) shared storage.
The two ESX servers involved must be connected via Gigabit Ethernet (or better).
The two ESX servers involved must have access to the same physical networks.
The two ESX servers involved must have virtual switch port groups that are labeled the same.
The two ESX servers involved must have compatible CPUs. (See support on Intel and AMD).
If any of the above conditions are not met, VMotion is not supported and will not start. The simplest
way to test these conditions is to attempt a manual VMotion event. This is accomplished by right-
clicking a VM in the VI3 client and clicking "Migrate...". The VI3 client will ask to which host this VM
should be migrated. When a host is selected, several validation checks are performed. If any of these
checks fail, the VI3 client will halt the VMotion operation with an error.
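The validation pass can be pictured as a precheck over the requirement list above. All field names in this sketch are invented for illustration; the real VirtualCenter checks are internal and not exposed like this:

```python
def vmotion_precheck(vm, src, dst):
    """Return a list of human-readable reasons a migration would fail.
    vm/src/dst are plain dicts; every key is illustrative, not a real API."""
    errors = []
    if vm.get("local_iso_mounted"):
        errors.append("VM has CD-ROM/floppy image on host-local storage")
    if vm.get("cpu_affinity"):
        errors.append("VM has CPU affinity set")
    if vm.get("ms_cluster"):
        errors.append("VM participates in an MSCS cluster")
    if src["shared_storage"] != dst["shared_storage"]:
        errors.append("hosts do not share the same storage")
    if not src["port_groups"] & dst["port_groups"]:
        errors.append("no common port group labels between hosts")
    if src["cpu_family"] != dst["cpu_family"]:
        errors.append("incompatible CPU families")
    return errors

src = {"shared_storage": "san-1", "port_groups": {"Production"}, "cpu_family": "intel"}
dst = {"shared_storage": "san-1", "port_groups": {"Production"}, "cpu_family": "intel"}
print(vmotion_precheck({"cpu_affinity": True}, src, dst))  # ['VM has CPU affinity set']
```

An empty list means the toy precheck passes; any entry corresponds to one of the requirements in the list above.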
Conclusion
The intent of this article was to provide readers with a solid grasp of what VMotion is and how it can
benefit them. If you have any outstanding questions with regards to VMotion or any VMware
technology please do not hesitate to send them to me via ask the experts.
------------------------------------------------------------------------
What Is VMware VMotion?
VMware® VMotion™ enables the live migration of running virtual machines from one physical server
to another with zero downtime, continuous service availability, and complete transaction integrity.
VMotion allows IT organizations to:
• Continuously and automatically allocate virtual machines within resource pools.
• Improve availability by conducting maintenance without disrupting business operations.
VMotion is a key enabling technology for creating the dynamic, automated, and self-optimizing data
center.
Live migration of a virtual machine from one physical server to another with VMotion is enabled by
three underlying technologies.
First, the entire state of a virtual machine is encapsulated by a set of files stored on shared storage
such as a Fibre Channel or iSCSI Storage Area Network (SAN) or Network Attached Storage (NAS).
VMware's clustered Virtual Machine File System (VMFS) allows multiple installations of ESX Server to
access the same virtual machine files concurrently.
Second, the active memory and precise execution state of the virtual machine is rapidly transferred
over a high-speed network, allowing the virtual machine to instantaneously switch from running on the
source ESX Server to the destination ESX Server. VMotion keeps the transfer period imperceptible to
users by keeping track of ongoing memory transactions in a bitmap. Once the entire memory and
system state has been copied over to the target ESX Server, VMotion suspends the source virtual
machine, copies the bitmap to the target ESX Server, and resumes the virtual machine on the target
ESX Server. This entire process takes less than two seconds on a Gigabit Ethernet network.
Third, the networks being used by the virtual machine are also virtualized by the underlying ESX
Server, ensuring that even after the migration, the virtual machine network identity and network
connections are preserved. VMotion manages the virtual MAC address as part of the process.
Once the destination machine is activated, VMotion pings the network router to ensure that it is aware
of the new physical location of the virtual MAC address. Since the migration of a virtual machine with
VMotion preserves the precise execution state, the network identity, and the active network
connections, the result is zero downtime and no disruption to users.
---------------------------------------------
What is VirtualCenter?
-------------------------------------------------
What is Storage VMotion (SVMotion) and How do you perform a SVMotion using the VI Plugin?
There are at least three ways to perform an SVMotion: from the remote command line, interactively
from the command line, and with the SVMotion VI Client plugin.
Note:
You need to have VMotion configured and working for SVMotion to work. Additionally, there are many
caveats about SVMotion in the ESX 3.5 administrator's guide (page 245) that could cause
SVMotion not to work. One final reminder: SVMotion moves the storage for a VM from a local
datastore on an ESX server to a shared datastore (a SAN) and back. SVMotion will not move the VM
itself, only the storage for a VM.
----------------------------
Hardware-level virtualization – no base operating system license is needed; ESXi installs right on
your hardware (bare-metal installation).
VMFS file system – see advanced feature #2, below.
SAN Support – connectivity to iSCSI and Fibre Channel (FC) SAN storage, including features like boot
from SAN
Local SATA storage support.
64 bit guest OS support.
Network Virtualization – virtual switches, virtual NICs, QoS & port configuration policies, and VLAN.
Enhanced virtual machine performance – virtual machines may perform, in some cases, even better in
a VM than on a physical server because of features like transparent page sharing and nested page
tables.
Virtual SMP – see advanced feature #4, below.
Support for up to 64GB of RAM for VMs, up to 32 logical CPUs and 256GB of RAM on the host.
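Transparent page sharing, mentioned above, saves memory by collapsing identical pages across VMs. A toy model using content hashing (real ESX hashes pages, verifies full contents on a hash hit, and uses copy-on-write; this sketch only counts the potential savings):

```python
import hashlib

def shared_memory_savings(vm_pages):
    """vm_pages: list of per-VM lists of page contents (bytes).
    Return (total_pages, unique_pages) under content-based sharing."""
    seen = set()
    total = 0
    for pages in vm_pages:
        for page in pages:
            total += 1
            seen.add(hashlib.sha1(page).hexdigest())  # identical pages collapse
    return total, len(seen)

# two VMs sharing a zero page, each with one private page
vm_a = [b"\x00" * 4096, b"A" * 4096]
vm_b = [b"\x00" * 4096, b"B" * 4096]
print(shared_memory_savings([vm_a, vm_b]))  # (4, 3)
```

Homogeneous guest OSes share many identical pages (kernel code, zeroed memory), which is why the technique pays off most on consolidated hosts.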
#2 VMFS
VMware’s VMFS was created just for VMware virtualization. Thus, it is the highest performance file
system available to use in virtualizing your enterprise. While VMFS is included with any edition or
package of ESX Server or VI that you choose, VMFS is still listed as a separate product by VMware.
This is because it is so unique.
VMFS is a high-performance cluster file system allowing multiple systems to access the file system at
the same time. VMFS is what gives you a solid platform to perform VMotion and VMHA. With VMFS
you can dynamically increase a volume, use distributed journaling, and add a virtual
disk on the fly.
#3 Virtual SMP
VMware’s Virtual SMP (or VSMP) is the feature that allows a VMware ESX Server to utilize up to 4
physical processors on the host system, simultaneously. Additionally, with VSMP, processing tasks will
be balanced among the various CPUs.
Storage VMotion (or SVMotion) is similar to VMotion in the sense that "something" related to the VM
is moved and there is no downtime to the VM guest and end users. However, with SVMotion the VM
Guest stays on the server that it resides on but the virtual disk for that VM is what moves. Thus, you
could move a VM guest's virtual disks from one ESX server’s local datastore to a shared SAN
datastore (or vice versa) with no downtime for the end users of that VM guest. There are a number of
restrictions on this. To read more technical details on how it works, please see the VMware ESX
Server 3.5 Administrators Guide.
VMware ESXi – the slimmed down (yet fully functional) version of ESX server that has no service
console. By buying ESXi, you get VMFS and virtual SMP only.
VMware Infrastructure Foundation – previously called the starter kit, the Foundation package includes
ESX or ESXi, VMFS, Virtual SMP, VirtualCenter agent, Consolidated Backup, and Update Manager.
VMware Infrastructure Standard – includes ESX or ESXi, VMFS, Virtual SMP, Virtual center agent,
consolidated backup, update manager, and VMware HA.
VMware Infrastructure Enterprise – includes ESX or ESXi, VMFS, Virtual SMP, Virtual center agent,
consolidated backup, update manager, VMware HA, VMotion, Storage VMotion, and DRS.
You should note that Virtual Center is required for some of the more advanced features and it is
purchased separately. Also, there are varying levels of support available for these products. As the
length and the priority of your support package increase, so does the cost.
----------------------------------------
Advantages of VMFS
VMware’s VMFS was created just for VMware virtualization. VMFS is a high performance cluster file
system allowing multiple systems to access the file system at the same time. VMFS is what gives you
the necessary foundation to perform VMotion and VMHA. With VMFS you can dynamically increase a
volume, use distributed journaling, and add a virtual disk on the fly.
------------------
However, all licensed functionality currently operating at the time the license server
becomes unavailable continues to operate as follows:
• All VirtualCenter licensed features continue to operate indefinitely, relying on a cached version of
the license state. This includes not only basic VirtualCenter operation, but licenses for VirtualCenter
add-ons, such as VMotion and DRS.
• For ESX Server licensed features, there is a 14-day grace period during which hosts continue
operation, relying on a cached version of the license state, even across reboots. After the grace period
expires, certain ESX Server operations, such as powering on virtual machines, become unavailable.
---------------
During the ESX Server grace period, when the license server is unavailable, the following operations
are unaffected:
• Virtual machines continue to run. VI Clients can configure and operate virtual machines.
• ESX Server hosts continue to run. You can connect to any ESX Server host in the VirtualCenter
inventory for operation and maintenance. Connections to the VirtualCenter Server remain. VI Clients
can operate and maintain virtual machines from their host even if the VirtualCenter Server connection
is also lost.
The following operations are not possible during the grace period:
• Adding ESX Server hosts to the VirtualCenter inventory. You cannot change VirtualCenter agent
licenses for hosts.
• Adding or removing hosts from a cluster. You cannot change host membership for the current
VMotion, HA, or DRS configuration.
When the grace period has expired, cached license information is no longer stored.
As a result, virtual machines can no longer be powered on. Running virtual machines continue to run
but cannot be rebooted.
When the license server becomes available again, hosts reconnect to the license server.
No rebooting or manual action is required to restore license availability. The grace period timer is reset
whenever the license server becomes available again.
# vmkfstools -R /vmfs/volumes/SAN-storage-2/
Recently VMware added a somewhat useful command-line tool named vmfs-undelete, which exports
metadata to a recovery log file that can restore vmdk block addresses in the event of deletion. It's a
simple tool; at present it's experimental and unsupported, and it is not available on ESXi. The tool of
course demands that you were proactive and ran its backup function in order to use it. I think this
falls well short of what we need here: what if you have no previous backups of the VMFS
configuration? So we really need to know what to look for and how to correct it, and that's exactly why I
created this blog.
----
Log in to the service console as root and execute esxcfg-vmhbadevs to identify which
LUNs are currently seen by the ESX server.
# esxcfg-vmhbadevs
Run the esxcfg-vmhbadevs command with the -m option to map VMFS names to
VMFS UUIDs. Note that the LUN partition numbers are shown in this output. The
hexadecimal values are described later.
# esxcfg-vmhbadevs -m
-------------
Use the vdf -h command to identify disk statistics (Size, Used, Avail, Use%, Mounted
on) for all file system volumes recognized by your ESX host.
List the contents of the /vmfs/volumes directory. The hexadecimal names are
unique VMFS names (UUIDs); the friendly names are the VMFS labels. The labels are
symbolically linked to the VMFS volumes.
ls -l /vmfs/volumes
-----------
Using the Linux device name (obtained using the esxcfg-vmhbadevs command), check
LUNs A, B and C to see if any are partitioned.
If there is no partition table (example a. below), go to step 3. If there is a table (example b.), go
to step 2.
# fdisk -l /dev/sd<?>
------------
1. Format a partitioned LUN using vmkfstools. Use the -C and -S options respectively
to create and label the volume. Using the command below, create a VMFS volume on LUN A.
Ask your instructor if you should use a custom VMFS label name.
# vmkfstools -C vmfs3 -S <LUN-label> vmhba1:0:<LUN#>:1
----------------
Now that the LUN has been partitioned and formatted as a VMFS volume, it can be used as a
datastore. Your ESX host recognizes these new volumes.
vdf -h
---------------------
Use the esxcfg-vmhbadevs command with the -m option to map the VMFS hex
names to SAN LUNs.
# esxcfg-vmhbadevs -m
--------------
It may be helpful to change the label to identify that this VMFS volume is spanned.
Add -spanned to the VMFS label name.
# ln -sf /vmfs/volumes/<VMFS-UUID> /vmfs/volumes/<New-Label-Name>
-------------
In order to remove a span, you must reformat LUN B with a new VMFS volume (because it
was the LUN that was spanned to).
THIS WILL DELETE ALL DATA ON BOTH LUNS IN THE SPAN!
# vmkfstools -C vmfs3 -S <label> vmhba1:0:#:1
------------------------
------------
------
Configure the system to synchronize the hardware clock and the operating system clock each time
the ESX Server host is rebooted.
# nano -w /etc/sysconfig/clock
UTC=true
------------
-------------
Communication between the VI Client and the ESX server requires ports 902 and 903.
Communication between the ESX server and the license server uses ports 27010 (in) and 27000 (out).
vcbMounter is used, among other things, to create the snapshot for the third-party
backup software to access:
vcbMounter -h <VC-IP-address-or-hostname>
-u <VC-user-account>
-p <VC-user-password>
-a <identifier-of-the-VM-to-backup>
-r <directory-on-VCB-proxy-to-put-backup>
-t <backup-type: file-or-fullvm>
------------------
List the different ways to identify your virtual machine. To do this, use the
vcbVmName command:
vcbVmName
-h <VirtualCenter-Server-IP-Address-or-Hostname>
-u <VirtualCenter-Server-user-account>
-p <VirtualCenter-Server-password>
-s ipaddr:<IP-address-of-virtual-machine-to-backup>
-----------------
A VMFS volume can be created on one partition; 256 GB is the maximum size of a virtual disk (with the default block size).
-----------------
vmname.vmdk – the actual virtual hard drive for the virtual guest operating system
vmname-flat.vmdk – preallocated disk data