
Solaris Zones

Zones or LDOMs?
Recently (especially since the SPARC T4 release) I have gotten this question a
couple of times: "We are running/migrating to T2/T3/T4 servers, and are
considering the virtualization possibilities for our setup. What shall we
go for, zones or LDOMs?"
Of course one can't answer this question without talking about the
platform requirements and the reasons to pick the right technologies,
but before we'd go into details, let me get the most important
statement straight:
Zones and LDOMs are not rivalling, but complementary technologies. If
you need kernelspace separation, use ldoms. But run your applications
in zones within those ldoms anyway!
Let's get some terminology clear first:

LDOMs are now called Oracle VM for SPARC. I will use these terms
interchangeably.
Zones started life as project Kevlar, were then named zones, then
marketed as containers; now we are back to zones again.
LDOMs are the HW-Virtualization technology of the SPARC-T (CMT,
ChipMultiThreading, Coolthread, sun4v, etc) server series, it is
their ability to carve up the server into Logical DOMains, running
on a hypervisor that runs in the firmware.
Zones are the featherweight OS-Virtualization technology of
Solaris on all of the platforms (Sparc-T, Sparc-M and x86 too)
Every T server is running ldoms. If you don't partition your box
into domains, you are still running one single large ldom, called
the primary domain, encapsulating the complete server.
Every Solaris 10+ OS installation has one zone, the global zone
(GZ). This is where the [shared] kernel[space] runs, and the
non-global zones (NGZ) are the containers separating applications in
the userspace.

Now, why would you want to run zones?

Container principle: They cleanly separate your applications from
each other, by maintaining for each of them a separate set of Solaris
packages, their dedicated CPU resources, their IP stack, their
filesystems, etc.
Clean architecture: You won't poison your OS installation in the
GZ running on the HW with additional packages/settings. The GZ
manages the resources between the zones, runs the kernel, does
the scheduling, runs the cluster, manages the devices, etc. The
NGZs run the applications.
Flexibility: You can simply detach a zone from the GZ and attach it
to another GZ on another box, including the application. You can
easily clone zones too.
Security: Should a NGZ ever get compromised, the attacker can't
bother the GZ, or applications running in other NGZs.

Resource Management: You can dedicate a guaranteed amount of CPU
shares to a zone (using the FairShareScheduler), but as long as your
CPU pool isn't 100% utilized, every zone can use more than the
amount dedicated to it - that is, you can overcommit your resources.
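For example, guaranteeing a zone 20 CPU shares under FSS can be sketched like this - the zone name and share count are illustrative; zone.cpu-shares is the zone-wide resource control:

```shell
# Guarantee a zone 20 CPU shares under the Fair Share Scheduler
# (zone name "appzone1" and share count are illustrative).
zonecfg -z appzone1
zonecfg:appzone1> set scheduling-class=FSS
zonecfg:appzone1> add rctl
zonecfg:appzone1:rctl> set name=zone.cpu-shares
zonecfg:appzone1:rctl> add value (priv=privileged,limit=20,action=none)
zonecfg:appzone1:rctl> end
zonecfg:appzone1> commit
zonecfg:appzone1> exit
```

The shares only matter under contention; an idle pool lets any zone exceed its guarantee.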

And what are the reasons to run LDOMs?

Kernel level separation:
You might want to run different updates of Solaris 10 within a
box.
You might want to run Solaris 10 and Solaris 11 right next to each
other within a box.
Live migration: You can't live-migrate zones, but you can
live-migrate LDOMs.
Some of your applications might require to run in the GZ, and you
don't like the idea of running applications both in the GZ and its
NGZ at the same time, hence you separate them into ldoms.
You need to reduce the number of vCPUs in a box for licensing
reasons. LDOMs are now recognized by Oracle as hard partitions,
that is, as license boundaries.
You don't want your I/O to depend on a single service domain - you
can build multipath groups of devices between two I/O-providing
service domains.

As you see, these two technologies fulfill different requirements; they
sit at different levels of your operation stack: LDOMs are a HW
virtualization - a host for kernels to run on - and zones are an OS
virtualization, providing containers for your applications to run in:

To give you an idea: run S10 and S11 in LDOMs next to each other within
the same box, and run branded and native zones on top of them.
To summarize:
The question shouldn't be about zones vs. LDOMs. Use zones, they are
your friends. The question is whether you partition your T-SPARC server
into LDOMs below your global zones to run your NGZs in.
Especially with Solaris 11 - with Crossbow, the new network
virtualization technology (which enables all your NGZs to have a
dedicated IP stack), and the possibility to run Solaris 11 native zones
and Solaris 10 branded zones on top of Solaris 11 - you have two quite
powerful technologies to really get your server's worth, and by that I
mean high server utilization. The higher that utilization is, the
more you get for your costs.

Introduction Of Solaris Zones


A zone is a virtualized OS environment created within a single instance of
Solaris 10. Each environment has its own identity, separate from the
underlying hardware. Each environment works independently, as if running
on its own system, making consolidation simple, safe, and secure. In this
article we will discuss Solaris Zones/containers in detail.
Whenever we approach a new topic, three questions arise: what? why?
and how? Let us find the answers...
Advantages of ZONES
Reduce costs by running multiple instances/workloads on the same
system
Better hardware utilization
Reduced infrastructure overhead
Lesser administration costs (admins/workload)
Resource controls
Security isolation
Software package administration
ZONE FEATURES

Granularity: Zones can run on any number of available CPUs and any
amount of available memory.
Isolation: Multiple applications run on the same global zone, isolated
from each other.
Security: Hacking one zone does not compromise applications running in
other zones.
Transparency: Applications do not need to be recompiled to run in zones
(except for some privileged operations).
Virtualization: Hide configuration information from applications.
Memory capping: Manage the memory usage of zones.
Dynamic resource pools: Assign CPUs to zones.
Fair share scheduler: Grant a zone a minimum share of CPU usage.

Key Points:
Depending on our hardware capability, we can create up to 8191
non-global zones.
Each zone has an ID assigned by the system when it is booted; the
global zone is always listed as zone ID 0.
Only the global zone contains a bootable Solaris kernel and is aware
of all devices, file systems and zones.

Types Of ZONES
Zones come in two flavors:

Global Zone
The global zone controls the hardware resources and administers the
non-global zones.
Non-Global Zone
Virtualized Solaris execution environments that look and feel just
like normal standalone servers; also called Local Zones. There are 3
types of local zones.

Types Of Local Zones

Sparse Root Zones

Share binaries with the global zone; also called Native Zones.
/usr, /platform, /sbin and /lib are file systems shared from the global
zone as read-only loopback filesystems.
Very little disk space is sufficient for creating this type of zone.
Creating this type of zone is quick and takes very little time.

Whole Root Zones

Contain a complete copy of the Solaris binaries that are installed in
the global zone.
Approximately 3 GB of space is required for creating this type of zone.

Branded Zones

Support different versions of the Solaris OS. For example, you can
install Solaris 8 or 9 in a branded zone.

ZONE States
The table below shows the flow of zone states.

Configured: Configuration is complete and committed.
Incomplete: Transition state during install or uninstall operations.
Installed: The packages have been successfully installed.
Ready: The virtual platform has been established.
Running: The zone booted successfully and is now running.
Shutting down: The zone is in the process of shutting down - a
temporary state, leading to "Down".
Down: The zone has completed the shutdown process and is down - a
temporary state, leading to "Installed".


Zone Daemons
There are 2 daemons associated with zones.
zoneadmd
The zoneadmd daemon starts whenever a zone needs to be managed.
Each zone has a single instance of zoneadmd (i.e. zoneadmd -z
zonename).
It is started automatically by SMF and stops automatically when
no longer required.
It allocates the zone ID and starts the zsched process.
It sets system-wide resource controls.
It plumbs the virtual network interface.
It mounts any loopback or conventional file systems.
zsched
The zsched process is started by zoneadmd.
zsched's job is to keep track of kernel threads running within the
zone.
It is also known as the zone scheduler.
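Since each managed zone gets its own zoneadmd instance carrying the zone name after -z, the global zone's process list shows which zones are under management. A sketch that pulls the zone names out of such output - run here against illustrative sample lines, since zoneadmd itself exists only on Solaris:

```shell
#!/bin/sh
# Illustrative `ps -ef` lines for zoneadmd (not captured from a real
# system); there is one zoneadmd instance per managed zone.
ps_sample='    root   652     1   0   Jun 27 ?   0:00 zoneadmd -z zone1
    root   781     1   0   Jun 27 ?   0:00 zoneadmd -z zone2'

# The managed zone's name is the argument following the -z flag.
echo "$ps_sample" | awk '{ for (i = 1; i < NF; i++) if ($i == "-z") print $(i + 1) }'
```

On a live global zone you would feed `ps -ef | grep '[z]oneadmd'` into the same awk.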
Frequently Using Zone Commands
zonecfg
Add/Delete/Modify/info zone configuration
# zonecfg -z zone-name: Interactive mode; can be used to remove
properties of the following types: fs, device, rctl, net, attr
# zonecfg -z zone-name commit
# zonecfg -z zone-name create
# zonecfg -z zone-name delete
# zonecfg -z zone-name verify
zoneadm
Change zone states or administer zones
# zoneadm -z zone-name boot
# zoneadm -z zone-name halt
# zoneadm -z zone-name install
# zoneadm -z zone-name ready
# zoneadm -z zone-name reboot
# zoneadm -z zone-name uninstall
# zoneadm -z zone-name verify
zlogin
Log in to a non-global zone from the global zone
# zlogin zone-name
# zlogin -C zone-name (log in to the zone console)
Zone Components

zonepath: Path to the zone root within the global zone's file space.
autoboot: Defines whether the zone should boot automatically.
pool: Associates the zone with a resource pool; multiple zones may
share a pool.
net: Network interface of the zone.
fs: File systems from the zone's /etc/vfstab, automounted file systems
configured within the zone, manually mounted file systems, or ZFS
mounts from within the zone.
dataset: Delegates a ZFS file system to the non-global zone.
inherit-pkg-dir: In a sparse root zone, represents directories
containing packaged software that a non-global zone shares with the
global zone. (Should not be used in a whole root zone.)
device: Devices that should be configured in a non-global zone.
rctl: Zone-wide resource controls such as zone.cpu-shares and
zone.max-lwps.
attr: Zone comments.

Also note the zonecfg subcommands below; these are also important
while configuring a zone.

SUBCOMMANDS

add: Adds the specified resource or component.
cancel: Ends the resource specification and returns to the global
scope without retaining partially specified resources.
commit: Saves the current configuration to disk.
create: Creates a new zone configuration.
delete: Destroys the configuration.
end: Ends the resource specification.
exit: Ends the zonecfg session.
info: Displays information about the configuration of the current
scope.
remove: Removes the specified resource.
revert: Returns to the last state written to disk.
set: Sets the specified property to the specified value.
verify: Verifies the current configuration for correctness.

Let us see how to add the listed zone components using the
"zonecfg" command.
* Set zonepath and autoboot (the zones service
svc:/system/zones:default must also be enabled when we set
autoboot=true)
zonecfg:zone1> set zonepath=/export/home/zone1
zonecfg:zone1> set autoboot=true
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
* In the following example, a file system is added to the non-global zone
bash-3.00# zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/test/mnt
zonecfg:zone1:fs> set special=/dev/vx/dsk/zonedg/vol1
zonecfg:zone1:fs> set raw=/dev/vx/rdsk/zonedg/vol1
zonecfg:zone1:fs> set type=vxfs
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit

zonecfg:zone1> exit
* In the following example, a network interface is added to the non-global zone
zonecfg:zone1> add net
zonecfg:zone1:net> set physical=e1000g0
zonecfg:zone1:net> set address=192.168.10.35
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
* In the following example, a ZFS dataset is added to the non-global
zone
bash-3.00# zonecfg -z zone1
zonecfg:zone1> add dataset
zonecfg:zone1:dataset> set name=zonepool/zone1vol
zonecfg:zone1:dataset> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
* In this example, we specify memory limits. Each limit is optional,
but at least one must be set.
zonecfg:zone1> add capped-memory
zonecfg:zone1:capped-memory> set physical=50m
zonecfg:zone1:capped-memory> set swap=100m
zonecfg:zone1:capped-memory> set locked=30m
zonecfg:zone1:capped-memory> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
* In this example, we assign dedicated CPUs (1-3). We can set the
importance as well.
zonecfg:zone1> add dedicated-cpu
zonecfg:zone1:dedicated-cpu> set ncpus=1-3
zonecfg:zone1:dedicated-cpu> set importance=2
zonecfg:zone1:dedicated-cpu> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
* In this example, we specify a CPU cap of 3.5 CPUs for zone1
zonecfg:zone1> add capped-cpu
zonecfg:zone1:capped-cpu> set ncpus=3.5
zonecfg:zone1:capped-cpu> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
How to Identify Sparse or Whole Root Zones
The first question is how to identify, on a live system, whether a
non-global zone is a sparse or a whole root zone. This post will help
you find those details in handy ways. In theory everyone knows sparse
and whole root zones well - but in practice... please follow the steps.
Here we have 2 non-global zones, but we are not sure which one is
sparse or whole root. Let us identify them.

[root@Sol10 ~]# zoneadm list -cv
  ID NAME      STATUS   PATH                    BRAND   IP
   0 global    running  /                       native  shared
   1 Sol10LZ   running  /export/zones/Sol10LZ   native  shared
   2 Sol10LZ1  running  /export/zones/Sol10LZ1  native  shared
[root@Sol10 ~]#
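For scripting, `zoneadm list -p` prints the same information in machine-parsable, colon-delimited form (zoneid:zonename:state:zonepath:uuid:brand:ip-type on Solaris 10). A sketch using captured sample output, since the command itself exists only on Solaris:

```shell
#!/bin/sh
# Captured sample of `zoneadm list -cp` output from a Solaris 10 box
# (fields: zoneid:zonename:state:zonepath:uuid:brand:ip-type).
sample='0:global:running:/::native:shared
1:Sol10LZ:running:/export/zones/Sol10LZ::native:shared
2:Sol10LZ1:running:/export/zones/Sol10LZ1::native:shared'

# Print "name state" for every non-global zone.
echo "$sample" | awk -F: '$2 != "global" { print $2, $3 }'
```

On a live system, replace the sample with `zoneadm list -cp` itself.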
There is a command called "pkgcond" that allows you to determine the
type of target being operated on (global zone, non-global zone); for
more details, refer to the pkgcond man page. I am executing the command
on one local zone; let us see the result.
[root@Sol10LZ ~]# pkgcond -n is_what
can_add_driver=1
can_remove_driver=1
can_update_driver=1
is_alternative_root=0
is_boot_environment=0
is_diskless_client=0
is_global_zone=0
is_mounted_miniroot=0
is_netinstall_image=0
is_nonglobal_zone=1
is_path_writable=1
is_running_system=0
is_sparse_root_nonglobal_zone=0
is_whole_root_nonglobal_zone=1
[root@Sol10LZ ~]#
You can see that the parameter "is_whole_root_nonglobal_zone=1" states
that this zone is whole root. I am executing the same command on
another LZ:
[root@Sol10LZ1 ~]# pkgcond -n is_what
can_add_driver=1
can_remove_driver=1
can_update_driver=1
is_alternative_root=0
is_boot_environment=0
is_diskless_client=0
is_global_zone=0
is_mounted_miniroot=0
is_netinstall_image=0
is_nonglobal_zone=1
is_path_writable=1
is_running_system=0
is_sparse_root_nonglobal_zone=1
is_whole_root_nonglobal_zone=0
[root@Sol10LZ1 ~]#
You can see that the parameter "is_sparse_root_nonglobal_zone=1" states
that this zone is sparse root. Is there any way to check the same
details from the global zone? Yes, there is; just follow the steps.
[root@Sol10 ~]# zonecfg -z Sol10LZ info
zonename: Sol10LZ
zonepath: /export/zones/Sol10LZ
brand: native

autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class: FSS
ip-type: shared
hostid:
net:
address: 192.168.1.20/24
physical: e1000g0
[root@Sol10 ~]#
The above configuration is for a whole root zone.
[root@Sol10 ~]# zonecfg -z Sol10LZ1 info
zonepath: /export/zones/Sol10LZ1
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
ip-type: shared
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
net:
address: 192.168.1.21/24
physical: e1000g0
[root@Sol10 ~]#
If you find inherit-pkg-dir entries for /lib, /platform, /sbin and
/usr, we can safely say this is a sparse root zone; these file systems
are shared read-only from its global zone. FYI, we still can't identify
the global zone's name from within a non-global zone unless you have
placed a script to find those details, or use "arp -a|grep -i SPLA" -
you might get more than one IP, and one of them is the global zone's;
however, it's hard to pin down. I will post soon about the easiest way
to find the global zone name from a non-global zone.
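The inherit-pkg-dir check can be scripted from the global zone: count the inherit-pkg-dir resources in the `zonecfg info` output - zero means whole root, more than zero means sparse root. A sketch, run here against a captured sample since zonecfg exists only on Solaris:

```shell
#!/bin/sh
# Captured fragment of `zonecfg -z <zone> info` output for a sparse
# root zone; a whole root zone would show no inherit-pkg-dir lines.
cfg='zonepath: /export/zones/Sol10LZ1
brand: native
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /usr'

# Count inherit-pkg-dir resources: >0 means sparse root, 0 whole root.
count=$(echo "$cfg" | grep -c '^inherit-pkg-dir:')
if [ "$count" -gt 0 ]; then
    echo "sparse root"
else
    echo "whole root"
fi
```

On a live global zone, replace the sample with `zonecfg -z <zone> info`.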
Thanks for reading this post; if you have any doubt, please comment
and I will respond.

How to create and configure Solaris 10 zones

Solaris Zones enable software partitioning of the Solaris 10 OS to
support multiple independent, secure OS environments running within the
same OS. Each environment has separate process space, resource
allocation and users. Zones are widely used in production environments,
as they are easy to set up and don't require any special hardware like
LDOMs do.
Zone types
Global zone - Every installed OS acts as a global zone, which is
present by default. All non-global zones can only be installed,
configured and administered from the global zone.
Non-global zone - Non-global zones share the functioning of the kernel
booted under the global zone. All software and other resources are
inherited from the global zone.
Whole root zone (big zone) - Gets its own writable copy of all the
file systems, like /opt and /usr. It takes more disk space.
Sparse root zone (small zone) - File systems like /opt and /usr are
shared from the global zone as loopback file systems (you only have
read-only access to these directories in the non-global zone). It
takes very little disk space.
Branded zones - These are Solaris 8 or Solaris 9 zones on a Solaris 10
global zone.
Configuring a zone with minimal settings
Let us create a new zone with the minimal resources and settings
required to get it up and running. We'll see how to add other resources
like CPU, memory, file systems etc. later in this post. We will be
creating a sparse root zone in this case. To create a whole root zone,
we just have to use create -b instead of just create at the
configuration prompt.
global# mkdir -p /zones/zone01
global# chmod 700 /zones/zone01
global# zonecfg -z zone01
zone01: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone01> create
zonecfg:zone01> set zonepath=/zones/zone01
zonecfg:zone01> set autoboot=true
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
Install and boot the zone
Now install the zone and boot it. Upon booting we can log in to the
console of the zone to configure it.
global# zoneadm -z zone01 verify
global# zoneadm -z zone01 install
global# zoneadm list -icv
  ID NAME     STATUS     PATH           BRAND   IP
   0 global   running    /              native  shared
   - zone01   installed  /zones/zone01  native  shared
global# zoneadm -z zone01 boot
global# zoneadm list -v
  ID NAME     STATUS   PATH           BRAND   IP
   0 global   running  /              native  shared
   1 zone01   running  /zones/zone01  native  shared
global# zlogin -C zone01
global # zlogin zone01
-C here connects you to the console of the zone. This has to be done
only once to get the zone configured with hostname, timezone and
other basic settings.
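Those first-boot questions can also be answered non-interactively by placing a sysidcfg file under the zone's root before the first boot - a sketch with illustrative zonepath and values; the root_password field expects a pre-encrypted hash, left as a placeholder here:

```shell
# In the global zone, before the zone's first boot: drop the answers
# file into the zone's root (zonepath and all values are illustrative).
cat > /zones/zone01/root/etc/sysidcfg <<EOF
system_locale=C
terminal=vt100
network_interface=NONE {
        hostname=zone01
}
security_policy=NONE
name_service=NONE
timezone=US/Pacific
root_password=<pre-encrypted-hash>
EOF
```

With this in place, `zlogin -C` shows the zone booting straight through instead of prompting.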

Resource configuration examples

Below are some of the most commonly used examples of resource
configuration in a zone.

CPU
1. Dedicated CPU
To see the CPU information in the global zone you can use
global# psrinfo -v
global# psrinfo -vp
After you have confirmed the CPUs you want to use, you can add a fixed
number of CPUs to the zone.
zonecfg:zone01> add dedicated-cpu
zonecfg:zone01:dedicated-cpu> set ncpus=1-2
zonecfg:zone01:dedicated-cpu> set importance=10 (optional; default is 1)
zonecfg:zone01:dedicated-cpu> end
Memory
Capped Memory
zonecfg:zone01> add capped-memory
zonecfg:zone01:capped-memory> set physical=50m [max physical memory the
zone can use]
zonecfg:zone01:capped-memory> set swap=100m
zonecfg:zone01:capped-memory> set locked=30m [memory locked for use by
this zone]
zonecfg:zone01:capped-memory> end
File system
a. Loopback FS
zonecfg:zone01> add fs
zonecfg:zone01:fs> set dir=/usr/local
zonecfg:zone01:fs> set special=/opt/zones/my-zone/local
zonecfg:zone01:fs> set type=lofs
zonecfg:zone01:fs> end
Here /usr/local will be readable and writable in the non-global zone.
b. Normal file system
zonecfg:zone01> add fs
zonecfg:zone01:fs> set dir=/data01
zonecfg:zone01:fs> set special=/dev/dsk/c1t1d0s0
zonecfg:zone01:fs> set raw=/dev/rdsk/c1t1d0s0
zonecfg:zone01:fs> set type=ufs
zonecfg:zone01:fs> add options [logging,nosuid] (optional)
zonecfg:zone01:fs> end
ZFS dataset
When we delegate a dataset to a non-global zone, we can do any
operation on that dataset inside the zone without requiring the global
zone to configure it all the time.
zonecfg:zone01> add dataset
zonecfg:zone01:dataset> set name=tank/sales
zonecfg:zone01:dataset> end
Inherit package (sparse root zone only)
Now in case of sparse root zone we can inherit some of the packages
from the global zone.
zonecfg:zone01> add inherit-pkg-dir
zonecfg:zone01:inherit-pkg-dir> set dir=/opt/sfw
zonecfg:zone01:inherit-pkg-dir> end
NOTE: These resources cannot be modified once the zone is installed.

IP
We can either give a non-global zone an exclusive IP using a dedicated
interface, or share an existing interface of the global zone with the
non-global zone. When we configure an exclusive IP, we have to
configure the IP address inside the non-global zone, not during zone
configuration.
a. Exclusive IP
zonecfg:zone01> set ip-type=exclusive
zonecfg:zone01> add net
zonecfg:zone01:net> set physical=hme0
zonecfg:zone01:net> end
NOTE: No need to specify an IP here; you control everything from inside
the non-global zone.
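With an exclusive IP, the persistent network configuration lives inside the zone, in the usual Solaris files - a sketch with illustrative address, router and interface name:

```shell
# Inside the non-global zone (values illustrative): configure hme0
# persistently; the file contents are handed to ifconfig at boot.
echo "192.168.1.2 netmask 255.255.255.0" > /etc/hostname.hme0
echo "192.168.1.1" > /etc/defaultrouter
```

Alternatively, run the matching ifconfig commands by hand for a non-persistent setup.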
b. Shared IP
In this case zone uses a shared interface which is already plumbed and
being used in the global zone.
zonecfg:zone01> add net
zonecfg:zone01:net> set address=192.168.1.2
zonecfg:zone01:net> set physical=hme0
zonecfg:zone01:net> set defrouter=10.0.0.1 [optional]
zonecfg:zone01:net> end
Device
We can also directly assign a physical device, like a disk, to a
non-global zone.
zonecfg:zone01> add device
zonecfg:zone01:device> set match=/dev/rdsk/c0t1d0
zonecfg:zone01:device> end
Comments
In case you want to add a comment, like the function of the non-global
zone or anything else for that matter:
zonecfg:zone01> add attr
zonecfg:zone01:attr> set name=comment
zonecfg:zone01:attr> set type=string
zonecfg:zone01:attr> set value="Hello World. This is my zone"
zonecfg:zone01:attr> end
Other
Other settings like scheduling class of the CPU in the non-global zone
can also be configured from the global zone.
zonecfg:zone01> set limitpriv="default,sys_time"
zonecfg:zone01> set scheduling-class=FSS
Other administrative commands
To reboot a zone:                # zoneadm -z zone reboot
To halt a zone:                  # zoneadm -z zone halt
To uninstall a zone:             # zoneadm -z zone uninstall -F
To delete an uninstalled zone:   # zonecfg -z zone delete -F
Get all configuration info:      # zonecfg -z zone info
Log in to a zone in safe mode:   # zlogin -S zone
prstat on all zones:             # prstat -Z
prstat on a single zone:         # prstat -z zone
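The detach/attach zone migration mentioned earlier works roughly like this - a sketch with illustrative zone name and paths; the -u flag updates the zone's packages to match the new host on attach:

```shell
# On the source host: stop the zone and detach it from its global zone.
zoneadm -z zone01 halt
zoneadm -z zone01 detach

# Move the zonepath to the target host (shared LUN, zfs send, cpio...),
# then on the target host register the configuration and attach:
zonecfg -z zone01 create -a /zones/zone01
zoneadm -z zone01 attach -u    # -u updates packages to match this host
zoneadm -z zone01 boot
```

Without -u, attach refuses to proceed if the package levels of the two hosts differ.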

Examples of adding VxFS, ZFS, SVM, UFS, lofs, raw volumes and disk
devices to non-global zones

Adding a file system or disk device to a non-global zone is an integral
part of creating a zone. We can add different types of file systems,
raw devices and disk devices to a non-global zone. This post describes
some of the most common ways of doing so.
Adding a raw disk device
We can either add a slice or a complete raw disk to the non-global
zone. In the case of a full disk, use the s2 slice; otherwise use any
slice you want to add.
global # zonecfg -z zone01
zonecfg:zone01> add device
zonecfg:zone01:device> set match=/dev/rdsk/c0t0d0s6
zonecfg:zone01:device> end
zonecfg:zone01> commit
zonecfg:zone01> verify
zonecfg:zone01> exit
Adding a VxFS file system
1. Adding a VxFS file system on a VxVM volume
global # zonecfg -z zone01
zonecfg:zone01> add fs
zonecfg:zone01:fs> set type=vxfs
zonecfg:zone01:fs> set special=/dev/vx/dsk/datadg/datavol
zonecfg:zone01:fs> set raw=/dev/vx/rdsk/datadg/datavol
zonecfg:zone01:fs> set dir=/data
zonecfg:zone01:fs> end
zonecfg:zone01> commit
zonecfg:zone01> verify
zonecfg:zone01> exit
2. Adding a VxVM raw volume
global# zonecfg -z zone01
zonecfg:zone01> add device
zonecfg:zone01:device> set match=/dev/vx/rdsk/dg1/vol1
zonecfg:zone01:device> end
zonecfg:zone01> commit
zonecfg:zone01> verify
zonecfg:zone01> exit
Adding UFS file system
1. Adding UFS under SVM
global # zonecfg -z zone01
zonecfg:zone01> add fs
zonecfg:zone01:fs> set dir=/u01
zonecfg:zone01:fs> set special=/dev/md/dsk/d100
zonecfg:zone01:fs> set raw=/dev/md/rdsk/d100
zonecfg:zone01:fs> set type=ufs
zonecfg:zone01:fs> add options [nodevices,logging]
zonecfg:zone01:fs> end
zonecfg:zone01> commit
zonecfg:zone01> verify
zonecfg:zone01> exit
2. Adding UFS on a VxVM volume
We can also create a UFS file system on a VxVM volume as follows.
global # vxassist -g datadg make datavol 1g
global # mkfs -F ufs /dev/vx/rdsk/datadg/datavol
global # mount -F ufs /dev/vx/dsk/datadg/datavol
/zones/zone01/root/data
To add the UFS file system via the zone configuration instead:
global # zonecfg -z zone01
zonecfg:zone01> add fs
zonecfg:zone01:fs> set type=ufs
zonecfg:zone01:fs> set special=/dev/vx/dsk/datadg/datavol
zonecfg:zone01:fs> set raw=/dev/vx/rdsk/datadg/datavol
zonecfg:zone01:fs> set dir=/data
zonecfg:zone01:fs> end
zonecfg:zone01> commit
zonecfg:zone01> verify
zonecfg:zone01> exit
Adding ZFS
1. Adding a ZFS file system to a non-global zone
Make sure the mountpoint property of the ZFS file system being added
to a zone is set to legacy; otherwise it may get assigned to multiple
non-global zones simultaneously.
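Setting the legacy mountpoint is done in the global zone before adding the fs resource - the dataset name here is illustrative:

```shell
# In the global zone: give the dataset a legacy mountpoint so that only
# the zone configuration mounts it (dataset name is illustrative).
zfs set mountpoint=legacy rpool/data
zfs get -H -o value mountpoint rpool/data    # prints: legacy
```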
global # zonecfg -z zone01
zonecfg:zone01> add fs
zonecfg:zone01:fs> set type=zfs
zonecfg:zone01:fs> set special=rpool/data
zonecfg:zone01:fs> set dir=/data
zonecfg:zone01:fs> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
2. Adding a ZFS file system as a loopback file system (lofs) to a
non-global zone:
global # zonecfg -z zone01
zonecfg:zone01> add fs
zonecfg:zone01:fs> set special=rpool/data
zonecfg:zone01:fs> set dir=/data
zonecfg:zone01:fs> set type=lofs
zonecfg:zone01:fs> end
zonecfg:zone01> commit
zonecfg:zone01> verify
zonecfg:zone01> exit
global # mkdir -p /zoneroot/zone01/root/data
global # mount -F lofs rpool/data /zoneroot/zone01/root/data
3. Delegating a dataset to a non-global zone
Here you have complete control of the dataset you delegate to the
non-global zone. For example, you can create your own child datasets
under the delegated dataset, set its properties, etc. The ZFS file
system will be available like a pool in the non-global zone.
global # zonecfg -z zone01
zonecfg:zone01> add dataset
zonecfg:zone01:dataset> set name=rpool/data
zonecfg:zone01:dataset> end
zonecfg:zone01> commit
zonecfg:zone01> verify
zonecfg:zone01> exit
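Once delegated, the dataset is managed entirely from inside the zone - child datasets, properties and so on; the names here are illustrative:

```shell
# Inside zone01, after the delegation (dataset names illustrative):
zfs create rpool/data/db              # create a child dataset locally
zfs set compression=on rpool/data/db  # set properties without the GZ
zfs list -r rpool/data                # the delegated tree is visible
```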
4. Adding ZFS volumes to non-global zones
global # zonecfg -z zone01
zonecfg:zone01> add device
zonecfg:zone01:device> set match=/dev/zvol/dsk/rpool/datavol
zonecfg:zone01:device> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
Adding a CD-ROM to a non-global zone
To add a CD-ROM to the non-global zone:
global # zonecfg -z zone01
zonecfg:zone01> add fs
zonecfg:zone01:fs> set dir=/cdrom
zonecfg:zone01:fs> set special=/cdrom
zonecfg:zone01:fs> set type=lofs
zonecfg:zone01:fs> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit

Procedure for Creating a Sparse Root Zone

In this article we will create a sparse root zone.
Prerequisites:
Zonepath (/export/zones/zone1 - a file system with at least approx. 300 MB)
Interface (e1000g0)
IP address with subnet mask (192.168.10.35 255.255.255.0)
Let us start creating the sparse root zone from the global zone.
bash-3.00# zonecfg -z zone1
zone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
zonecfg:zone1> set autoboot=true
zonecfg:zone1> set zonepath=/export/zones/zone1
zonecfg:zone1> add net
zonecfg:zone1:net> set address=192.168.10.35
zonecfg:zone1:net> set physical=e1000g0
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit

When we set autoboot=true for a zone, the zones service needs to be
enabled; we can check its status using the "svcs" command.
bash-3.00# svcs -a | grep -i /zones
online         22:27:25 svc:/system/zones:default
Another rule is that the zone's home directory permission should be 700.
bash-3.00# chmod 700 /export/zones/zone1
bash-3.00# ls -ld /export/zones/zone1
drwx------   3 root     root          96 Jun 27 01:41 /export/zones/zone1
bash-3.00#
Now the zone is in the configured state.
bash-3.00# zoneadm list -cv
  ID NAME    STATUS      PATH                  BRAND   IP
   0 global  running     /                     native  shared
   - zone1   configured  /export/zones/zone1   native  shared
bash-3.00#

Now we are ready to install the sparse root zone.

bash-3.00#
bash-3.00# zoneadm -z zone1 install
Preparing to install zone <zone1>.
Creating list of files to copy from the global zone.
Copying <5988> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1114> packages on the zone.
Initialized <1114> packages on zone.
Zone <zone1> is initialized.
Installation of <2> packages was skipped.
The file </export/zones/zone1/root/var/sadm/system/logs/install_log>
contains a log of the zone installation.
bash-3.00#
Now the zone state has changed to installed.
bash-3.00# zoneadm list -cv
  ID NAME    STATUS     PATH                  BRAND   IP
   0 global  running    /                     native  shared
   - zone1   installed  /export/zones/zone1   native  shared
bash-3.00#
Now the configuration looks like the below, and the zone is ready to
boot.
bash-3.00# zonecfg -z zone1 info
zonename: zone1
zonepath: /export/zones/zone1
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:

ip-type: shared
hostid:
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
net:
address: 192.168.10.35
physical: e1000g0
defrouter not specified
bash-3.00# zoneadm -z zone1 boot
bash-3.00#
bash-3.00# zoneadm list -cv
  ID NAME    STATUS   PATH                  BRAND   IP
   0 global  running  /                     native  shared
   3 zone1   running  /export/zones/zone1   native  shared
bash-3.00#
Since this is the first time this zone is being booted, some initial
configuration needs to be performed.
For this we need to log in to the zone console using the
"zlogin -C zone1" command.