
Storage management with LVM2, mdadm and device mapper
Introduction
Exercise: Preparing block devices for LVM2 use
Exercise: Creating physical volumes and volume groups
Exercise: Creating logical volumes and file systems
Exercise: Creating and checking ext4 file systems (mkfs/fsck)
Exercise: Mounting file systems at bootup time
Exercise: Creating a snapshot volume
Exercise: Increasing logical volumes and file systems
Exercise: Adding a physical volume to a volume group
Exercise: Removing/replacing a physical volume
Exercise: Setting up a RAID1 device using mdadm
Exercise: Setting up Encryption using dm_crypt and LUKS

Introduction
In the early days, Linux could only be installed on fixed hard disk partitions (primary or logical partitions on PCs), which are usually hard to change
after the fact, especially if Linux had to live alongside another operating system (e.g. Microsoft Windows) on the same hard disk drive. Making
changes to the existing partitioning layout usually involved using proprietary tools like Partition Magic or biting the bullet and re-installing
everything from scratch after changing the partition configuration. Also, it was not possible to create file systems that could span across several
physical devices or to provide redundancy (RAID) or encryption.
With the introduction of the Linux device mapper (DM) and LVM2, the logical volume manager for Linux, several years ago, Linux gained very
powerful and much more flexible support for managing storage. DM provides an abstraction layer on top of the actual storage block devices and
forms the foundation for LVM2, RAID, encryption and other features.
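You can inspect this layer directly with the dmsetup utility. As a small illustrative sketch (the devices listed depend entirely on your
system), the following commands list all active device mapper devices and print their mapping tables, where each table line names a target
type such as linear, snapshot or crypt:

[oracle@oraclelinux6 ~]$ sudo dmsetup ls
[oracle@oraclelinux6 ~]$ sudo dmsetup table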
Linux LVM2 provides features like growing volumes, adding additional block devices and moving volumes between storage devices; a cluster
volume manager supports working with shared storage devices (e.g. SANs). Block devices are arranged as physical volumes that can be grouped
into volume groups. Logical volumes are created within the volume groups, and file systems are created on top of the logical volumes, just
like on a regular disk partition. Volume groups and logical volumes can be named individually, for easy addressing and organizing of storage.
The following illustrates a possible LVM configuration:
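(The original picture is not reproduced here; this text sketch shows the same layering, with purely illustrative device and volume names.)

     /dev/sdb1    /dev/sdc1    /dev/sdd1     <- physical volumes (PVs)
          \            |           /
           +-----------+----------+
           |    volume group (VG) |
           +------+--------+------+
                  |        |
               lv_data   lv_home             <- logical volumes (LVs)
                  |        |
                ext4     ext4                <- file systems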

In addition to logical volume management with LVM2, the Linux kernel supports software-RAID with the MD (multiple devices) driver. MD
organizes disk drives into RAID arrays (providing different RAID levels), including fault management.
This lab session will walk you through the basic uses of LVM2, MD RAID and encryption with dm_crypt device mapper module on the command
line.
To avoid messing up the operating system itself, we created two additional virtual disk drives that will be used for these lab exercises.
These two additional virtual SATA disks should appear as SCSI disk drives /dev/sdb and /dev/sdc in addition to the primary disk drive
containing the operating system (/dev/sda) in the booted guest system.
To verify, check the output of the kernel boot messages:

[oracle@oraclelinux6 ~]$ dmesg | grep "sd "

sd 2:0:0:0: [sda] 20971520 512-byte logical blocks: (10.7 GB/10.0 GiB)
sd 2:0:0:0: [sda] Write Protect is off
sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 3:0:0:0: [sdb] 8388608 512-byte logical blocks: (4.29 GB/4.00 GiB)
sd 3:0:0:0: [sdb] Write Protect is off
sd 3:0:0:0: [sdb] Mode Sense: 00 3a 00 00
sd 3:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 4:0:0:0: [sdc] 8388608 512-byte logical blocks: (4.29 GB/4.00 GiB)
sd 4:0:0:0: [sdc] Write Protect is off
sd 4:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 4:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 4:0:0:0: [sdc] Attached SCSI disk
sd 3:0:0:0: [sdb] Attached SCSI disk
sd 2:0:0:0: [sda] Attached SCSI disk
sd 2:0:0:0: Attached scsi generic sg1 type 0
sd 3:0:0:0: Attached scsi generic sg2 type 0
sd 4:0:0:0: Attached scsi generic sg3 type 0

You can also use the lsscsi command or read the content of the /proc/scsi/scsi file to list all connected SATA/SCSI devices:

[oracle@oraclelinux6 ~]$ lsscsi

[1:0:0:0]  cd/dvd  VBOX  CD-ROM         1.0  /dev/sr0
[2:0:0:0]  disk    ATA   VBOX HARDDISK  1.0  /dev/sda
[3:0:0:0]  disk    ATA   VBOX HARDDISK  1.0  /dev/sdb
[4:0:0:0]  disk    ATA   VBOX HARDDISK  1.0  /dev/sdc

[oracle@oraclelinux6 ~]$ cat /proc/scsi/scsi

Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: VBOX     Model: CD-ROM           Rev: 1.0
  Type:   CD-ROM                           ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: VBOX HARDDISK    Rev: 1.0
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: VBOX HARDDISK    Rev: 1.0
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: VBOX HARDDISK    Rev: 1.0
  Type:   Direct-Access                    ANSI SCSI revision: 05

Now that we have verified that we have two additional disk drives for our experiments, let's get going with the LVM2 configuration.

Exercise: Preparing block devices for LVM2 use


While it's possible to use entire disk drives without any partitioning information with LVM2, it's usually a good idea to create one big primary
partition that spans the entire disk. LVM2 partitions use a dedicated partition ID that makes it easier to identify disk drives that are part of
an LVM2 setup.
We will therefore first partition the two additional disks by creating a large primary partition that spans each entire disk. We'll also choose
Linux LVM (hex code "8e") as the partition ID.
On Linux, you can use various tools like fdisk, cfdisk or parted for that. The following example uses fdisk to create the disk partition:

[oracle@oraclelinux6 ~]$ sudo fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xcd14f5f9.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-522, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-522, default 522): 522

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sdb: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xcd14f5f9

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         522     4192933+  8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Note
Repeat the procedure above to partition the second disk drive (/dev/sdc) in the same way.
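If you prefer a non-interactive approach, parted can create an equivalent layout in one go. A sketch (the lvm partition flag corresponds to the
8e partition ID; adjust the device name as needed):

[oracle@oraclelinux6 ~]$ sudo parted -s /dev/sdc mklabel msdos
[oracle@oraclelinux6 ~]$ sudo parted -s /dev/sdc mkpart primary 1MiB 100%
[oracle@oraclelinux6 ~]$ sudo parted -s /dev/sdc set 1 lvm on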

Exercise: Creating physical volumes and volume groups


Now that we've prepared the block devices, we need to make them known to LVM2 as physical volumes. This is done using the pvcreate
tool. It initializes one or more physical volumes for later use by the Logical Volume Manager. Each volume can be a disk partition, an entire disk,
a meta device (e.g. a RAID array) or a loopback file.
Initialize the two disk drives for use by LVM2:

[oracle@oraclelinux6 ~]$ sudo pvcreate -v /dev/sdb1 /dev/sdc1


Set up physical volume for "/dev/sdb1" with 8385867 available sectors
Zeroing start of device /dev/sdb1
Writing physical volume data to disk "/dev/sdb1"
Physical volume "/dev/sdb1" successfully created
Set up physical volume for "/dev/sdc1" with 8385867 available sectors
Zeroing start of device /dev/sdc1
Writing physical volume data to disk "/dev/sdc1"
Physical volume "/dev/sdc1" successfully created

The -v option makes the output more verbose, so you can see what the command is actually doing. You can use pvdisplay to print all known
physical volumes:

[oracle@oraclelinux6 ~]$ sudo pvdisplay

  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               vg_oraclelinux6
  PV Size               7.51 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              1922
  Free PE               0
  Allocated PE          1922
  PV UUID               VnESLQ-yehh-35Yg-KJ90-l8z4-FhrE-FFjoIr

  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               vg_oraclelinux6
  PV Size               2.00 GiB / not usable 4.73 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              510
  Free PE               0
  Allocated PE          510
  PV UUID               bgCPow-IlA3-3Vkw-ueJv-Lfip-cI12-mfnug8

  "/dev/sdb1" is a new physical volume of "4.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               4.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               EyXQq7-jc1X-tZbX-Btmo-YHKh-06Dt-5JHrc2

  "/dev/sdc1" is a new physical volume of "4.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc1
  VG Name
  PV Size               4.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               ebngT2-fMuj-3PEB-0gGc-VegQ-PLOI-VcIgCR

In the example above, you will notice that the base operating system is also installed on top of LVM2: the second and third partitions of the first
disk drive (/dev/sda2 and /dev/sda3) belong to the volume group vg_oraclelinux6.
As an alternative to the above, the pvs command displays all available PVs in a more condensed form:

[oracle@oraclelinux6 ~]$ sudo pvs

  PV         VG              Fmt  Attr PSize PFree
  /dev/sda2  vg_oraclelinux6 lvm2 a--  7.51g     0
  /dev/sda3  vg_oraclelinux6 lvm2 a--  1.99g     0
  /dev/sdb1                  lvm2 a--  4.00g 4.00g
  /dev/sdc1                  lvm2 a--  4.00g 4.00g

We now have two additional physical volumes that we can assign to an existing or a completely new volume group. We will start with using just
one of the two additional physical volumes for the first examples. Later, the second volume will come into play, too.
You can now use the vgcreate command to create a new volume group on the physical volume(s). Space in a volume group is divided into
extents, chunks of space that are allocated at once. The default is 4 MB. The basic syntax is:
vgcreate -v <volume group name> <device>

It's possible to provide more than one physical device here, to create a volume group that spans across multiple physical volumes. Again, the -v
option makes the command's execution a bit more verbose so we can see what's going on. Now let's create a new volume group myvolg on
physical volume /dev/sdb1:

[oracle@oraclelinux6 ~]$ sudo vgcreate -v myvolg /dev/sdb1


Wiping cache of LVM-capable devices
Adding physical volume '/dev/sdb1' to volume group 'myvolg'
Archiving volume group "myvolg" metadata (seqno 0).
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 1).
Volume group "myvolg" successfully created

The command vgdisplay will list all known volume groups in the system. Note how our new volume group is there, too:

[oracle@oraclelinux6 ~]$ sudo vgdisplay

  --- Volume group ---
  VG Name               myvolg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.00 GiB
  PE Size               4.00 MiB
  Total PE              1023
  Alloc PE / Size       0 / 0
  Free  PE / Size       1023 / 4.00 GiB
  VG UUID               Tb30rU-AcHP-Cfvq-2cfH-jMa0-NOF1-DuGMvz

  --- Volume group ---
  VG Name               vg_oraclelinux6
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               9.50 GiB
  PE Size               4.00 MiB
  Total PE              2432
  Alloc PE / Size       2432 / 9.50 GiB
  Free  PE / Size       0 / 0
  VG UUID               0tE3oy-Jylq-PABw-mPQf-Cl9Z-2pqz-zU02su

An alternative short form is using the vgs command, which displays the known volume groups in a more condensed fashion:

[oracle@oraclelinux6 ~]$ sudo vgs

  VG              #PV #LV #SN Attr   VSize VFree
  myvolg            1   0   0 wz--n- 4.00g 4.00g
  vg_oraclelinux6   2   2   0 wz--n- 9.50g     0

Using this command you can quickly get an overview of your LVM setup, and it's also particularly suitable for use inside shell scripts. Check
the vgs(8) man page for more details.
In our example you can see that the volume group vg_oraclelinux6 consists of two physical volumes (#PV), contains two logical volumes
(#LV) and has no free space left for additional logical volumes (VFree=0). Our newly created volume group myvolg consists of one physical
volume, contains no logical volumes yet and has 4 gigabytes of free space available.
Storage space in LVM2 is divided into so-called extents; this is the smallest logical unit a volume can be made of. By default, vgcreate
chooses a physical extent size of 4 megabytes, but you can change this by using the --physicalextentsize option, depending on your
storage requirements.
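For example, a volume group with 8 MiB extents could be created like this (just a sketch; the default of 4 MiB is fine for this lab):

[oracle@oraclelinux6 ~]$ sudo vgcreate --physicalextentsize 8M myvolg /dev/sdb1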

Exercise: Creating logical volumes and file systems


Now that we've created our volume group, we can finally go ahead and create our logical volumes inside it. As you have probably guessed by now,
the lvcreate tool creates a logical volume inside an existing volume group. It supports a large number of options; this is the basic usage:

lvcreate --size <size> --name <logical volume name> <volume group name>

The --size option defines the size of the logical volume, by allocating the respective amount of logical extents from the free physical extent pool
of that volume group.
This will create a new logical volume in the given volume group. LVM2 automatically creates the appropriate block device nodes (named dm-x,
where "x" is a sequence number) in the /dev subdirectory. Additionally LVM2 creates named entries for each volume:

/dev/mapper/<volume group name>-<logical volume name>


/dev/<volume group name>/<logical volume name>

These are symbolic links that point to the dm-x device node.
Let's create a logical volume named myvol inside of the myvolg volume group, with a size of 2 gigabytes:

[oracle@oraclelinux6 ~]$ sudo lvcreate -v --size 2g --name myvol myvolg


Setting logging type to disk
Finding volume group "myvolg"
Archiving volume group "myvolg" metadata (seqno 1).
Creating logical volume myvol
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 2).
Found volume group "myvolg"
activation/volume_list configuration setting not defined: Checking only host tags for
myvolg/myvol
Creating myvolg-myvol
Loading myvolg-myvol table (252:2)
Resuming myvolg-myvol (252:2)
Clearing start of logical volume "myvol"
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 2).
Logical volume "myvol" created

Now let's take a look at the existing logical volumes:

[oracle@oraclelinux6 ~]$ sudo lvdisplay

  --- Logical volume ---
  LV Path                /dev/myvolg/myvol
  LV Name                myvol
  VG Name                myvolg
  LV UUID                igrKHo-IdMv-rECU-b3ju-xQde-FV3U-ffabjx
  LV Write Access        read/write
  LV Creation host, time oraclelinux6.localdomain, 2013-01-09 01:09:43 -0800
  LV Status              available
  # open                 0
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2

  --- Logical volume ---
  LV Path                /dev/vg_oraclelinux6/lv_root
  LV Name                lv_root
  VG Name                vg_oraclelinux6
  LV UUID                l4kAq3-ahhE-cw8Y-D0G4-fkml-8W4X-kgEJmd
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                7.53 GiB
  Current LE             1928
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Path                /dev/vg_oraclelinux6/lv_swap
  LV Name                lv_swap
  VG Name                vg_oraclelinux6
  LV UUID                1olLkX-fTZ0-X79l-eDJo-9b6L-pLmp-Pp8dpm
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 2
  LV Size                1.97 GiB
  Current LE             504
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

[oracle@oraclelinux6 ~]$ sudo lvs

  LV      VG              Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  myvol   myvolg          -wi-a--- 2.00g
  lv_root vg_oraclelinux6 -wi-ao-- 7.53g
  lv_swap vg_oraclelinux6 -wi-ao-- 1.97g

The logical volume myvol has been created. LVM2 and the device mapper also created the corresponding block device nodes in /dev for us:

[oracle@oraclelinux6 ~]$ ls -l /dev/mapper/myvolg-myvol /dev/myvolg/myvol /dev/dm-2

brw-rw---- 1 root disk 252, 2 Jan 18 16:19 /dev/dm-2
lrwxrwxrwx 1 root root      7 Jan 18 16:19 /dev/mapper/myvolg-myvol -> ../dm-2
lrwxrwxrwx 1 root root      7 Jan 18 16:19 /dev/myvolg/myvol -> ../dm-2

The free space in our volume group has also been reduced and the number of logical volumes has been updated:

[oracle@oraclelinux6 ~]$ sudo vgs

  VG              #PV #LV #SN Attr   VSize VFree
  myvolg            1   1   0 wz--n- 4.00g 2.00g
  vg_oraclelinux6   2   2   0 wz--n- 9.50g     0

By the way, it's possible to rename existing volume groups or logical volumes, using the vgrename and lvrename commands:

vgrename <old VG name> <new VG name>


lvrename <volume group> <old LV name> <new LV name>
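For example, the following hypothetical commands (not part of this lab's setup) would rename the volume group to datavg and the logical volume
to datalv:

[oracle@oraclelinux6 ~]$ sudo vgrename myvolg datavg
[oracle@oraclelinux6 ~]$ sudo lvrename datavg myvol datalv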

Exercise: Creating and checking ext4 file systems (mkfs/fsck)


Now that we've created a logical volume, we can treat it as any other block device and create a file system on top of it. In this example, we're
using the ext4 file system, but you are free to use any other file system like XFS, Btrfs or ReiserFS, of course.

[oracle@oraclelinux6 ~]$ sudo mkfs.ext4 -v /dev/mapper/myvolg-myvol


mke2fs 1.41.12 (17-May-2010)
fs_types for mke2fs.conf resolution: 'ext4', 'default'
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
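As the last two lines of the output suggest, these periodic checks can be tuned. For instance, to disable both the mount-count and the time-based
check (a sketch; whether you want this depends on your environment):

[oracle@oraclelinux6 ~]$ sudo tune2fs -c 0 -i 0 /dev/mapper/myvolg-myvol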

After the file system has been created, you need to mount it somewhere in your directory structure in order to be able to access it. First you
create a new empty directory that will act as the mount point, then you mount the file system at this location. We'll be creating a new top-level
directory named /myvol in the exercise below:

[oracle@oraclelinux6 ~]$ sudo mkdir -v /myvol
mkdir: created directory `/myvol'
[oracle@oraclelinux6 ~]$ sudo mount -v -t ext4 /dev/myvolg/myvol /myvol
/dev/mapper/myvolg-myvol on /myvol type ext4 (rw)
[oracle@oraclelinux6 ~]$ df -h /myvol
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol  2.0G   67M  1.9G   4% /myvol
[oracle@oraclelinux6 ~]$ mount | grep myvol
/dev/mapper/myvolg-myvol on /myvol type ext4 (rw)

The -h option instructs df to use human readable values for printing the file system size, used and available disk space. Now you can access
the file system and start using it for storing data! Try creating some directories or copying some files into the file system. In the example below, we
use some kernel source files to populate some of the logical volume's disk space.

[oracle@oraclelinux6 ~]$ sudo mkdir -v /myvol/src
mkdir: created directory `/myvol/src'
[oracle@oraclelinux6 ~]$ sudo cp -a /usr/src/kernels/2.6.39-300* /myvol/src/
[oracle@oraclelinux6 ~]$ df -h /myvol
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol  2.0G  145M  1.8G   8% /myvol
[oracle@oraclelinux6 ~]$ ls -l /myvol/src/
total 4
drwxr-xr-x 22 root root 4096 Jan  8 15:35 2.6.39-300.17.2.el6uek.x86_64

Exercise: Mounting file systems at bootup time


Mounting a file system manually like in the exercise above is not a persistent change. If you want to make sure that a file system is mounted at
system bootup time, you need to add an entry to the /etc/fstab file, a plain-text file that defines which file systems should be mounted. The
fstab file lists one file system per line; the various fields are separated by white space (tabs or blanks). The basic format of an fstab entry
looks as follows:
<device>  <mount point>  <file system type>  <mount options>  <dump option>  <fs check order>

See the fstab(5) manual page for a more detailed description of these fields. Open /etc/fstab in your preferred text editor as the root user (e.g.
sudo gedit /etc/fstab or sudo vi /etc/fstab) and add a new line for the file system we created to the end of the list:

#
# /etc/fstab
# Created by anaconda on Thu Jan 12 13:21:03 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_oraclelinux6-lv_root /         btrfs   defaults        1 1
UUID=ed6b5002-07d3-4381-9057-47ee31704c78 /boot ext4  defaults        1 2
/dev/mapper/vg_oraclelinux6-lv_swap swap      swap    defaults        0 0
tmpfs                               /dev/shm  tmpfs   defaults        0 0
devpts                              /dev/pts  devpts  gid=5,mode=620  0 0
sysfs                               /sys      sysfs   defaults        0 0
proc                                /proc     proc    defaults        0 0
/dev/myvolg/myvol                   /myvol    ext4    defaults        0 0

Now the file system will be mounted automatically on the next reboot of your system. Try it out by rebooting your virtual machine at this point!
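You can also verify the new entry without a full reboot: unmount the volume, then ask mount to process all fstab entries that are not currently
mounted:

[oracle@oraclelinux6 ~]$ sudo umount /myvol
[oracle@oraclelinux6 ~]$ sudo mount -a
[oracle@oraclelinux6 ~]$ df -h /myvol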

Exercise: Creating a snapshot volume


One of the nice features of LVM2 is the capability of creating an atomic and instant snapshot copy of a logical volume. This comes in handy if you
want to perform a consistent backup of a file system without having to bring down any services that might need to access these files. Snapshots
can be mounted at a different location and can even be modified.
Space requirements for a snapshot are very low: LVM2 only needs to keep a backup copy of blocks that have been changed since the snapshot
was created. You determine the size of this backing store when creating the snapshot volume. This also implies that you need to have free
space available in your volume group.
However, bear in mind that creating a snapshot comes with a performance penalty: these snapshots are not designed to stay around for a long
time, and keeping multiple snapshots of the same volume significantly degrades the overall I/O performance. LVM snapshots aren't as cheap
as they are for file systems like Btrfs or Solaris' ZFS, so it is strongly recommended to discard an LVM snapshot as soon as it has fulfilled its duty.
The basic command syntax looks like this:
lvcreate --size <size> --snapshot --name <snapshot name> <logical volume>

Example:

[oracle@oraclelinux6 ~]$ sudo lvcreate -v --size 500m --snapshot --name myvol-snapshot myvolg/myvol

Setting logging type to disk
Setting chunksize to 8 sectors.
Finding volume group "myvolg"
Archiving volume group "myvolg" metadata (seqno 2).
Creating logical volume myvol-snapshot
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 3).
Found volume group "myvolg"
activation/volume_list configuration setting not defined: Checking only host tags for
myvolg/myvol-snapshot
Creating myvolg-myvol--snapshot
Loading myvolg-myvol--snapshot table (252:3)
Resuming myvolg-myvol--snapshot (252:3)
Clearing start of logical volume "myvol-snapshot"
Creating logical volume snapshot0
Found volume group "myvolg"
Found volume group "myvolg"
Executing: /sbin/modprobe dm-snapshot
Creating myvolg-myvol-real
Loading myvolg-myvol-real table (252:4)
Loading myvolg-myvol table (252:0)
Creating myvolg-myvol--snapshot-cow
Loading myvolg-myvol--snapshot-cow table (252:5)
Resuming myvolg-myvol--snapshot-cow (252:5)
Loading myvolg-myvol--snapshot table (252:3)
Suspending myvolg-myvol (252:0) with filesystem sync with device flush
Suspending myvolg-myvol-real (252:4) with filesystem sync with device flush
Found volume group "myvolg"
Loading myvolg-myvol--snapshot-cow table (252:5)
Suppressed myvolg-myvol--snapshot-cow (252:5) identical table reload.
Resuming myvolg-myvol-real (252:4)
Resuming myvolg-myvol--snapshot (252:3)
Resuming myvolg-myvol (252:0)
Monitoring myvolg/snapshot0
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 4).
Logical volume "myvol-snapshot" created

We can now go ahead and mount this snapshot like any other volume:

[oracle@oraclelinux6 ~]$ sudo mount -v -t ext4 /dev/myvolg/myvol-snapshot /mnt
/dev/mapper/myvolg-myvol--snapshot on /mnt type ext4 (rw)
[oracle@oraclelinux6 ~]$ ls -l /mnt/src/
total 4
drwxr-xr-x 22 root root 4096 Jan  8 15:35 2.6.39-300.17.2.el6uek.x86_64

As you can see, the snapshot contains the exact same content as the volume it has been taken from. Removing files from the original volume
does not change the snapshot's content:

[oracle@oraclelinux6 ~]$ sudo rm -rf /myvol/src/2.6.39-300.17.2.el6uek.x86_64
[oracle@oraclelinux6 ~]$ ls -l /myvol/src/
total 0
[oracle@oraclelinux6 ~]$ ls -l /mnt/src/
total 4
drwxr-xr-x 22 root root 4096 Jan  8 15:35 2.6.39-300.17.2.el6uek.x86_64

A snapshot is not just an identical read-only copy of a volume; it can be modified as well:

[oracle@oraclelinux6 ~]$ sudo touch /mnt/testfile
[oracle@oraclelinux6 ~]$ ls -l /mnt/testfile
-rw-r--r-- 1 root root 0 Jan  9 01:33 /mnt/testfile
[oracle@oraclelinux6 ~]$ ls -l /myvol/
total 20
drwx------ 2 root root 16384 Jan  9 01:14 lost+found
drwxr-xr-x 2 root root  4096 Jan  9 01:32 src

Note that it's not possible to promote a snapshot volume into becoming a replacement for the original volume. Deleting the underlying volume
automatically erases all related snapshots as well. Also, creating snapshots of snapshots is not supported yet; LVM2 is still evolving.
To remove an LVM snapshot, use the lvremove command, which can also be used to remove regular logical volumes:

lvremove <volume group>/<logical volume>

To remove all logical volumes from a volume group, just provide the volume group name without listing any particular logical volume.
Example:

[oracle@oraclelinux6 ~]$ sudo umount -v /mnt


/dev/mapper/myvolg-myvol--snapshot umounted
[oracle@oraclelinux6 ~]$ sudo lvremove -v myvolg/myvol-snapshot
Using logical volume(s) on command line
Do you really want to remove active logical volume myvol-snapshot? [y/n]: y
Archiving volume group "myvolg" metadata (seqno 4).
Removing snapshot myvol-snapshot
Found volume group "myvolg"
Found volume group "myvolg"
Loading myvolg-myvol table (252:0)
Loading myvolg-myvol--snapshot table (252:3)
Not monitoring myvolg/snapshot0
Suspending myvolg-myvol (252:0) with device flush
Suspending myvolg-myvol--snapshot (252:3) with device flush
Suspending myvolg-myvol-real (252:4) with device flush
Suspending myvolg-myvol--snapshot-cow (252:5) with device flush
Found volume group "myvolg"
Resuming myvolg-myvol--snapshot-cow (252:5)
Resuming myvolg-myvol-real (252:4)
Resuming myvolg-myvol--snapshot (252:3)
Removing myvolg-myvol--snapshot-cow (252:5)
Found volume group "myvolg"
Resuming myvolg-myvol (252:0)
Removing myvolg-myvol-real (252:4)
Found volume group "myvolg"
Removing myvolg-myvol--snapshot (252:3)
Releasing logical volume "myvol-snapshot"
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 6).
Logical volume "myvol-snapshot" successfully removed

Exercise: Increasing logical volumes and file systems


One of the nice features of LVM2 is the capability to dynamically resize logical volumes. This can be done without interfering with any of the other
volumes inside the same volume group, as long as there are free extents available. By shrinking other logical volumes which might not need that
much disk space, you can reclaim free space and allocate it to another volume instead.
You can use the lvextend utility to increase the logical volume and grow the file system on top of it in one step. lvextend invokes the fsadm
utility in the background, to grow or shrink the file system itself. Alternatively, you can perform these steps separately, in case you're using a file
system other than the ones supported by fsadm (currently ext2/3/4, ReiserFS and XFS).
Some file systems may require this operation to be performed while the file system is unmounted (especially shrinking a file system), others can
perform this operation on the fly. For example, Ext4 or XFS on a Linux 2.6 kernel supports on-line resize, so you don't need to unmount and
disrupt ongoing system activity when you need to provide more disk space.
In the following example, we're increasing the size of our existing logical volume and file system by 500 MB in one step, using lvextend with the
-r option (resize file system):

[oracle@oraclelinux6 ~]$ sudo lvs

  LV      VG              Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  myvol   myvolg          -wi-ao-- 2.00g
  lv_root vg_oraclelinux6 -wi-ao-- 7.53g
  lv_swap vg_oraclelinux6 -wi-ao-- 1.97g

[oracle@oraclelinux6 ~]$ df -h /myvol/

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol  2.0G  122M  1.8G   7% /myvol

[oracle@oraclelinux6 ~]$ sudo lvextend -v -L +500M -r myvolg/myvol


Finding volume group myvolg
Executing: fsadm --verbose check /dev/myvolg/myvol
fsadm: "ext4" filesystem found on "/dev/mapper/myvolg-myvol"
fsadm: Skipping filesystem check for device "/dev/mapper/myvolg-myvol" as the filesystem is
mounted on /myvol
fsadm failed: 3
Archiving volume group "myvolg" metadata (seqno 6).
Extending logical volume myvol to 2.49 GiB
Found volume group "myvolg"
Found volume group "myvolg"
Loading myvolg-myvol table (252:0)
Suspending myvolg-myvol (252:0) with device flush
Found volume group "myvolg"
Resuming myvolg-myvol (252:0)
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 7).
Logical volume myvol successfully resized
Executing: fsadm --verbose resize /dev/myvolg/myvol 2609152K
fsadm: "ext4" filesystem found on "/dev/mapper/myvolg-myvol"
fsadm: Device "/dev/mapper/myvolg-myvol" size is 2671771648 bytes
fsadm: Parsing tune2fs -l "/dev/mapper/myvolg-myvol"
fsadm: Resizing filesystem on device "/dev/mapper/myvolg-myvol" to 2671771648 bytes (524288 ->
652288 blocks of 4096 bytes)
fsadm: Executing resize2fs /dev/mapper/myvolg-myvol 652288
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/myvolg-myvol is mounted on /myvol; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/mapper/myvolg-myvol to 652288 (4k) blocks.
The filesystem on /dev/mapper/myvolg-myvol is now 652288 blocks long.

[oracle@oraclelinux6 ~]$ df -h /myvol/

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol  2.5G  122M  2.3G   6% /myvol

[oracle@oraclelinux6 ~]$ sudo lvs

  LV      VG              Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  myvol   myvolg          -wi-ao-- 2.49g
  lv_root vg_oraclelinux6 -wi-ao-- 7.53g
  lv_swap vg_oraclelinux6 -wi-ao-- 1.97g

Exercise: Adding a physical volume to a volume group


But what if we wanted to extend a logical volume's size to more than what the current volume group and underlying physical volumes can
provide? With LVM2, you can easily add additional physical volumes to an existing volume group on the fly, which then allows you to grow the
logical volumes across these new physical volumes.
This is performed by initializing a new physical volume by running pvcreate first (see the previous exercise "Preparing block devices for LVM2
use" for details). Then you use the vgextend command to add the new physical volume to an existing volume group, followed by calling
lvextend to resize the logical volume and the file system on top of it:

[oracle@oraclelinux6 ~]$ sudo lvs

  LV      VG              Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  myvol   myvolg          -wi-ao-- 2.49g
  lv_root vg_oraclelinux6 -wi-ao-- 7.53g
  lv_swap vg_oraclelinux6 -wi-ao-- 1.97g

[oracle@oraclelinux6 ~]$ sudo vgs

  VG              #PV #LV #SN Attr   VSize VFree
  myvolg            1   1   0 wz--n- 4.00g 1.51g
  vg_oraclelinux6   2   2   0 wz--n- 9.50g     0

[oracle@oraclelinux6 ~]$ sudo pvs

  PV         VG              Fmt  Attr PSize PFree
  /dev/sda2  vg_oraclelinux6 lvm2 a--  7.51g     0
  /dev/sda3  vg_oraclelinux6 lvm2 a--  1.99g     0
  /dev/sdb1  myvolg          lvm2 a--  4.00g 1.51g
  /dev/sdc1                  lvm2 a--  4.00g 4.00g

[oracle@oraclelinux6 ~]$ sudo vgextend -v myvolg /dev/sdc1


Checking for volume group "myvolg"
Archiving volume group "myvolg" metadata (seqno 7).
Wiping cache of LVM-capable devices
Adding physical volume '/dev/sdc1' to volume group 'myvolg'
Volume group "myvolg" will be extended by 1 new physical volumes
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 8).
Volume group "myvolg" successfully extended

[oracle@oraclelinux6 ~]$ sudo pvs

  PV         VG              Fmt  Attr PSize PFree
  /dev/sda2  vg_oraclelinux6 lvm2 a--  7.51g     0
  /dev/sda3  vg_oraclelinux6 lvm2 a--  1.99g     0
  /dev/sdb1  myvolg          lvm2 a--  4.00g 1.51g
  /dev/sdc1  myvolg          lvm2 a--  4.00g 4.00g

[oracle@oraclelinux6 ~]$ sudo vgs

  VG              #PV #LV #SN Attr   VSize VFree
  myvolg            2   1   0 wz--n- 7.99g 5.50g
  vg_oraclelinux6   2   2   0 wz--n- 9.50g     0

[oracle@oraclelinux6 ~]$ sudo lvextend -v -L +5G -r myvolg/myvol

Finding volume group myvolg
Executing: fsadm --verbose check /dev/myvolg/myvol
fsadm: "ext4" filesystem found on "/dev/mapper/myvolg-myvol"
fsadm: Skipping filesystem check for device "/dev/mapper/myvolg-myvol" as the filesystem is
mounted on /myvol
fsadm failed: 3
Archiving volume group "myvolg" metadata (seqno 8).
Extending logical volume myvol to 7.49 GiB
Found volume group "myvolg"
Found volume group "myvolg"
Loading myvolg-myvol table (252:0)
Suspending myvolg-myvol (252:0) with device flush
Found volume group "myvolg"
Resuming myvolg-myvol (252:0)
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 9).
Logical volume myvol successfully resized
Executing: fsadm --verbose resize /dev/myvolg/myvol 7852032K
fsadm: "ext4" filesystem found on "/dev/mapper/myvolg-myvol"
fsadm: Device "/dev/mapper/myvolg-myvol" size is 8040480768 bytes
fsadm: Parsing tune2fs -l "/dev/mapper/myvolg-myvol"
fsadm: Resizing filesystem on device "/dev/mapper/myvolg-myvol" to 8040480768 bytes (652288 ->
1963008 blocks of 4096 bytes)
fsadm: Executing resize2fs /dev/mapper/myvolg-myvol 1963008
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/myvolg-myvol is mounted on /myvol; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/mapper/myvolg-myvol to 1963008 (4k) blocks.
The filesystem on /dev/mapper/myvolg-myvol is now 1963008 blocks long.

[oracle@oraclelinux6 ~]$ df -h /myvol

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol  7.4G  124M  6.9G   2% /myvol

Exercise: Removing/replacing a physical volume


Now let's consider that the first disk drive in our myvolg volume group, /dev/sdb1, is showing signs of failure (e.g. when running SMART checks
with the smartctl utility). Currently, the extents on this physical volume are completely allocated by the myvolg volume group; these would
need to be moved to the other physical volume before we could remove the failing one, provided there is enough free space available. LVM2
supports this operation by using the pvmove command:

pvmove <source PV> <destination PV>

The destination PV can be omitted; in this case LVM2 attempts to move all extents to any other available physical volume related to the affected
volume group.
In our case, we have an existing logical volume in the volume group that spans two physical volumes (by allocating physical extents from both),
so we currently would not be able to move it off the first disk. Fortunately, the file system on that logical volume currently does not require that
much disk space; after shrinking it, it can easily fit on the remaining working physical volume. As a first step, we therefore must reduce the file
system and the logical volume, to free up enough allocated extents:

[oracle@oraclelinux6 ~]$ sudo resize2fs -M /dev/myvolg/myvol


resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/myvolg/myvol is mounted on /myvol; on-line resizing required
On-line shrinking from 1963008 to 1476099 not supported.

Doh! While increasing an ext4 file system can be done on the fly, it needs to be unmounted and checked before we can shrink it:

[oracle@oraclelinux6 ~]$ df -h /myvol

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol  7.4G  124M  6.9G   2% /myvol

[oracle@oraclelinux6 ~]$ sudo umount -v /dev/myvolg/myvol
/dev/mapper/myvolg-myvol umounted
[oracle@oraclelinux6 ~]$ sudo resize2fs -M /dev/myvolg/myvol
resize2fs 1.41.12 (17-May-2010)
Please run 'e2fsck -f /dev/myvolg/myvol' first.

[oracle@oraclelinux6 ~]$ sudo e2fsck -f /dev/myvolg/myvol


e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/myvolg/myvol: 12094/491520 files (0.0% non-contiguous), 62351/1963008 blocks

[oracle@oraclelinux6 ~]$ sudo resize2fs -M /dev/myvolg/myvol


resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/myvolg/myvol to 42680 (4k) blocks.
The filesystem on /dev/myvolg/myvol is now 42680 blocks long.

[oracle@oraclelinux6 ~]$ sudo mount -v /myvol/
/dev/mapper/myvolg-myvol on /myvol type ext4 (rw)
[oracle@oraclelinux6 ~]$ df -h /myvol
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol  163M  120M   35M  78% /myvol

As you can see, the file system has now been reduced in size significantly. The -M option instructs the resizing tool to shrink the file system to
the absolute minimum.
The same result could also be achieved by using the fsadm utility instead, which performs the checking, unmounting and resizing of a given file
system automatically and supports the ext2/3/4 file systems as well as ReiserFS and XFS (two other popular journaling file systems for Linux).
However, it does not support shrinking a file system to its minimum possible size; you need to provide an absolute size manually.

[oracle@oraclelinux6 ~]$ sudo fsadm -y resize /dev/myvolg/myvol 200M


resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/myvolg-myvol is mounted on /myvol; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/mapper/myvolg-myvol to 51200 (4k) blocks.
The filesystem on /dev/mapper/myvolg-myvol is now 51200 blocks long.

[oracle@oraclelinux6 ~]$ df -h /myvol

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol  196M  120M   69M  64% /myvol

Now let's reduce the size of the logical volume underneath it. We'll choose to be on the safe side and reduce it from 7.5 GB to 200 MB, so we don't
accidentally damage the file system:

[oracle@oraclelinux6 ~]$ sudo lvs

  LV      VG              Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  myvol   myvolg          -wi-ao-- 7.49g
  lv_root vg_oraclelinux6 -wi-ao-- 7.53g
  lv_swap vg_oraclelinux6 -wi-ao-- 1.97g

[oracle@oraclelinux6 ~]$ sudo lvreduce -v -L 200M myvolg/myvol


Finding volume group myvolg
WARNING: Reducing active and open logical volume to 200.00 MiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)

Do you really want to reduce myvol? [y/n]: y


Archiving volume group "myvolg" metadata (seqno 9).
Reducing logical volume myvol to 200.00 MiB
Found volume group "myvolg"
Found volume group "myvolg"
Loading myvolg-myvol table (252:2)
Suspending myvolg-myvol (252:2) with device flush
Found volume group "myvolg"
Resuming myvolg-myvol (252:2)
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 10).
Logical volume myvol successfully resized

[oracle@oraclelinux6 ~]$ sudo lvs

  LV      VG              Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  myvol   myvolg          -wi-ao-- 200.00m
  lv_root vg_oraclelinux6 -wi-ao--   7.53g
  lv_swap vg_oraclelinux6 -wi-ao--   1.97g

[oracle@oraclelinux6 ~]$ df -h /myvol

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol  196M  120M   69M  64% /myvol

The logical volume has now been reduced in size, so it allocates far fewer extents from the volume group.
Alternatively, lvreduce can take care of reducing the file system on top of it automatically, by invoking fsadm by itself. This combines several of
the steps above into a single call:

[oracle@oraclelinux6 ~]$ sudo lvreduce -v -L 150M -r myvolg/myvol


Finding volume group myvolg
Rounding size to boundary between physical extents: 152.00 MiB
Executing: fsadm --verbose check /dev/myvolg/myvol
fsadm: "ext4" filesystem found on "/dev/mapper/myvolg-myvol"
fsadm: Skipping filesystem check for device "/dev/mapper/myvolg-myvol" as the filesystem is
mounted on /myvol
fsadm failed: 3
Executing: fsadm --verbose resize /dev/myvolg/myvol 155648K
fsadm: "ext4" filesystem found on "/dev/mapper/myvolg-myvol"
fsadm: Device "/dev/mapper/myvolg-myvol" size is 209715200 bytes
fsadm: Parsing tune2fs -l "/dev/mapper/myvolg-myvol"
fsadm: resize2fs needs unmounted filesystem

Do you want to unmount "/myvol"? [Y|n] y


fsadm: Executing umount /myvol
fsadm: Executing fsck -f -p /dev/mapper/myvolg-myvol
fsck from util-linux-ng 2.17.2
/dev/mapper/myvolg-myvol: 12094/16384 files (0.0% non-contiguous), 31636/51200 blocks
fsadm: Resizing filesystem on device "/dev/mapper/myvolg-myvol" to 159383552 bytes (51200 ->
38912 blocks of 4096 bytes)
fsadm: Executing resize2fs /dev/mapper/myvolg-myvol 38912
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mapper/myvolg-myvol to 38912 (4k) blocks.
The filesystem on /dev/mapper/myvolg-myvol is now 38912 blocks long.
fsadm: Remounting unmounted filesystem back
fsadm: Executing mount /dev/mapper/myvolg-myvol /myvol
Archiving volume group "myvolg" metadata (seqno 10).
Reducing logical volume myvol to 152.00 MiB
Found volume group "myvolg"
Found volume group "myvolg"
Loading myvolg-myvol table (252:2)
Suspending myvolg-myvol (252:2) with device flush
Found volume group "myvolg"
Resuming myvolg-myvol (252:2)
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 11).
Logical volume myvol successfully resized

[oracle@oraclelinux6 ~]$ sudo lvs

  LV      VG              Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  myvol   myvolg          -wi-ao-- 152.00m
  lv_root vg_oraclelinux6 -wi-ao--   7.53g
  lv_swap vg_oraclelinux6 -wi-ao--   1.97g

[oracle@oraclelinux6 ~]$ df -h /myvol/

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol  148M  120M   23M  85% /myvol

Now we can proceed with moving the allocated physical extents from the failing disk:


[oracle@oraclelinux6 ~]$ sudo pvmove -v /dev/sdb1


Finding volume group "myvolg"
Archiving volume group "myvolg" metadata (seqno 10).
Creating logical volume pvmove0
Moving 38 extents of logical volume myvolg/myvol
Found volume group "myvolg"
activation/volume_list configuration setting not defined: Checking only host tags for
myvolg/myvol
Updating volume group metadata
Found volume group "myvolg"
Found volume group "myvolg"
Creating myvolg-pvmove0
Loading myvolg-pvmove0 table (252:3)
Loading myvolg-myvol table (252:0)
Suspending myvolg-myvol (252:0) with device flush
Suspending myvolg-pvmove0 (252:3) with device flush
Found volume group "myvolg"
activation/volume_list configuration setting not defined: Checking only host tags for
myvolg/pvmove0
Resuming myvolg-pvmove0 (252:3)
Found volume group "myvolg"
Loading myvolg-pvmove0 table (252:3)
Suppressed myvolg-pvmove0 (252:3) identical table reload.
Resuming myvolg-myvol (252:0)
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 11).
Checking progress before waiting every 15 seconds
/dev/sdb1: Moved: 0.0%
/dev/sdb1: Moved: 100.0%
Found volume group "myvolg"
Found volume group "myvolg"
Loading myvolg-myvol table (252:0)
Loading myvolg-pvmove0 table (252:3)
Suspending myvolg-myvol (252:0) with device flush
Suspending myvolg-pvmove0 (252:3) with device flush
Found volume group "myvolg"
Resuming myvolg-pvmove0 (252:3)
Found volume group "myvolg"
Resuming myvolg-myvol (252:0)
Found volume group "myvolg"
Removing myvolg-pvmove0 (252:3)
Removing temporary pvmove LV
Writing out final volume group after pvmove
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 13).

[oracle@oraclelinux6 ~]$ sudo vgreduce -v myvolg /dev/sdb1


Finding volume group "myvolg"
Using physical volume(s) on command line
Archiving volume group "myvolg" metadata (seqno 13).
Removing "/dev/sdb1" from volume group "myvolg"
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 14).
Removed "/dev/sdb1" from volume group "myvolg"

[oracle@oraclelinux6 ~]$ sudo vgs

  VG              #PV #LV #SN Attr   VSize VFree
  myvolg            1   1   0 wz--n- 4.00g 3.85g
  vg_oraclelinux6   2   2   0 wz--n- 9.50g     0

[oracle@oraclelinux6 ~]$ sudo pvs

  PV         VG              Fmt  Attr PSize PFree
  /dev/sda2  vg_oraclelinux6 lvm2 a--  7.51g     0
  /dev/sda3  vg_oraclelinux6 lvm2 a--  1.99g     0
  /dev/sdb1                  lvm2 a--  4.00g 4.00g
  /dev/sdc1  myvolg          lvm2 a--  4.00g 3.85g

[oracle@oraclelinux6 ~]$ df -h /myvol

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/myvolg-myvol  148M  120M   23M  85% /myvol

The failing physical volume has now been removed from the volume group and can be replaced. The file system in logical volume myvol is still
available and could even be increased in size to make use of the remaining available space in the volume group.
For the sake of time, we skip the step of actually removing and re-adding the virtual disk drive from the virtual machine; let's just assume you
replaced the failing disk drive with a new one.
Once the replacement disk is in place, you can partition it and use pvcreate as outlined in an earlier exercise to make it available to LVM2
again.
Now you can add the physical volume to the volume group again:

[oracle@oraclelinux6 ~]$ sudo vgextend -v myvolg /dev/sdb1


Checking for volume group "myvolg"
Archiving volume group "myvolg" metadata (seqno 14).
Wiping cache of LVM-capable devices
Adding physical volume '/dev/sdb1' to volume group 'myvolg'
Volume group "myvolg" will be extended by 1 new physical volumes
Creating volume group backup "/etc/lvm/backup/myvolg" (seqno 15).
Volume group "myvolg" successfully extended

[oracle@oraclelinux6 ~]$ sudo vgs

  VG              #PV #LV #SN Attr   VSize VFree
  myvolg            2   1   0 wz--n- 7.99g 7.84g
  vg_oraclelinux6   2   2   0 wz--n- 9.50g     0

Exercise: Setting up a RAID1 device using mdadm


As you can see from the example above, LVM2 is capable of managing multiple physical volumes like hard disks. It also supports mirroring and
striping of logical volumes, to provide some redundancy and to increase performance. However, it is not a replacement for a RAID system; this is
better handled by either utilizing a hardware RAID array with a dedicated controller or by using the Linux kernel's built-in software-RAID
functionality. You would then create logical volumes on top of these RAID block devices.
In the following exercise, we'll create a mirrored (RAID1) device consisting of two disk drives, using the mdadm utility and the kernel's multiple
devices driver. This provides some basic redundancy: one of the disk drives can fail without data loss. We'll then add a file system on top of the
RAID device directly. In a production environment, you would probably use LVM2 on top of it instead, to stay more flexible.
First we need to revert the LVM2 configuration from the previous exercises. You can achieve that by either restarting the virtual machine from the
previous snapshot and repartitioning the virtual disks, or by entering the following commands:

[oracle@oraclelinux6 ~]$ sudo umount -v /myvol


/dev/mapper/myvolg-myvol umounted
[oracle@oraclelinux6 ~]$ sudo lvremove myvolg/myvol
Do you really want to remove active logical volume myvol? [y/n]: y
Logical volume "myvol" successfully removed
[oracle@oraclelinux6 ~]$ sudo vgremove myvolg
Volume group "myvolg" successfully removed
[oracle@oraclelinux6 ~]$ sudo pvremove /dev/sdb1 /dev/sdc1
Labels on physical volume "/dev/sdb1" successfully wiped
Labels on physical volume "/dev/sdc1" successfully wiped
[oracle@oraclelinux6 ~]$ sudo fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): t


Selected partition 1
Hex code (type L to list codes): 83
Changed system type of partition 1 to 83 (Linux)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

[oracle@oraclelinux6 ~]$ sudo fdisk /dev/sdc


WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): t


Selected partition 1
Hex code (type L to list codes): 83
Changed system type of partition 1 to 83 (Linux)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

Also don't forget to remove the mount point (rmdir /myvol) and to take out the mount point entry in /etc/fstab!
Now that we have two clean disk drives for testing, let's start with creating a mirrored set out of them. This is done using the mdadm utility, which
is used for building, managing and monitoring Linux MD devices. Check the mdadm(8) manual page for a detailed description of its features and
options.

[oracle@oraclelinux6 ~]$ sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device. If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 4191897K

Continue creating array? y

mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

[oracle@oraclelinux6 ~]$ cat /proc/mdstat


Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
4191897 blocks super 1.2 [2/2] [UU]
[=>...................] resync = 9.5% (401088/4191897) finish=0.9min speed=66848K/sec
unused devices: <none>

The /proc/mdstat file is a useful resource to quickly check the status of your MD RAID devices. In the example above, MD was busy initializing
the RAID1 device. After this initialization phase, the status should look as follows:

[oracle@oraclelinux6 ~]$ cat /proc/mdstat


Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
4191897 blocks super 1.2 [2/2] [UU]
unused devices: <none>
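As a side note, mdadm can also run in the background and send a notification when an array degrades; a minimal sketch (the mail address is an
assumption, adjust as needed):

[oracle@oraclelinux6 ~]$ sudo mdadm --monitor --scan --daemonise --mail=root@localhost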

You can use the mdadm tool to get some more detailed information about the currently configured device:

[oracle@oraclelinux6 ~]$ sudo mdadm --query /dev/md0

/dev/md0: 3.100GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
[oracle@oraclelinux6 ~]$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jan  9 02:49:09 2013
     Raid Level : raid1
     Array Size : 4191897 (4.00 GiB 4.29 GB)
  Used Dev Size : 4191897 (4.00 GiB 4.29 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jan  9 02:49:30 2013
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : oraclelinux6.localdomain:0  (local to host oraclelinux6.localdomain)
           UUID : 78e6f947:bbbcf414:d6916aae:37a1a21f
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

Now that we have a block device, we can create a file system on top of it and put some data into it. Note that we could use LVM on top of this
RAID set, too, but we're sticking to a plain file system on top of the RAID for simplicity.

[oracle@oraclelinux6 ~]$ sudo mkfs.ext4 /dev/md0


mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
262144 inodes, 1047974 blocks
52398 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

[oracle@oraclelinux6 ~]$ sudo mkdir -v /raid
mkdir: created directory `/raid'
[oracle@oraclelinux6 ~]$ sudo mount -v -t ext4 /dev/md0 /raid
/dev/md0 on /raid type ext4 (rw)
[oracle@oraclelinux6 ~]$ df -h /raid
Filesystem  Size  Used Avail Use% Mounted on
/dev/md0    4.0G   72M  3.7G   2% /raid

As you can see, the file system has a capacity of 4 GB, which corresponds to the size of one disk drive. The data is being mirrored to the second
one transparently, in the background.
It's useful to store the RAID configuration in a configuration file named /etc/mdadm.conf; this will help mdadm to assemble existing arrays at
system bootup. You can either copy and adapt the sample configuration file from
/usr/share/doc/mdadm-3.2.1/mdadm.conf-example, or create a very minimalistic one from scratch, using your text editor of choice. Our
example looks as follows:

[oracle@oraclelinux6 ~]$ cat /etc/mdadm.conf


DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 devices=/dev/sdb1,/dev/sdc1
[oracle@oraclelinux6 ~]$ sudo mkinitrd --force /boot/initramfs-`uname -r`.img `uname -r`

See the mdadm.conf(5) manual page for more details about the format of this file. Now that this configuration file exists, it needs to be added to
the initial ramdisk so the RAID array will be properly detected and initialized upon system reboot (see
https://bugzilla.redhat.com/show_bug.cgi?id=606481 for more details on why this is necessary).
Now let's copy some files onto the device, so we have some data for testing:

[oracle@oraclelinux6 ~]$ sudo cp -a /usr/src/kernels/2.6.39-300* /raid
[oracle@oraclelinux6 ~]$ ls -l /raid
total 20
drwxr-xr-x 22 root root  4096 Jan  8 15:35 2.6.39-300.17.2.el6uek.x86_64
drwx------  2 root root 16384 Jan  9 02:53 lost+found
[oracle@oraclelinux6 ~]$ df -h /raid
Filesystem  Size  Used Avail Use% Mounted on
/dev/md0    4.0G  150M  3.6G   4% /raid

So far, our file system behaves like any other file system. Let's provoke a complete disk failure of one of the disk drives, so we can observe how
the MD driver handles this situation.

In VirtualBox, you can only make changes to the storage configuration when the VM has been powered off. So we need to shut down the VM first,
either by running the following command on the command line or by selecting System -> Shut Down... -> Shut Down from the virtual machine's
desktop menu:

[oracle@oraclelinux6 ~]$ sudo poweroff

Now we can detach one of the virtual disk drives from the system and reboot. Click on the VM's settings icon and select the Storage section.
Now right-click on the Disk2.vdi icon and select Remove attachment.

This will detach the disk drive from this virtual machine, to simulate a total failure of the entire disk drive.
Now let's restart the VM and figure out how MD copes with the missing disk drive. After the system has booted up, log in as the oracle user
again and open a Terminal.
Let's take a look at the status of our RAID device:

[oracle@oraclelinux6 ~]$ cat /proc/mdstat


Personalities : [raid1]
md0 : active (auto-read-only) raid1 sdb1[0]
4191897 blocks super 1.2 [2/1] [U_]
unused devices: <none>

The [U_] part indicates that only one of two devices is active, but you need a trained eye to discover this. It's better to look at the output
from mdadm, which is a bit clearer about the degraded state of the device:

[oracle@oraclelinux6 ~]$ sudo mdadm --detail /dev/md0

/dev/md0:
        Version : 1.2
  Creation Time : Wed Jan  9 02:49:09 2013
     Raid Level : raid1
     Array Size : 4191897 (4.00 GiB 4.29 GB)
  Used Dev Size : 4191897 (4.00 GiB 4.29 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Wed Jan  9 02:59:43 2013
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : oraclelinux6.localdomain:0  (local to host oraclelinux6.localdomain)
           UUID : 78e6f947:bbbcf414:d6916aae:37a1a21f
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       0        0        1      removed

Even though the RAID is degraded, our file system is still available:

[oracle@oraclelinux6 ~]$ sudo mount -v -t ext4 /dev/md0 /raid
/dev/md0 on /raid type ext4 (rw)
[oracle@oraclelinux6 ~]$ ls -l /raid
total 20
drwxr-xr-x 22 root root  4096 Jan  8 15:35 2.6.39-300.17.3.el6uek.x86_64
drwx------  2 root root 16384 Jan  9 02:53 lost+found

However, it's a good idea to replace the failed disk drive as soon as possible. In our case, we can simply shut down the VM, re-attach the disk
image and reboot. On a live production system, you would likely be able to hot-swap the disk drive on the fly without any downtime. mdadm supports
these kinds of operations as well (disabling and replacing devices on the fly, rebuilding RAID arrays), but this is out of the scope of this lab
session. This exercise only scratched the surface of what MD is capable of.
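For reference, the basic flow of such an on-line replacement would look roughly like the following sketch (not part of this lab; it assumes the
replacement partition appears as /dev/sdc1 again):

[oracle@oraclelinux6 ~]$ sudo mdadm /dev/md0 --fail /dev/sdc1
[oracle@oraclelinux6 ~]$ sudo mdadm /dev/md0 --remove /dev/sdc1
[oracle@oraclelinux6 ~]$ sudo mdadm /dev/md0 --add /dev/sdc1
[oracle@oraclelinux6 ~]$ cat /proc/mdstat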

Exercise: Setting up Encryption using dm_crypt and LUKS


The Linux device mapper also supports the creation of encrypted block devices using the dm_crypt device driver, which provides strong
protection against data theft in case of physical loss of the hardware. Data on these devices can only be accessed if the appropriate password is
provided at system bootup time. Because the encryption takes place on the underlying block device, it is file system and application agnostic:
any file system (or an LVM2 setup) can be used on top of an encrypted device, even swap space.
LUKS, the Linux Unified Key Setup, is a standard for hard disk encryption. It standardizes a partition header, as well as the format of the bulk
data. LUKS can manage multiple passwords that can be revoked effectively and that are protected against dictionary attacks.
We'll re-use the existing disk drive /dev/sdb1 from our previous exercises for this, so we first have to remove the current RAID configuration
manually (or reboot from a previous snapshot instead).
The following commands will unmount the file system, stop the RAID device and ensure that it's no longer recognized as a RAID volume:

[oracle@oraclelinux6 ~]$ sudo rm -v /etc/mdadm.conf
removed `/etc/mdadm.conf'
[oracle@oraclelinux6 ~]$ sudo umount -v /raid
/dev/md0 umounted
[oracle@oraclelinux6 ~]$ sudo mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[oracle@oraclelinux6 ~]$ sudo mdadm --zero-superblock /dev/sdb1

Now we have an empty device that we can use to store the encrypted volume. This is done using the cryptsetup utility.

The first command initializes the volume and sets an initial key. The -y option asks for the passphrase twice, making sure your password is typed
in correctly. The second command opens the partition and creates the device mapping (in this case /dev/mapper/cryptfs). This is the actual
device that will be used to create a file system on top of it; don't use the real physical device (/dev/sdb1) for this!

[oracle@oraclelinux6 ~]$ sudo cryptsetup -y luksFormat /dev/sdb1


WARNING!
========
This will overwrite data on /dev/sdb1 irrevocably.

Are you sure? (Type uppercase yes): YES


Enter LUKS passphrase: <passphrase>
Verify passphrase: <passphrase>
[oracle@oraclelinux6 ~]$ sudo cryptsetup luksOpen /dev/sdb1 cryptfs
Enter passphrase for /dev/sdb1: <passphrase>

Now let's check the status of our encrypted volume:


[oracle@oraclelinux6 ~]$ sudo cryptsetup status cryptfs
/dev/mapper/cryptfs is active.
  type:    LUKS1
  cipher:  aes-cbc-essiv:sha256
  keysize: 256 bits
  device:  /dev/sdb1
  offset:  4096 sectors
  size:    8381771 sectors
  mode:    read/write
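If you're curious which key slots are in use and which cipher parameters are recorded, you can dump the LUKS header at any time (a read-only
operation):

[oracle@oraclelinux6 ~]$ sudo cryptsetup luksDump /dev/sdb1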

As an additional optional safety measure, we could now write zeros to the new encrypted device. This will force the allocation of data blocks.
Because the zeros are encrypted, this will look like random data to the outside world, making it nearly impossible to track down encrypted data
blocks if someone gains access to the hard disk that contains the encrypted file system. We'll skip this step, as it takes quite some time.
dd if=/dev/zero of=/dev/mapper/cryptfs

Now that we have initialized our encrypted volume, we need to create a filesystem and mount point:

[oracle@oraclelinux6 ~]$ sudo mkfs.ext4 /dev/mapper/cryptfs

mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
262144 inodes, 1047721 blocks
52386 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

[oracle@oraclelinux6 ~]$ sudo mkdir -v /cryptfs
mkdir: created directory `/cryptfs'
[oracle@oraclelinux6 ~]$ sudo mount -v /dev/mapper/cryptfs /cryptfs
mount: you didn't specify a filesystem type for /dev/mapper/cryptfs
       I will try type ext4
/dev/mapper/cryptfs on /cryptfs type ext4 (rw)
[oracle@oraclelinux6 ~]$ df -h /cryptfs/
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cryptfs  4.0G   72M  3.7G   2% /cryptfs
[oracle@oraclelinux6 ~]$ ls -l /cryptfs/
total 16
drwx------ 2 root root 16384 Jan  9 13:16 lost+found

You can now use this file system like any other. The encryption of the data blocks is done in a fully transparent fashion, unnoticed by the file
system or application accessing this data.
As a final step, we need to ensure that the encrypted file system is properly set up and mounted at system bootup time. For this to happen, we
need to create an appropriate entry in the configuration file /etc/crypttab, using our favorite text editor:

[oracle@oraclelinux6 ~]$ cat /etc/crypttab


# <target name> <source device> <key file> <options>
cryptfs /dev/sdb1 none luks

Additionally, we need to add the file system to /etc/fstab for the actual mounting to take place, by adding a line as the following one:

[oracle@oraclelinux6 ~]$ tail -1 /etc/fstab

/dev/mapper/cryptfs  /cryptfs  ext4  defaults  0 0

If you reboot your system now, you will be prompted to enter your passphrase to continue the boot process:
Password for /dev/sdb1 (luks-a7e...):**********

After entering the correct passphrase, the system continues to boot and the file system will be mounted at the given location:
[oracle@oraclelinux6 ~]$ df -h /cryptfs/
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cryptfs  4.0G   72M  3.7G   2% /cryptfs

Now any files that you store in /cryptfs will be protected by the strong encryption of dm_crypt. This also means that your passphrase is an
invaluable asset: if you lose it, you won't be able to access your data anymore! However, using LUKS it's actually possible to create multiple
keys to unlock the volume; this can be handy to provide a recovery key or to allow multiple individuals to access the volume without sharing
the same password. To add a key, use the following command:
[oracle@oraclelinux6 ~]$ sudo cryptsetup luksAddKey /dev/sdb1
Enter any passphrase: <existing passphrase>
Enter new passphrase for key slot: <new passphrase>
Verify passphrase: <new passphrase>

Now you can unlock the volume by either providing the original or the new passphrase.
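Conversely, a passphrase can be retired again with luksRemoveKey; a sketch (cryptsetup will prompt for the passphrase that should be deleted):

[oracle@oraclelinux6 ~]$ sudo cryptsetup luksRemoveKey /dev/sdb1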
