4.1.1.
I have LVM 1 installed and running on my system. How do
I start using LVM 2?
Here are the Quick Start instructions. :)
1. Start by removing any snapshot LVs on the system.
These are not handled by LVM 2 and will prevent the
origin from being activated when LVM 2 comes up.
2. Make sure you have some way of booting the system
other than from your standard boot partition. Have
the LVM 1 tools, standard system tools (mount) and
an LVM 1 compatible kernel on it in case you need to
get back and fix some things.
3. Grab the LVM 2 tools source and the device mapper
source and compile them. You need to install the
device mapper library using "make
install" before compiling the LVM 2 tools.
Also copy the dm/scripts/devmap_mknod.sh script into
/sbin. I recommend only installing the 'lvm' binary
for now so you have access to the LVM 1 tools if you
need them. If you have access to packages for LVM 2
and device-mapper, you can install those instead,
but beware of them overwriting your LVM 1 tool set.
4. Get a device mapper compatible kernel, either built
in or as a kernel module.
5. Figure out where LVM 1 was activated in your startup
scripts. Make sure the device-mapper module is
loaded by that point (if you are using device mapper
as a module) and add '/sbin/devmap_mknod.sh; lvm
vgscan; lvm vgchange -ay' afterward.
6. Install the kernel with device mapper support in it.
7. Reboot. If all goes well, you should be running with
LVM 2.
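The startup addition in step 5 can be sketched as the following
init-script fragment (the module name dm-mod and the /sbin paths
are assumptions; adjust them for your distribution):

```shell
# Init-script fragment to bring up LVM 2 at boot (sketch)
modprobe dm-mod            # only needed if device-mapper is a module
/sbin/devmap_mknod.sh      # create /dev/mapper/control
lvm vgscan                 # scan disks for volume groups
lvm vgchange -ay           # activate all volume groups found
```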
4.1.2.
Do I need a special lvm2 kernel module?
No. You need device-mapper. The lvm2 tools use
device-mapper to interface with the kernel and do all
their device mapping (hence the name device-mapper). As
long as you have device-mapper, you should be able to
use LVM2.
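To check whether your running kernel has device-mapper available,
you can look for its registration in /proc/misc (a sketch; the
module name dm-mod is an assumption):

```shell
# Is device-mapper registered with the running kernel?
if grep -qs device-mapper /proc/misc; then
    echo "device-mapper present"
else
    # try loading it as a module before giving up
    modprobe dm-mod 2>/dev/null && echo "device-mapper loaded" \
        || echo "no device-mapper: install the module or rebuild your kernel"
fi
```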
4.1.3.
I get errors about
/dev/mapper/control when I try to
use the LVM 2 tools. What's going on?
The primary cause of this is not having run
"dmsetup mknodes" after rebooting into a dm
capable kernel. This command generates the control node
for device mapper.
If you don't have "dmsetup mknodes",
don't despair! (Though you should probably upgrade to
the latest version of device-mapper.) It's pretty easy
to create the /dev/mapper/control
file on your own:
1. Make sure you have the device-mapper module loaded
(if you didn't build it into your kernel).
2. Run
# grep device-mapper /proc/misc
and note the number
printed. (If you don't get any output, refer to
step 1.)
3. Run
# mkdir /dev/mapper
- if you
get an error saying
/dev/mapper already exists,
make sure it's a directory and move on.
4. Run
# mknod /dev/mapper/control c 10 $number
where $number is the number printed in step 2.
You should be all set now!
4.1.4.
Which commands and types of logical volumes are
currently supported in LVM 2?
If you are using the stable 2.4 device mapper patch from
the lvm2 tarball, all the major functionality you'd
expect using lvm1 is supported with the lvm2 tools.
(You still need to remove snapshots before upgrading
from LVM 1 to LVM 2.)
If you are using the version of device mapper in the 2.6
kernel.org kernel series, the following commands and LV
types are not supported:
4.1.5.
Does LVM 2 use a different format from LVM 1 for its
on-disk representation of Volume Groups and Logical
Volumes?
Yes. LVM 2 uses the lvm2 format metadata. This format is much
more flexible than the LVM 1 format metadata, removing
or reducing most of the limitations LVM 1 had.
4.1.6.
Does LVM 2 support VGs and LVs created with LVM 1?
Yes. LVM 2 will activate and operate on VGs and LVs created
with LVM 1. The exception to this is snapshots created with
LVM 1 - these should be removed before upgrading. Snapshots
that remain after upgrading will have to be removed before
their origins can be activated by LVM 2.
4.1.7.
Can I upgrade my LVM 1 based VGs and LVs to LVM 2 native
format?
Yes. Use vgconvert to convert your VG and all LVs contained
within it to the new lvm2 format metadata. Be warned that it's
not always possible to revert to lvm1 format metadata.
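As a sketch, for a hypothetical VG named vg00 (the name is an
assumption), the conversion and a before/after check might look like:

```shell
# Show the metadata format of each VG (the fmt column: lvm1 or lvm2)
vgs -o vg_name,vg_fmt
# Deactivate, convert to lvm2 format metadata (-M2), reactivate
vgchange -an vg00
vgconvert -M2 vg00
vgchange -ay vg00
```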
4.1.8.
I've upgraded to LVM 2, but the tools keep failing with out
of memory errors. What gives?
One possible cause of this is that some versions of LVM
1 (the user who originally reported this bug was using
Mandrake 9.2, but it is not necessarily limited to that
distribution) did not put a UUID into the PV and VG
structures as they were supposed to. The most current
versions of the LVM 2 tools automatically fill UUIDs in
for the structures if they see they are missing, so you
should grab a more current version and your problem
should be solved. If not, post to the linux-lvm mailing list.
4.1.9.
I have my root partition on an LV in LVM 1. How do I
upgrade to LVM 2? And what happened to lvmcreate_initrd?
Upgrading to LVM 2 is a bit trickier with root on LVM,
but it's not impossible. You need to queue up a kernel
with device-mapper support and install the lvm2 tools
(you might want to make a backup of the lvm 1 tools, or
find a rescue disk with the lvm tools built in, in case
you need them before you're done). Then find a mkinitrd
script that has support for your distro and lvm 2.
Currently, this is the list of mkinitrd scripts that I
know support lvm2, sorted by distro:
mkinitrd scripts with lvm 2 support
Fedora
The latest Fedora Core 2 mkinitrd
handles lvm2, but it relies on a statically
built lvm binary from the latest LVM 2 tarball.
There is a version in the lvm2 source tree under
scripts/lvm2_createinitrd/.
See the documentation in that directory for more
details.
4.1.10.
How resilient is LVM to a sudden renumbering of
physical hard disks?
It's fine - LVM identifies PVs by UUID, not by device
name.
Each disk (PV) is labeled with a UUID, which uniquely
identifies it to the system. 'vgscan' identifies this
after a new disk is added that changes your drive
numbering. Most distros run vgscan in the lvm startup
scripts to cope with this on reboot after a hardware
addition. If you're doing a hot-add, you'll have to run
this by hand I think. On the other hand, if your vg is
activated and being used, the renumbering should not
affect it at all. It's only the activation that needs
the identifier, and the worst case scenario is that the
activation will fail without a vgscan with a complaint
about a missing PV.
The failure or removal of a drive that LVM is
currently using will cause problems with current use
and future activations of the VG that was using it.
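A quick way to see the UUID labels LVM uses (the report field
names below are from the LVM 2 reporting tools):

```shell
# Show each PV with its UUID; device names may change, UUIDs do not
pvs -o pv_name,pv_uuid
# After hot-adding a disk, rescan so LVM re-reads the PV labels
vgscan
```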
4.1.11.
I'm trying to fill my vg, and vgdisplay/vgs says that I
have 1.87 GB free, but when I do an lvcreate vg -L1.87G
it says "insufficient free extents". What's going on?
The 1.87 GB figure is rounded to 2 decimal places, so
it's probably 1.866 GB or something. This is a
human-readable output to give you a general idea of how
big the VG is. If you want to specify an exact size,
you must use extents instead of some multiple of bytes.
In the case of vgdisplay, use the Free PE count instead
of the human readable capacity.
Free PE / Size 478 / 1.87 GB
^^^
So this indicates that you should run
# lvcreate vg -l478
Note that instead of an upper-case 'L',
we used a lower-case 'l' to tell lvm to use extents
instead of bytes.
In the case of vgs, you need to instruct it to tell you how many extents are available:
# vgs -o +vg_free_count,vg_extent_count
This tells vgs to add the free extents and the total
number of extents to the end of the vgs listing. Use
the free extent number the same way you would in the
above vgdisplay case.
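Putting the two together, a sketch that hands the free-extent count
straight to lvcreate (the VG name vg and LV name biglv are assumptions):

```shell
# Fill the remaining space in VG "vg" exactly, using extents
free=$(vgs --noheadings -o vg_free_count vg | tr -d ' ')
lvcreate -l "$free" -n biglv vg
```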
4.1.12.
How are snapshots in LVM2 different from LVM1?
In LVM2, snapshots are read/write by default, whereas in
LVM1 snapshots were read-only. See Section 3.8 for more details.
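A minimal sketch of creating and removing a writable LVM2 snapshot
(the VG/LV names and the 500M of copy-on-write space are assumptions):

```shell
# Create a snapshot LV named "data-snap" of /dev/vg00/data;
# -s makes it a snapshot, -L sizes the copy-on-write area
lvcreate -s -L 500M -n data-snap /dev/vg00/data
# The snapshot is writable in LVM2; remove it when finished
lvremove -f /dev/vg00/data-snap
```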
4.1.13.
What is the maximum size of a single LV?
The answer to this question depends upon the CPU
architecture of your computer and the kernel you are
running:
For 2.4 based kernels, the maximum LV size is 2TB.
For some older kernels, however, the limit was 1TB
due to signedness problems in the block layer.
Red Hat Enterprise Linux 3 Update 5 has fixes to
allow the full 2TB LVs. Consult your distribution
for more information in this regard.
For 32-bit CPUs on 2.6 kernels, the maximum LV size is 16TB.
For 64-bit CPUs on 2.6 kernels, the maximum LV
size is 8EB. (Yes, that is a very large number.)