Chapter 26. Storage pools
26.1. Creating storage pools
26.1.1. Dedicated storage device-based storage pools
This section covers dedicating storage devices to virtualized guests.
Guests should not be given write access to whole disks or block devices (for example, /dev/sdb ). Use partitions (for example, /dev/sdb1 ) or LVM volumes.
Guests with full access to a whole disk device may be able to maliciously access other disk devices that they have not been assigned, because disks do not have access control lists.
26.1.1.1. Creating a dedicated disk storage pool using virsh
This procedure creates a new storage pool using a dedicated disk device with the virsh command.
Dedicating a disk to a storage pool will reformat and erase all data presently stored on the disk device. Back up the storage device before commencing the procedure.
-
Create a GPT disk label on the disk
The disk must be relabeled with a GUID Partition Table (GPT) disk label. GPT disk labels allow for creating a large number of partitions, up to 128, on each device. GPT partition tables can store partition data for far more partitions than the msdos partition table.
# parted /dev/sdb
GNU Parted 2.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel
New disk label type? gpt
(parted) quit
Information: You may need to update /etc/fstab.
#
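The label can also be created non-interactively; a minimal sketch using parted's script mode:
# parted --script /dev/sdb mklabel gpt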
-
Create the storage pool configuration file
Create a temporary XML text file containing the storage pool information required for the new device.
The file must be in the format shown below, and contain the following fields:
- <name>guest_images_disk</name>
-
The name parameter determines the name of the storage pool. This example uses the name guest_images_disk.
- <device path='/dev/sdb'/>
-
The device parameter with the path attribute specifies the device path of the storage device. This example uses the device /dev/sdb .
- <target> <path>/dev</path>
-
The file system target parameter with the path sub-parameter determines the location on the host file system to attach volumes created with this storage pool.
For example, sdb1, sdb2, sdb3. Using /dev/, as in the example below, means volumes created from this storage pool can be accessed as /dev/sdb1, /dev/sdb2, /dev/sdb3.
- <format type='gpt'/>
-
The format parameter specifies the partition table type. This example uses gpt to match the GPT disk label created in the previous step.
Create the XML file for the storage pool device with a text editor.
Example 26.1. Dedicated storage device storage pool
<pool type='disk'>
  <name>guest_images_disk</name>
  <source>
    <device path='/dev/sdb'/>
    <format type='gpt'/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool>
-
Attach the device
Add the storage pool definition using the virsh pool-define command with the XML configuration file created in the previous step.
# virsh pool-define ~/guest_images_disk.xml
Pool guest_images_disk defined from /root/guest_images_disk.xml
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_disk inactive no
-
Start the storage pool
Start the storage pool with the virsh pool-start command. Verify the pool is started with the virsh pool-list --all command.
# virsh pool-start guest_images_disk
Pool guest_images_disk started
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_disk active no
-
Turn on autostart
Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
# virsh pool-autostart guest_images_disk
Pool guest_images_disk marked as autostarted
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_disk active yes
-
Verify the storage pool configuration
Verify the storage pool was created correctly, the sizes are reported correctly, and the state is reported as running.
# virsh pool-info guest_images_disk
Name: guest_images_disk
UUID: 551a67c8-5f2a-012c-3844-df29b167431c
State: running
Capacity: 465.76 GB
Allocation: 0.00
Available: 465.76 GB
# ls -la /dev/sdb
brw-rw----. 1 root disk 8, 16 May 30 14:08 /dev/sdb
# virsh vol-list guest_images_disk
Name Path
-----------------------------------------
-
Optional: Remove the temporary configuration file
Remove the temporary storage pool XML configuration file if it is not needed.
# rm ~/guest_images_disk.xml
A dedicated storage device storage pool is now available.
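Volumes allocated from a disk-based pool are created as partitions on the device. The commands below are an illustrative sketch, assuming a 100GB volume named sdb1 is wanted (the volume name is hypothetical and must follow the device's partition naming):
# virsh vol-create-as guest_images_disk sdb1 100G
# virsh vol-list guest_images_disk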
26.1.2. Partition-based storage pools
This section covers using a pre-formatted block device, a partition, as a storage pool.
For the following examples, a host has a 500GB hard drive (/dev/sdc) partitioned into one 500GB, ext4 formatted partition (/dev/sdc1). We set up a storage pool for it using the procedure below.
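The example assumes the partition is already formatted with ext4; if it is not, a file system can be created first. A minimal sketch (this erases any existing data on /dev/sdc1):
# mkfs.ext4 /dev/sdc1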
26.1.2.1. Creating a partition-based storage pool using virt-manager
This procedure creates a new storage pool using a partition of a storage device.
Procedure 26.1. Creating a partition-based storage pool with virt-manager
-
Open the storage pool settings
-
In the virt-manager graphical interface, select the host from the main window.
Open the Edit menu and select Host Details
-
Click on the Storage tab of the Host Details window.
-
Create the new storage pool
-
Add a new pool (part 1)
Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
Choose a Name for the storage pool. This example uses the name guest_images_fs . Change the Type to fs: Pre-Formatted Block Device .
Press the Forward button to continue.
-
Add a new pool (part 2)
Change the Target Path, Format, and Source Path fields.
- Target Path
-
Enter the location to mount the source device for the storage pool in the Target Path field. If the location does not already exist, virt-manager will create the directory.
- Format
-
Select a format from the Format list. The device is formatted with the selected format.
This example uses the ext4 file system, the default Red Hat Enterprise Linux file system.
- Source Path
-
Enter the device in the Source Path field.
This example uses the /dev/sdc1 device.
Verify the details and press the Finish button to create the storage pool.
-
Verify the new storage pool
The new storage pool appears in the storage list on the left after a few seconds. Verify the size is reported as expected, 458.20 GB Free in this example. Verify the State field reports the new storage pool as Active .
Select the storage pool. In the Autostart field, click the On Boot checkbox. This will make sure the storage pool starts whenever the libvirtd service starts.
The storage pool is now created; close the Host Details window.
26.1.2.2. Creating a partition-based storage pool using virsh
This section covers creating a partition-based storage pool with the virsh command.
Do not use this procedure to assign an entire disk as a storage pool (for example, /dev/sdb ). Guests should not be given write access to whole disks or block devices. Only use this method to assign partitions (for example, /dev/sdb1 ) to storage pools.
Procedure 26.2. Creating pre-formatted block device storage pools using virsh
-
Create the storage pool definition
Use the virsh pool-define-as command to create a new storage pool definition. There are three options that must be provided to define a pre-formatted disk as a storage pool:
- Partition name
-
The name parameter determines the name of the storage pool. This example uses the name guest_images_fs.
- device
-
The device parameter with the path attribute specifies the device path of the storage device. This example uses the partition /dev/sdc1 .
- mountpoint
-
The mountpoint on the local file system where the formatted device will be mounted. If the mount point directory does not exist, the virsh command can create the directory.
The directory /guest_images is used in this example.
# virsh pool-define-as guest_images_fs fs - - /dev/sdc1 - "/guest_images"
Pool guest_images_fs defined
The new pool and mount points are now created.
-
Verify the new pool
List the present storage pools.
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_fs inactive no
-
Create the mount point
Use the virsh pool-build command to create a mount point for a pre-formatted file system storage pool.
# virsh pool-build guest_images_fs
Pool guest_images_fs built
# ls -la /guest_images
total 8
drwx------. 2 root root 4096 May 31 19:38 .
dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_fs inactive no
-
Start the storage pool
Use the virsh pool-start command to mount the file system onto the mount point and make the pool available for use.
# virsh pool-start guest_images_fs
Pool guest_images_fs started
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_fs active no
-
Turn on autostart
By default, a storage pool defined with virsh is not set to start automatically each time libvirtd starts. Turn on automatic start with the virsh pool-autostart command. The storage pool is then started automatically each time libvirtd starts.
# virsh pool-autostart guest_images_fs
Pool guest_images_fs marked as autostarted
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_fs active yes
-
Verify the storage pool
Verify the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running . Verify there is a "lost+found" directory in the mount point on the file system, indicating the device is mounted.
# virsh pool-info guest_images_fs
Name: guest_images_fs
UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0
State: running
Capacity: 458.39 GB
Allocation: 197.91 MB
Available: 458.20 GB
# mount | grep /guest_images
/dev/sdc1 on /guest_images type ext4 (rw)
# ls -la /guest_images
total 24
drwxr-xr-x. 3 root root 4096 May 31 19:47 .
dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
drwx------. 2 root root 16384 May 31 14:18 lost+found
26.1.3. Directory-based storage pools
This section covers storing virtualized guests in a directory on the host.
Directory-based storage pools can be created with virt-manager or the virsh command line tools.
26.1.3.1. Creating a directory-based storage pool with virt-manager
-
Create the local directory
-
Optional: Create a new directory for the storage pool
Create the directory on the host for the storage pool. An existing directory can be used if permissions and SELinux are configured correctly. This example uses a directory named /guest_images .
# mkdir /guest_images
-
Set directory ownership
Change the user and group ownership of the directory. The directory must be owned by the root user.
# chown root:root /guest_images
-
Set directory permissions
Change the file permissions of the directory.
# chmod 700 /guest_images
-
Verify the changes
Verify the permissions were modified. The output shows a correctly configured empty directory.
# ls -la /guest_images
total 8
drwx------. 2 root root 4096 May 28 13:57 .
dr-xr-xr-x. 26 root root 4096 May 28 13:57 ..
-
Configure SELinux file contexts
Configure the correct SELinux context for the new directory.
# semanage fcontext -a -t virt_image_t /guest_images
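The semanage rule applies to files labeled later; to relabel the directory immediately, restorecon can be run. A minimal sketch:
# restorecon -R -v /guest_images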
-
Open the storage pool settings
-
In the virt-manager graphical interface, select the host from the main window.
Open the Edit menu and select Host Details
-
Click on the Storage tab of the Host Details window.
-
Create the new storage pool
-
Add a new pool (part 1)
Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
Choose a Name for the storage pool. This example uses the name guest_images_dir . Change the Type to dir: Filesystem Directory .
Press the Forward button to continue.
-
Add a new pool (part 2)
Change the Target Path field. This example uses /guest_images .
Verify the details and press the Finish button to create the storage pool.
-
Verify the new storage pool
The new storage pool appears in the storage list on the left after a few seconds. Verify the size is reported as expected, 36.41 GB Free in this example. Verify the State field reports the new storage pool as Active .
Select the storage pool. In the Autostart field, click the On Boot checkbox. This will make sure the storage pool starts whenever the libvirtd service starts.
The storage pool is now created; close the Host Details window.
26.1.3.2. Creating a directory-based storage pool with virsh
-
Create the storage pool definition
Use the virsh pool-define-as command to define a new storage pool. There are two options required for creating directory-based storage pools:
-
The name of the storage pool.
This example uses the name guest_images_dir . All further virsh commands used in this example use this name.
-
The path to a file system directory for storing virtualized guest image files . If this directory does not exist, virsh will create it.
This example uses the /guest_images directory.
# virsh pool-define-as guest_images_dir dir - - - - "/guest_images"
Pool guest_images_dir defined
-
Verify the storage pool is listed
Verify the storage pool object is created correctly and the state reports it as inactive .
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_dir inactive no
-
Create the local directory
Use the virsh pool-build command to build the directory-based storage pool. virsh pool-build sets the required permissions and SELinux settings for the directory and creates the directory if it does not exist.
# virsh pool-build guest_images_dir
Pool guest_images_dir built
# ls -la /guest_images
total 8
drwx------. 2 root root 4096 May 30 02:44 .
dr-xr-xr-x. 26 root root 4096 May 30 02:44 ..
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_dir inactive no
-
Start the storage pool
Use the virsh command pool-start for this. pool-start enables a directory storage pool, allowing it to be used for volumes and guests.
# virsh pool-start guest_images_dir
Pool guest_images_dir started
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_dir active no
-
Turn on autostart
Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
# virsh pool-autostart guest_images_dir
Pool guest_images_dir marked as autostarted
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_dir active yes
-
Verify the storage pool configuration
Verify the storage pool was created correctly, the sizes are reported correctly, and the state is reported as running.
# virsh pool-info guest_images_dir
Name: guest_images_dir
UUID: 779081bf-7a82-107b-2874-a19a9c51d24c
State: running
Capacity: 49.22 GB
Allocation: 12.80 GB
Available: 36.41 GB
# ls -la /guest_images
total 8
drwx------. 2 root root 4096 May 30 02:44 .
dr-xr-xr-x. 26 root root 4096 May 30 02:44 ..
#
A directory-based storage pool is now available.
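With the pool running, guest image volumes can be allocated from it; a minimal sketch, assuming an 8GB raw volume named volume1 (the name is illustrative):
# virsh vol-create-as guest_images_dir volume1 8G --format raw
# virsh vol-list guest_images_dir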
26.1.4. LVM-based storage pools
This section covers using LVM volume groups as storage pools.
LVM-based storage pools provide the flexibility of LVM volume management for guest storage.
LVM-based storage pools require a full disk partition. This partition will be formatted and all data presently stored on the disk device will be erased. Back up the storage device before commencing the procedure.
26.1.4.1. Creating an LVM-based storage pool with virt-manager
LVM-based storage pools can use existing LVM volume groups or create new LVM volume groups on a blank partition.
-
Optional: Create new partition for LVM volumes
These steps describe how to create a new partition and LVM volume group on a new hard disk drive.
This procedure will remove all data from the selected storage device.
-
Create a new partition
Use the fdisk command to create a new disk partition from the command line. The following example creates a new partition that uses the entire disk on the storage device /dev/sdb .
# fdisk /dev/sdb
Command (m for help):
Press n for a new partition.
-
Press p for a primary partition.
Command action
e extended
p primary partition (1-4)
-
Choose an available partition number. In this example the first partition is chosen by entering 1 .
Partition number (1-4): 1
-
Enter the default first cylinder by pressing Enter .
First cylinder (1-400, default 1):
-
Select the size of the partition. In this example the entire disk is allocated by pressing Enter .
Last cylinder or +size or +sizeM or +sizeK (2-400, default 400):
-
Set the type of partition by pressing t .
Command (m for help): t
-
Choose the partition you created in the previous steps. In this example, the partition number is 1 .
Partition number (1-4): 1
-
Enter 8e for a Linux LVM partition.
Hex code (type L to list codes): 8e
-
Write the changes to disk and quit.
Command (m for help): w
Command (m for help): q
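Alternatively, the partition can be created non-interactively; a hedged sketch using parted, assuming an msdos disk label and that the whole disk is to be used:
# parted --script /dev/sdb mklabel msdos mkpart primary 1MiB 100% set 1 lvm on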
-
Create a new LVM volume group
Create a new LVM volume group with the vgcreate command. This example creates a volume group named guest_images_lvm .
# vgcreate guest_images_lvm /dev/sdb1
Physical volume "/dev/sdb1" successfully created
Volume group "guest_images_lvm" successfully created
The new LVM volume group, guest_images_lvm, can now be used for an LVM-based storage pool.
-
Open the storage pool settings
-
In the virt-manager graphical interface, select the host from the main window.
Open the Edit menu and select Host Details
-
Click on the Storage tab of the Host Details window.
-
Create the new storage pool
-
Start the Wizard
Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
Choose a Name for the storage pool. We use guest_images_lvm for this example. Then change the Type to logical: LVM Volume Group.
Press the Forward button to continue.
-
Add a new pool (part 2)
Fill in the Target Path and Source Path fields, then tick the Build Pool check box.
-
Use the Target Path field either to select an existing LVM volume group or to enter the name of a new volume group. The default format is /dev/storage_pool_name.
This example uses a new volume group named /dev/guest_images_lvm .
-
The Source Path field is optional if an existing LVM volume group is used in the Target Path.
For new LVM volume groups, input the location of a storage device in the Source Path field. This example uses a blank partition /dev/sdc .
-
The Build Pool checkbox instructs virt-manager to create a new LVM volume group. If you are using an existing volume group you should not select the Build Pool checkbox.
This example is using a blank partition to create a new volume group so the Build Pool checkbox must be selected.
Verify the details and press the Finish button to format the LVM volume group and create the storage pool.
-
Confirm the device to be formatted
A warning message appears.
Press the Yes button to proceed to erase all data on the storage device and create the storage pool.
-
Verify the new storage pool
The new storage pool will appear in the list on the left after a few seconds. Verify the details are what you expect, 465.76 GB Free in our example. Also verify the State field reports the new storage pool as Active .
It is generally a good idea to have the Autostart check box enabled, to ensure the storage pool starts automatically with libvirtd.
Close the Host Details dialog, as the task is now complete.
26.1.4.2. Creating an LVM-based storage pool with virsh
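A hedged outline of the equivalent command-line procedure follows; the source device /dev/sdc and the volume group name libvirt_lvm are illustrative assumptions. Note that virsh pool-build on a logical pool creates the volume group on the source device, erasing any data on it.
# virsh pool-define-as guest_images_lvm logical - - /dev/sdc libvirt_lvm /dev/libvirt_lvm
Pool guest_images_lvm defined
# virsh pool-build guest_images_lvm
Pool guest_images_lvm built
# virsh pool-start guest_images_lvm
Pool guest_images_lvm started
# virsh pool-autostart guest_images_lvm
Pool guest_images_lvm marked as autostarted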
26.1.5. iSCSI-based storage pools
This section covers using iSCSI-based devices to store virtualized guests.
iSCSI (Internet Small Computer System Interface) is a network protocol for sharing storage devices. iSCSI connects initiators (storage clients) to targets (storage servers) using SCSI instructions over the IP layer. For more information and background on the iSCSI protocol, refer to Wikipedia's iSCSI article.
26.1.5.1. Configuring a software iSCSI target
The scsi-target-utils package provides a tool for creating software-backed iSCSI targets.
Procedure 26.3. Creating an iSCSI target
-
Install the required packages
Install the scsi-target-utils package and all dependencies.
# yum install scsi-target-utils
-
Start the tgtd service
The tgtd service hosts SCSI targets and serves them to initiators over the iSCSI protocol. Start the tgtd service and make it persistent across reboots with the chkconfig command.
# service tgtd start
# chkconfig tgtd on
-
Optional: Create LVM volumes
LVM volumes are useful for iSCSI backing images. LVM snapshots and resizing can be beneficial for virtualized guests. This example creates an LVM image named virtimage1 on a new volume group named virtstore on a RAID5 array for hosting virtualized guests with iSCSI.
-
Create the RAID array
Creating software RAID5 arrays is covered by the Red Hat Enterprise Linux Deployment Guide.
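A minimal sketch of creating such an array, assuming three spare partitions /dev/sdd1, /dev/sde1, and /dev/sdf1 are available (the device names are hypothetical):
# mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdd1 /dev/sde1 /dev/sdf1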
-
Create the LVM volume group
Create a volume group named virtstore with the vgcreate command.
# vgcreate virtstore /dev/md1
-
Create a LVM logical volume
Create a logical volume named virtimage1 on the virtstore volume group with a size of 20GB using the lvcreate command.
# lvcreate --size 20G -n virtimage1 virtstore
The new logical volume, virtimage1, is ready to use for iSCSI.
-
Optional: Create file-based images
File-based storage is sufficient for testing but is not recommended for production environments or any significant I/O activity. This optional procedure creates a file-based image named virtimage2.img for an iSCSI target.
-
Create a new directory for the image
Create a new directory to store the image. The directory must have the correct SELinux contexts.
# mkdir -p /var/lib/tgtd/virtualization
-
Create the image file
Create an image named virtimage2.img with a size of 10GB.
# dd if=/dev/zero of=/var/lib/tgtd/virtualization/virtimage2.img bs=1M seek=10000 count=0
-
Configure SELinux file contexts
Configure the correct SELinux context for the new image and directory.
# restorecon -R /var/lib/tgtd
The new file-based image, virtimage2.img , is ready to use for iSCSI.
-
Create targets
Targets can be created by adding an XML entry to the /etc/tgt/targets.conf file. The target attribute requires an iSCSI Qualified Name (IQN). The IQN is in the format:
iqn.yyyy-mm.reversed domain name:optional identifier text
Where:
-
yyyy-mm represents the year and month the device was started (for example: 2010-05);
-
reversed domain name is the host's domain name in reverse (for example, server1.example.com in an IQN would be com.example.server1); and
-
optional identifier text is any text string, without spaces, that assists the administrator in identifying devices or hardware.
This example creates iSCSI targets for the two types of images created in the optional steps on server1.example.com with an optional identifier trial . Add the following to the /etc/tgt/targets.conf file.
<target iqn.2010-05.com.example.server1:trial>
    backing-store /dev/virtstore/virtimage1  #LUN 1
    backing-store /var/lib/tgtd/virtualization/virtimage2.img  #LUN 2
    write-cache off
</target>
Ensure that the /etc/tgt/targets.conf file contains the default-driver iscsi line to set the driver type as iSCSI. The driver uses iSCSI by default.
This example creates a globally accessible target without access control. Refer to the scsi-target-utils documentation for information on implementing secure access.
-
Restart the tgtd service
Restart the tgtd service to reload the configuration changes.
# service tgtd restart
-
iptables configuration
Open port 3260 for iSCSI access with iptables .
# iptables -I INPUT -p tcp -m tcp --dport 3260 -j ACCEPT
# service iptables save
# service iptables restart
-
Verify the new targets
View the new targets to ensure the setup was successful with the tgt-admin --show command.
# tgt-admin --show
Target 1: iqn.2010-05.com.example.server1:trial
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB
Online: Yes
Removable media: No
Backing store type: rdwr
Backing store path: None
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 20000 MB
Online: Yes
Removable media: No
Backing store type: rdwr
Backing store path: /dev/virtstore/virtimage1
LUN: 2
Type: disk
SCSI ID: IET 00010002
SCSI SN: beaf12
Size: 10000 MB
Online: Yes
Removable media: No
Backing store type: rdwr
Backing store path: /var/lib/tgtd/virtualization/virtimage2.img
Account information:
ACL information:
ALL
The ACL list is set to all. This allows all systems on the local network to access this device. It is recommended to set host access ACLs for production environments.
-
Optional: Test discovery
Test whether the new iSCSI device is discoverable.
# iscsiadm --mode discovery --type sendtargets --portal server1.example.com
127.0.0.1:3260,1 iqn.2010-05.com.example.server1:trial1
-
Optional: Test attaching the device
Attach the new device (iqn.2010-05.com.example.server1:trial1 ) to determine whether the device can be attached.
# iscsiadm -d2 -m node --login
scsiadm: Max file limits 1024 1024
Logging in to [iface: default, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.0.1,3260]
Login to [iface: default, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.0.1,3260] successful.
Detach the device.
# iscsiadm -d2 -m node --logout
scsiadm: Max file limits 1024 1024
Logging out of session [sid: 2, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.0.1,3260
Logout of [sid: 2, target: iqn.2010-05.com.example.server1:trial1, portal: 10.0.0.1,3260] successful.
An iSCSI device is now ready to use for virtualization.
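If the host should log back in to the target automatically after a reboot, the node startup mode can be changed; a hedged sketch, assuming the target and portal used above:
# iscsiadm -m node --targetname iqn.2010-05.com.example.server1:trial1 --portal server1.example.com:3260 --op update -n node.startup -v automatic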
26.1.5.2. Adding an iSCSI target to virt-manager
This procedure covers creating a storage pool with an iSCSI target in virt-manager .
Procedure 26.4. Adding an iSCSI device to virt-manager
-
Open the host storage tab
Open the Storage tab in the Host Details window.
-
Open virt-manager .
-
Select a host from the main virt-manager window.
-
Open the Edit menu and select Host Details.
-
Click on the Storage tab of the Host Details window.
-
Add a new pool (part 1)
Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
Choose a name for the storage pool, change the Type to iscsi, and press Forward to continue.
-
Add a new pool (part 2)
Enter the target path for the device, the host name of the target and the source path (the IQN). The Format option is not available as formatting is handled by the guests. It is not advised to edit the Target Path. The default target path value, /dev/disk/by-path/ , adds the drive path to that folder. The target path should be the same on all hosts for migration.
Enter the hostname or IP address of the iSCSI target. This example uses server1.example.com .
Enter the source path, the IQN for the iSCSI target. This example uses iqn.2010-05.com.example.server1:trial1 .
Press Finish to create the new storage pool.
26.1.5.3. Creating an iSCSI-based storage pool with virsh
-
Create the storage pool definition
The example below is an XML definition file for an iSCSI-based storage pool.
- <name>trial1</name>
-
The name element sets the name for the storage pool. The name is required and must be unique.
- <uuid>afcc5367-6770-e151-bcb3-847bc36c5e28</uuid>
-
The optional uuid element provides a unique global identifier for the storage pool. The uuid element can contain any valid UUID or an existing UUID for the storage device. If a UUID is not provided, virsh will generate a UUID for the storage pool.
- <host name='server1.example.com'/>
-
The host element with the name attribute specifies the hostname of the iSCSI server. The host element can also contain a port attribute for a non-standard iSCSI protocol port number.
- <device path='iqn.2010-05.com.example.server1:trial1'/>
-
The device element path attribute must contain the IQN for the iSCSI server.
With a text editor, create an XML file for the iSCSI storage pool. This example uses an XML definition named trial1.xml.
<pool type='iscsi'>
  <name>trial1</name>
  <uuid>afcc5367-6770-e151-bcb3-847bc36c5e28</uuid>
  <source>
    <host name='server1.example.com'/>
    <device path='iqn.2010-05.com.example.server1:trial1'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
Use the pool-define command to define the storage pool but not start it.
# virsh pool-define trial1.xml
Pool trial1 defined
-
Alternative step: Use pool-define-as to define the pool from the command line
Storage pool definitions can be created with the virsh command line tool. Creating storage pools with virsh is useful for systems administrators using scripts to create multiple storage pools.
The virsh pool-define-as command has several parameters which are accepted in the following format:
virsh pool-define-as name type source-host source-path source-dev source-name target
The type, iscsi, defines this pool as an iSCSI-based storage pool. The name parameter must be unique and sets the name for the storage pool. The source-host and source-path parameters are the hostname and iSCSI IQN respectively. The source-dev and source-name parameters are not required for iSCSI-based pools; use a - character to leave the field blank. The target parameter defines the location for mounting the iSCSI device on the host.
The example below creates the same iSCSI-based storage pool as the previous step.
# virsh pool-define-as trial1 iscsi server1.example.com iqn.2010-05.com.example.server1:trial1 - - /dev/disk/by-path
Pool trial1 defined
-
Verify the storage pool is listed
Verify the storage pool object is created correctly and the state reports as inactive .
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
trial1 inactive no
-
Start the storage pool
Use the virsh command pool-start for this. pool-start enables the storage pool, allowing it to be used for volumes and guests.
# virsh pool-start trial1
Pool trial1 started
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
trial1 active no
-
Turn on autostart
Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
# virsh pool-autostart trial1
Pool trial1 marked as autostarted
Verify that the trial1 pool has autostart set:
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
trial1 active yes
-
Verify the storage pool configuration
Verify the storage pool was created correctly, the sizes are reported correctly, and the state is reported as running.
# virsh pool-info trial1
Name: trial1
UUID: afcc5367-6770-e151-bcb3-847bc36c5e28
State: running
Persistent: unknown
Autostart: yes
Capacity: 100.31 GB
Allocation: 0.00
Available: 100.31 GB
An iSCSI-based storage pool is now available.
26.1.6. NFS-based storage pools
This procedure covers creating a storage pool with an NFS mount point in virt-manager.
26.1.6.1. Creating an NFS-based storage pool with virt-manager
-
Open the host storage tab
Open the Storage tab in the Host Details window.
-
Open virt-manager .
-
Select a host from the main virt-manager window.
-
Open the Edit menu and select Host Details.
-
Click on the Storage tab of the Host Details window.
-
Create a new pool (part 1)
Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
Choose a name for the storage pool and press Forward to continue.
-
Create a new pool (part 2)
Enter the target path for the device, the hostname and the NFS share path. Set the Format option to NFS or auto (to detect the type). The target path must be identical on all hosts for migration.
Enter the hostname or IP address of the NFS server. This example uses server1.example.com .
Enter the NFS path. This example uses /nfstrial .
Press Finish to create the new storage pool.
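The same pool can also be defined from the command line; a hedged sketch, assuming the pool name nfstrial, the export shown above, and a mount point of /var/lib/libvirt/images/nfstrial (both names are illustrative):
# virsh pool-define-as nfstrial netfs server1.example.com /nfstrial - - /var/lib/libvirt/images/nfstrial
Pool nfstrial defined
# virsh pool-build nfstrial
Pool nfstrial built
# virsh pool-start nfstrial
Pool nfstrial started
# virsh pool-autostart nfstrial
Pool nfstrial marked as autostarted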