Chapter 30. Managing guests with virsh
virsh is a command line interface tool for managing guests and the hypervisor. The virsh command-line tool is built on the libvirt management API and operates as an alternative to the qemu-kvm command and the graphical virt-manager application. Unprivileged users can use the virsh command in read-only mode; with root access it provides full administration functionality. The virsh command is ideal for scripting virtualization administration.
The following tables provide a quick reference for all virsh command line options.
Table 30.1. Guest management commands
Command   | Description
----------|------------------------------------------------------------
help      | Prints basic help information.
list      | Lists all guests.
dumpxml   | Outputs the XML configuration file for the guest.
create    | Creates a guest from an XML configuration file and starts the new guest.
start     | Starts an inactive guest.
destroy   | Forces a guest to stop.
define    | Defines a guest from an XML configuration file without starting it.
domid     | Displays the guest's ID.
domuuid   | Displays the guest's UUID.
dominfo   | Displays guest information.
domname   | Displays the guest's name.
domstate  | Displays the state of a guest.
quit      | Quits the interactive terminal.
reboot    | Reboots a guest.
restore   | Restores a previously saved guest stored in a file.
resume    | Resumes a paused guest.
save      | Saves the present state of a guest to a file.
shutdown  | Gracefully shuts down a guest.
suspend   | Pauses a guest.
undefine  | Deletes the configuration of an inactive guest.
migrate   | Migrates a guest to another host.
The following virsh command options manage guest and hypervisor resources:
Table 30.2. Resource management options
Command           | Description
------------------|---------------------------------------------------------------
setmem            | Sets the allocated memory for a guest.
setmaxmem         | Sets the maximum memory limit for a guest.
setvcpus          | Changes the number of virtual CPUs assigned to a guest.
vcpuinfo          | Displays virtual CPU information about a guest.
vcpupin           | Controls the virtual CPU affinity of a guest.
domblkstat        | Displays block device statistics for a running guest.
domifstat         | Displays network interface statistics for a running guest.
attach-device     | Attaches a device to a guest, using a device definition in an XML file.
attach-disk       | Attaches a new disk device to a guest.
attach-interface  | Attaches a new network interface to a guest.
detach-device     | Detaches a device from a guest; takes the same kind of XML descriptions as the attach-device command.
detach-disk       | Detaches a disk device from a guest.
detach-interface  | Detaches a network interface from a guest.
The following virsh commands manage and create storage pools and volumes:
Table 30.3. Storage Pool options
Command                      | Description
-----------------------------|--------------------------------------------------------------
find-storage-pool-sources    | Returns the XML definition for all storage pools of a given type that could be found.
find-storage-pool-sources-as | Returns data on all storage pools of a given type that could be found, as XML. If the host and port are provided, this command can be run remotely.
pool-autostart               | Sets the storage pool to start at boot time.
pool-build                   | The pool-build command builds a defined pool. This command can format disks and create partitions.
pool-create                  | pool-create creates and starts a storage pool from the provided XML storage pool definition file.
pool-create-as name          | Creates and starts a storage pool from the provided parameters. If the --print-xml parameter is specified, the command prints the XML definition for the storage pool without creating the storage pool.
pool-define                  | Creates a storage pool from an XML definition file but does not start the new storage pool.
pool-define-as name          | Creates, but does not start, a storage pool from the provided parameters. If the --print-xml parameter is specified, the command prints the XML definition for the storage pool without creating the storage pool.
pool-destroy                 | Stops a storage pool in libvirt. The raw data contained in the storage pool is not changed and can be recovered with the pool-create command.
pool-delete                  | Destroys the storage resources used by a storage pool. This operation cannot be undone. The storage pool still exists after this command but all data is deleted.
pool-dumpxml                 | Prints the XML definition for a storage pool.
pool-edit                    | Opens the XML definition file for a storage pool in the user's default text editor.
pool-info                    | Returns information about a storage pool.
pool-list                    | Lists storage pools known to libvirt. By default, pool-list lists pools in use by active guests. The --inactive parameter lists inactive pools and the --all parameter lists all pools.
pool-undefine                | Deletes the definition for an inactive storage pool.
pool-uuid                    | Returns the UUID of the named pool.
pool-name                    | Prints a storage pool's name when provided the UUID of a storage pool.
pool-refresh                 | Refreshes the list of volumes contained in a storage pool.
pool-start                   | Starts a storage pool that is defined but inactive.
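As an illustration of how several of these options fit together, the following sequence defines, builds, starts, and autostarts a simple directory-based pool. The pool name guest_images and the target path are illustrative, and the exact pool-define-as option syntax can vary between virsh versions:
# virsh pool-define-as guest_images dir --target /var/lib/libvirt/images/guest_images
# virsh pool-build guest_images
# virsh pool-start guest_images
# virsh pool-autostart guest_images
# virsh pool-list --all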
This table contains miscellaneous virsh commands:
Table 30.4. Miscellaneous options
Command   | Description
----------|------------------------------------------
version   | Displays the version of virsh.
nodeinfo  | Outputs information about the hypervisor.
Connect to a hypervisor session with virsh:
# virsh connect {name}
Where {name} is the machine name (hostname) or URL of the hypervisor. To initiate a read-only connection, append the above command with --readonly.
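For example, to open a read-only connection to the local hypervisor; the qemu:///system URI below assumes a local KVM host and is illustrative:
# virsh connect qemu:///system --readonly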
Output a guest's XML configuration file with virsh:
# virsh dumpxml {guest-id, guestname or uuid}
This command outputs the guest's XML configuration file to standard out (stdout). You can save the data by redirecting the output to a file. An example of redirecting the output to a file called guest.xml:
# virsh dumpxml GuestID > guest.xml
An example of virsh dumpxml output:
# virsh dumpxml r5b2-mySQL01
<domain type='kvm' id='13'>
<name>r5b2-mySQL01</name>
<uuid>4a4c59a7ee3fc78196e4288f2862f011</uuid>
<bootloader>/usr/bin/pygrub</bootloader>
<os>
<type>linux</type>
<kernel>/var/lib/libvirt/vmlinuz.2dgnU_</kernel>
<initrd>/var/lib/libvirt/initrd.UQafMw</initrd>
<cmdline>ro root=/dev/VolGroup00/LogVol00 rhgb quiet</cmdline>
</os>
<memory>512000</memory>
<vcpu>1</vcpu>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<interface type='bridge'>
<source bridge='br0'/>
<mac address='00:16:3e:49:1d:11'/>
<script path='bridge'/>
</interface>
<graphics type='vnc' port='5900'/>
<console tty='/dev/pts/4'/>
</devices>
</domain>
To create a guest from an XML configuration file with virsh:
# virsh create configuration_file.xml
To edit a guest's XML configuration directly, use the edit option. For example, to edit the guest named softwaretesting:
# virsh edit softwaretesting
This opens a text editor. The default text editor is the $EDITOR shell parameter (set to vi by default).
Suspend a guest with virsh:
# virsh suspend {domain-id, domain-name or domain-uuid}
When a guest is in a suspended state, it consumes system RAM but not processor resources. Disk and network I/O does not occur while the guest is suspended. This operation is immediate and the guest can be restarted with the resume option.
Restore a suspended guest with virsh using the resume option:
# virsh resume {domain-id, domain-name or domain-uuid}
This operation is immediate and the guest parameters are preserved for suspend and resume operations.
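For example, to pause and then resume the example guest used in this chapter:
# virsh suspend r5b2-mySQL01
# virsh resume r5b2-mySQL01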
Save the current state of a guest to a file using the virsh command:
# virsh save {domain-name, domain-id or domain-uuid} filename
This stops the guest you specify and saves the data to a file, which may take some time depending on the amount of memory in use by your guest. You can restore the state of the guest with the restore option. Save is similar to pause; instead of just pausing the guest, the present state of the guest is saved.
# virsh restore filename
This restarts the saved guest, which may take some time. The guest's name and UUID are preserved but the guest is allocated a new ID.
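For example, to save the state of the example guest to a file and later restore it (the file name r5b2-mySQL01.save is illustrative):
# virsh save r5b2-mySQL01 r5b2-mySQL01.save
# virsh restore r5b2-mySQL01.save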
Shut down a guest using the virsh command:
# virsh shutdown {domain-id, domain-name or domain-uuid}
You can control the behavior of the guest as it shuts down by modifying the on_poweroff element in the guest's configuration file.
Reboot a guest using the virsh command:
# virsh reboot {domain-id, domain-name or domain-uuid}
You can control the behavior of the rebooting guest by modifying the on_reboot element in the guest's configuration file.
Force a guest to stop with the virsh command:
# virsh destroy {domain-id, domain-name or domain-uuid}
This command does an immediate ungraceful shutdown and stops the specified guest. Using virsh destroy can corrupt guest file systems. Use the destroy option only when the guest is unresponsive.
To get the domain ID of a guest:
# virsh domid {domain-name or domain-uuid}
To get the domain name of a guest:
# virsh domname {domain-id or domain-uuid}
To get the Universally Unique Identifier (UUID) for a guest:
# virsh domuuid {domain-id or domain-name}
An example of virsh domuuid output:
# virsh domuuid r5b2-mySQL01
4a4c59a7-ee3f-c781-96e4-288f2862f011
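The domid and domname options work the same way; for example, using the example guest's name and the domain ID 13 shown in the dominfo output below:
# virsh domid r5b2-mySQL01
# virsh domname 13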
Using virsh with the guest's domain ID, domain name or UUID, you can display information on the specified guest:
# virsh dominfo {domain-id, domain-name or domain-uuid}
This is an example of virsh dominfo output:
# virsh dominfo r5b2-mySQL01
id: 13
name: r5b2-mysql01
uuid: 4a4c59a7-ee3f-c781-96e4-288f2862f011
os type: linux
state: blocked
cpu(s): 1
cpu time: 11.0s
max memory: 512000 kb
used memory: 512000 kb
To display information about the host:
# virsh nodeinfo
An example of virsh nodeinfo output:
# virsh nodeinfo
CPU model:           x86_64
CPU(s):              8
CPU frequency:       2895 MHz
CPU socket(s):       2
Core(s) per socket:  2
Thread(s) per core:  2
NUMA cell(s):        1
Memory size:         1046528 kB
This displays the node information for the machine that supports the virtualization process.
The virsh pool-edit command takes the name or UUID for a storage pool and opens the XML definition file for a storage pool in the user's default text editor.
The virsh pool-edit command is equivalent to running the following commands:
# virsh pool-dumpxml pool > pool.xml
# vim pool.xml
# virsh pool-define pool.xml
The default editor is defined by the $VISUAL or $EDITOR environment variables; the default is vi.
To display the guest list and their current states with virsh:
# virsh list
Other options available include:
the --inactive option to list inactive guests (that is, guests that have been defined but are not currently active), and
the --all option to list all guests. For example:
# virsh list --all
 Id Name                 State
----------------------------------
  0 Domain-0             running
  1 Domain202            paused
  2 Domain010            inactive
  3 Domain9600           crashed
The output from virsh list is categorized as one of six states (listed below).
- The running state refers to guests which are currently active on a CPU.
- Guests listed as blocked are blocked, and are not running or runnable. This is caused by a guest waiting on I/O (a traditional wait state) or guests in a sleep mode.
- The paused state lists domains that are paused. This occurs if an administrator uses the pause button in virt-manager, xm pause or virsh suspend. When a guest is paused it consumes memory and other resources but it is ineligible for scheduling and CPU resources from the hypervisor.
- The shutdown state is for guests in the process of shutting down. The guest is sent a shutdown signal and should be in the process of stopping its operations gracefully. This may not work with all guest operating systems; some operating systems do not respond to these signals.
- Domains in the dying state are in the process of dying, which is a state where the domain has not completely shut down or crashed.
- crashed guests have failed while running and are no longer running. This state can only occur if the guest has been configured not to restart on crash.
To display virtual CPU information from a guest with virsh:
# virsh vcpuinfo {domain-id, domain-name or domain-uuid}
An example of virsh vcpuinfo output:
# virsh vcpuinfo r5b2-mySQL01
VCPU: 0
CPU: 0
State: blocked
CPU time: 0.0s
CPU Affinity: yy
To configure the affinity of virtual CPUs with physical CPUs:
# virsh vcpupin domain-id vcpu cpulist
The domain-id parameter is the guest's ID number or name.
The vcpu parameter denotes the number of virtualized CPUs allocated to the guest. The vcpu parameter must be provided.
The cpulist parameter is a list of physical CPU identifier numbers separated by commas. The cpulist parameter determines which physical CPUs the VCPUs can run on.
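For example, to restrict virtual CPU 0 of the example guest to physical CPUs 0 and 1 (the CPU numbers are illustrative):
# virsh vcpupin r5b2-mySQL01 0 0,1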
To modify the number of CPUs assigned to a guest with virsh:
# virsh setvcpus {domain-name, domain-id or domain-uuid} count
The new count value cannot exceed the amount specified when the guest was created.
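For example, to assign two virtual CPUs to the example guest (assuming it was created with at least two):
# virsh setvcpus r5b2-mySQL01 2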
To modify a guest's memory allocation with virsh:
# virsh setmem {domain-id or domain-name} count
You must specify the count in kilobytes. The new count value cannot exceed the amount you specified when you created the guest. Values lower than 64 MB are unlikely to work with most guest operating systems. A higher maximum memory value does not affect active guests. If the new value is lower, the available memory shrinks and the guest may crash.
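For example, to set the example guest's memory allocation to 512000 KB (its allocation in the dominfo output above):
# virsh setmem r5b2-mySQL01 512000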
Use virsh domblkstat to display block device statistics for a running guest.
# virsh domblkstat GuestName block-device
Use virsh domifstat to display network interface statistics for a running guest.
# virsh domifstat GuestName interface-device
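For example, assuming the example guest has a disk exposed as vda and an interface named vnet0 (both device names are illustrative; check the guest's XML for the actual names):
# virsh domblkstat r5b2-mySQL01 vda
# virsh domifstat r5b2-mySQL01 vnet0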
A guest can be migrated to another host with virsh. The migrate command accepts parameters in the following format:
# virsh migrate --live GuestName DestinationURL
The --live parameter is optional; add it for live migrations.
The GuestName parameter represents the name of the guest which you want to migrate.
The DestinationURL parameter is the URL or hostname of the destination system. The destination system requires:
- Red Hat Enterprise Linux 5.4 (ASYNC update 4) or newer,
- the same hypervisor version, and
- the libvirt service must be started.
Once the command is entered you will be prompted for the root password of the destination system.
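For example, to live migrate the example guest to a destination host named host2.example.com; the hostname is illustrative and the qemu+ssh:// URI form assumes a KVM destination reached over SSH:
# virsh migrate --live r5b2-mySQL01 qemu+ssh://host2.example.com/system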
This section covers managing virtual networks with the virsh command. To list virtual networks:
# virsh net-list
This command generates output similar to:
# virsh net-list
Name                 State      Autostart
-----------------------------------------
default              active     yes
vnet1                active     yes
vnet2                active     yes
To view network information for a specific virtual network:
# virsh net-dumpxml NetworkName
This displays information about a specified virtual network in XML format:
# virsh net-dumpxml vnet1
<network>
<name>vnet1</name>
<uuid>98361b46-1581-acb7-1643-85a412626e70</uuid>
<forward dev='eth0'/>
<bridge name='vnet0' stp='on' forwardDelay='0' />
<ip address='192.168.100.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.100.128' end='192.168.100.254' />
</dhcp>
</ip>
</network>
Other virsh commands used in managing virtual networks are:
- virsh net-autostart network-name — autostarts a network specified as network-name.
- virsh net-create XMLfile — generates and starts a new network using an existing XML file.
- virsh net-define XMLfile — generates a new network device from an existing XML file without starting it.
- virsh net-destroy network-name — destroys a network specified as network-name.
- virsh net-name networkUUID — converts a specified networkUUID to a network name.
- virsh net-uuid network-name — converts a specified network-name to a network UUID.
- virsh net-start nameOfInactiveNetwork — starts an inactive network.
- virsh net-undefine nameOfInactiveNetwork — removes the definition of an inactive network.
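For example, a network can be defined from an XML file, set to start automatically, and then started; the file name vnet3.xml and network name vnet3 below are illustrative:
# virsh net-define vnet3.xml
# virsh net-autostart vnet3
# virsh net-start vnet3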