Creating and Destroying ZFS Storage Pools
The following sections describe different scenarios for creating and destroying ZFS storage pools.
By design, creating and destroying pools is fast and easy. However, be cautious
when doing these operations. Although checks are performed to prevent using devices known
to be in use in a new pool, ZFS cannot always know when
a device is already in use. Destroying a pool is even easier.
Use zpool destroy with caution. This is a simple command with significant consequences.
Creating a ZFS Storage Pool
To create a storage pool, use the zpool create command. This command takes a
pool name and any number of virtual devices as arguments. The pool name
must satisfy the naming conventions outlined in ZFS Component Naming Requirements.
Creating a Basic Storage Pool
The following command creates a new pool named tank that consists of the
disks c1t0d0 and c1t1d0:
# zpool create tank c1t0d0 c1t1d0
These whole disks are found in the /dev/dsk directory and are labeled appropriately
by ZFS to contain a single, large slice. Data is dynamically striped across
both disks.
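After the pool is created, you can confirm that it is online and healthy. A minimal check, assuming the tank pool created above (the reported sizes and layout depend on your disks), is:
# zpool list tank
# zpool status tank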
Creating a Mirrored Storage Pool
To create a mirrored pool, use the mirror keyword, followed by any number
of storage devices that will comprise the mirror. Multiple mirrors can be specified
by repeating the mirror keyword on the command line. The following command creates
a pool with two, two-way mirrors:
# zpool create tank mirror c1d0 c2d0 mirror c3d0 c4d0
The second mirror keyword indicates that a new top-level virtual device is being
specified. Data is dynamically striped across both mirrors, and each mirror keeps a
redundant copy of its data on both of its disks.
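After creation, the zpool status command lists both mirror virtual devices as top-level children of the pool. The config section of the output would be similar to the following (representative output only; the surrounding status lines are omitted):

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c2d0    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c3d0    ONLINE       0     0     0
            c4d0    ONLINE       0     0     0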
Currently, the following operations are supported on a ZFS mirrored configuration:
Adding another set of disks for an additional top-level vdev to an existing mirrored configuration. For more information, see Adding Devices to a Storage Pool.
Attaching additional disks to an existing mirrored configuration, or attaching additional disks to a non-replicated configuration to create a mirrored configuration (see the example commands following this list). For more information, see Attaching and Detaching Devices in a Storage Pool.
Replacing a disk or disks in an existing mirrored configuration, as long as the replacement disks are equal to or larger than the devices they replace. For more information, see Replacing Devices in a Storage Pool.
Detaching a disk from a mirrored configuration, as long as the remaining devices provide adequate redundancy for the configuration. For more information, see Attaching and Detaching Devices in a Storage Pool.
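For example, the following commands sketch these operations on the tank pool created above. The extra device names (c5d0 through c8d0) are placeholders; substitute disks that are actually available on your system:
# zpool add tank mirror c5d0 c6d0
# zpool attach tank c1d0 c7d0
# zpool detach tank c7d0
# zpool replace tank c2d0 c8d0
The first command adds a third two-way mirror as a new top-level virtual device, the next two attach and then detach an extra disk on the mirror that contains c1d0, and the last replaces c2d0 with c8d0.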
Currently, the following operation is not supported on a mirrored configuration:
You cannot outright remove a device from a mirrored configuration. An RFE is filed for this feature.
Creating RAID-Z Storage Pools
Creating a single-parity RAID-Z pool is identical to creating a mirrored pool, except
that the raidz or raidz1 keyword is used instead of mirror. The following
example shows how to create a pool with a single RAID-Z device that
consists of five disks:
# zpool create tank raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 /dev/dsk/c5t0d0
This example demonstrates that disks can be specified by using their full paths.
The /dev/dsk/c5t0d0 device is identical to the c5t0d0 device.
A similar configuration could be created with disk slices. For example:
# zpool create tank raidz c1t0d0s0 c2t0d0s0 c3t0d0s0 c4t0d0s0 c5t0d0s0
However, the disks must be preformatted to have an appropriately sized slice zero.
You can create a double-parity RAID-Z configuration by using the raidz2 keyword
when the pool is created. For example:
# zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0
# zpool status -v tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0

errors: No known data errors
Currently, the following operations are supported on a ZFS RAID-Z configuration:
Adding another set of disks for an additional top-level vdev to an existing RAID-Z configuration (see the example commands following this list). For more information, see Adding Devices to a Storage Pool.
Replacing a disk or disks in an existing RAID-Z configuration, as long as the replacement disks are equal to or larger than the devices they replace. For more information, see Replacing Devices in a Storage Pool.
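For example, the following commands sketch these operations on the five-disk RAID-Z pool created above. The additional device names are placeholders:
# zpool add tank raidz c6t0d0 c7t0d0 c8t0d0 c9t0d0 c10t0d0
# zpool replace tank c2t0d0 c11t0d0
The first command adds a second five-disk RAID-Z virtual device to the pool, and the second replaces c2t0d0 with c11t0d0.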
Currently, the following operations are not supported on a RAID-Z configuration:
Attaching an additional disk to an existing RAID-Z configuration.
Detaching a disk from a RAID-Z configuration.
You cannot outright remove a device from a RAID-Z configuration. An RFE is filed for this feature.
For more information about a RAID-Z configuration, see RAID-Z Storage Pool Configuration.
Creating a ZFS Storage Pool with Log Devices
By default, the ZFS intent log (ZIL) is allocated from blocks within the
main pool. However, better performance might be possible by using separate intent log devices,
such as NVRAM or a dedicated disk. For more information about ZFS log
devices, see Setting Up Separate ZFS Logging Devices.
You can set up a ZFS logging device when the storage pool
is created or after the pool is created.
For example, create a mirrored storage pool with mirrored log devices.
# zpool create datap mirror c1t1d0 c1t2d0 mirror c1t3d0 c1t4d0 log mirror c1t5d0 c1t8d0
# zpool status
  pool: datap
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        datap         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t1d0    ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t3d0    ONLINE       0     0     0
            c1t4d0    ONLINE       0     0     0
        logs          ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t5d0    ONLINE       0     0     0
            c1t8d0    ONLINE       0     0     0

errors: No known data errors
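You can also add log devices to an existing pool. A minimal sketch, assuming the spare devices c1t9d0 and c1t10d0 are available, is:
# zpool add datap log mirror c1t9d0 c1t10d0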
Creating a ZFS Storage Pool with Cache Devices
You can create a storage pool with cache devices to cache storage
pool data. For example:
# zpool create tank mirror c2t0d0 c2t1d0 c2t3d0 cache c2t5d0 c2t8d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
        cache
          c2t5d0    ONLINE       0     0     0
          c2t8d0    ONLINE       0     0     0
Review the following points when considering whether to create a ZFS storage pool
with cache devices:
Using cache devices provides the greatest performance improvement for random-read workloads of mostly static content.
Capacity and reads can be monitored by using the zpool iostat command.
Single or multiple cache devices can be added when the pool is created. They can also be added and removed after the pool is created (see the example commands following this list). For more information, see Example 4-3.
Cache devices cannot be mirrored or be part of a RAID-Z configuration.
If a read error is encountered on a cache device, that read I/O is reissued to the original storage pool device, which might be part of a mirrored or RAID-Z configuration. The content of the cache devices is considered volatile, as is the case with other system caches.
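For example, the following commands sketch how a cache device might be added to and removed from the existing tank pool, and how cache activity can be observed. The c2t9d0 device name is a placeholder:
# zpool add tank cache c2t9d0
# zpool iostat -v tank 5
# zpool remove tank c2t9d0
The zpool iostat -v form reports statistics for the individual virtual devices, including the cache devices, at the 5-second interval given in this example.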
Handling ZFS Storage Pool Creation Errors
Pool creation errors can occur for many reasons. Some of these reasons are
obvious, such as when a specified device doesn't exist, while other reasons are
more subtle.
Detecting In-Use Devices
Before formatting a device, ZFS first determines if the disk is in
use by ZFS or some other part of the operating system. If the
disk is in use, you might see errors such as the following:
# zpool create tank c1t0d0 c1t1d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t0d0s0 is currently mounted on /. Please see umount(1M).
/dev/dsk/c1t0d0s1 is currently mounted on swap. Please see swap(1M).
/dev/dsk/c1t1d0s0 is part of active ZFS pool zeepool. Please see zpool(1M).
Some of these errors can be overridden by using the -f option, but
most errors cannot. The following uses cannot be overridden by using the
-f option, and you must manually correct them:
- Mounted file system
The disk or one of its slices contains a file system that is currently mounted. To correct this error, use the umount command.
- File system in /etc/vfstab
The disk contains a file system that is listed in the /etc/vfstab file, but the file system is not currently mounted. To correct this error, remove or comment out the line in the /etc/vfstab file.
- Dedicated dump device
The disk is in use as the dedicated dump device for the system. To correct this error, use the dumpadm command.
- Part of a ZFS pool
The disk or file is part of an active ZFS storage pool. To correct this error, use the zpool command to destroy the pool.
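For example, the errors shown above report that c1t1d0 is part of the existing pool zeepool. If that pool is genuinely no longer needed, the condition can be corrected by destroying it, after which c1t1d0 is no longer reported as in use:
# zpool destroy zeepool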
The following in-use checks serve as helpful warnings and can be overridden by
using the -f option to create the pool:
- Contains a file system
The disk contains a known file system, though it is not mounted and doesn't appear to be in use.
- Part of volume
The disk is part of a Solaris Volume Manager (SVM) volume.
- Live upgrade
The disk is in use as an alternate boot environment for Solaris Live Upgrade.
- Part of exported ZFS pool
The disk is part of a storage pool that has been exported or manually removed from a system. In the latter case, the pool is reported as potentially active, as the disk might or might not be a network-attached drive in use by another system. Be cautious when overriding a potentially active pool.
The following example demonstrates how the -f option is used:
# zpool create tank c1t0d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t0d0s0 contains a ufs filesystem.
# zpool create -f tank c1t0d0
Ideally, correct the errors rather than use the -f option.
Mismatched Replication Levels
Creating pools with virtual devices of different replication levels is not recommended. The
zpool command tries to prevent you from accidentally creating a pool with mismatched
levels of redundancy. If you try to create a pool with such a
configuration, you see errors similar to the following:
# zpool create tank c1t0d0 mirror c2t0d0 c3t0d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: both disk and mirror vdevs are present
# zpool create tank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0 c5t0d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: 2-way mirror and 3-way mirror vdevs are present
You can override these errors with the -f option, though this practice is
not recommended. The command also warns you about creating a mirrored or RAID-Z
pool using devices of different sizes. While this configuration is allowed, the mismatched
device sizes result in unused space on the larger devices, and the -f option is
required to override the warning.
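Rather than overriding these errors, restructure the command so that every top-level virtual device uses the same replication. For example, the second command above has consistent replication if both mirrors are two-way:
# zpool create tank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0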
Doing a Dry Run of Storage Pool Creation
Because creating a pool can fail unexpectedly in different ways, and because formatting
disks is such a potentially harmful action, the zpool create command has an
additional option, -n, which simulates creating the pool without actually writing data to
disk. This option performs the device in-use checking and replication level validation, and reports
any errors in the process. If no errors are found, you see
output similar to the following:
# zpool create -n tank mirror c1t0d0 c1t1d0
would create 'tank' with the following layout:

        tank
          mirror
            c1t0d0
            c1t1d0
Some errors cannot be detected without actually creating the pool. The most common
example is specifying the same device twice in the same configuration. This error
cannot be reliably detected without writing the data itself, so the zpool create -n command
can report success and yet fail to create the pool when run for
real.
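For example, the following dry run names the same disk twice. Consistent with the behavior described above, the -n option can report a valid layout for this configuration, while the same command without -n fails when the pool is actually created:
# zpool create -n tank mirror c1t0d0 c1t0d0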
Default Mount Point for Storage Pools
When a pool is created, the default mount point for the root
dataset is /pool-name. This directory must either not exist or be empty. If the
directory does not exist, it is automatically created. If the directory is empty,
the root dataset is mounted on top of the existing directory. To create
a pool with a different default mount point, use the -m option
of the zpool create command:
# zpool create home c1t0d0
default mountpoint '/home' exists and is not empty
use '-m' option to specify a different default
# zpool create -m /export/zfs home c1t0d0
This command creates a new pool home and the home dataset with
a mount point of /export/zfs.
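You can confirm the resulting mount point by querying the mountpoint property of the new pool's root dataset, for example:
# zfs get mountpoint home
# zfs list -r home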
For more information about mount points, see Managing ZFS Mount Points.
Destroying ZFS Storage Pools
Pools are destroyed by using the zpool destroy command. This command destroys the pool
even if it contains mounted datasets.
# zpool destroy tank
Caution - Be very careful when you destroy a pool. Make sure you are destroying
the right pool and that you always have copies of your data. If
you accidentally destroy the wrong pool, you can attempt to recover the pool. For
more information, see Recovering Destroyed ZFS Storage Pools.
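If you do destroy a pool by mistake, a first step is to check whether it is still available for recovery. Destroyed pools that can be recovered appear in the output of the following command, as described in the section referenced above:
# zpool import -D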
Destroying a Pool With Faulted Devices
The act of destroying a pool requires that data be written to
disk to indicate that the pool is no longer valid. This state information
prevents the devices from showing up as a potential pool when you perform
an import. If one or more devices are unavailable, the pool can still
be destroyed. However, the necessary state information won't be written to these damaged
devices.
These devices, when suitably repaired, are reported as potentially active when you create
a new pool, and appear as valid devices when you search for pools
to import. If a pool has enough faulted devices such that the pool
itself is faulted (meaning that a top-level virtual device is faulted), then the
command prints a warning and cannot complete without the -f option. This option
is necessary because the pool cannot be opened, so whether data is stored
there or not is unknown. For example:
# zpool destroy tank
cannot destroy 'tank': pool is faulted
use '-f' to force destruction anyway
# zpool destroy -f tank
For more information about pool and device health, see Determining the Health Status of ZFS Storage Pools.
For more information about importing pools, see Importing ZFS Storage Pools.