Creating and Upgrading a Boot Environment When Non-Global Zones Are Installed (Tasks)
The following sections provide information about creating a boot environment when non-global zones are
installed and a procedure for upgrading when non-global zones are installed.
Creating a Boot Environment When a Non-Global Zone Is on a Separate File System
Creating a new boot environment from the currently running boot environment remains the same
as in previous releases with one exception. You can specify a destination disk
slice for a shared file system within a non-global zone. This exception occurs
under the following conditions:
- If the zonecfg add fs command was used on the current boot environment to create a separate file system for a non-global zone
- If this separate file system resides on a shared file system, such as /zone/root/export
To prevent this separate file system from being shared in the new
boot environment, the lucreate command enables specifying a destination slice for a separate file
system for a non-global zone. The argument to the -m option has a
new optional field, zonename. This new field places the non-global zone's separate file
system on a separate slice in the new boot environment. For more information
about setting up a non-global zone with a separate file system, see zonecfg(1M).
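For reference, a separate file system of this kind might have been added to the zone's configuration as follows. The zone name zone1, the /export mount point, and the device names are hypothetical and serve only as an illustration:
# zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/export
zonecfg:zone1:fs> set special=/dev/dsk/c0t0d0s7
zonecfg:zone1:fs> set raw=/dev/rdsk/c0t0d0s7
zonecfg:zone1:fs> set type=ufs
zonecfg:zone1:fs> end
zonecfg:zone1> exit
With this configuration, the file system is mounted at /export inside the zone, so from the global zone it appears under the zone's root path, for example /zone1/root/export.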
Note - By default, any file system other than the critical file systems (root (/),
/usr, and /opt file systems) is shared between the current and new boot
environments. Updating shared files in the active boot environment also updates data in
the inactive boot environment. For example, the /export file system is a shared file
system. If you use the -m option with the zonename field, the
non-global zone's file system is copied to a separate slice and data is
not shared. This option prevents non-global zone file systems that were created with
the zonecfg add fs command from being shared between the boot environments.
Upgrading With Solaris Live Upgrade When Non-Global Zones Are Installed on a System (Tasks)
The following procedure provides detailed instructions for upgrading with Solaris Live Upgrade for
a system with non-global zones installed.
- Remove existing Solaris Live Upgrade packages.
The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software
needed to upgrade by using Solaris Live Upgrade. These packages include existing software, new
features, and bug fixes. If you do not remove the existing packages and
install the new packages on your system before using Solaris Live Upgrade,
upgrading to the target release fails.
Note - The SUNWlucfg package is introduced in Solaris Express 5/07 build 52. If you
are upgrading from build 52 or later, you must remove this package along
with the other two packages. Then install the three packages from the target
release.
# pkgrm SUNWlucfg SUNWluu SUNWlur
- Install the Solaris Live Upgrade packages.
- Insert the Solaris DVD or CD.
This media contains the packages for the release to which you are upgrading.
- Install the packages in the following order from the installation media or network
installation image.
# pkgadd -d path_to_packages SUNWlucfg SUNWlur SUNWluu
In the following example, the packages are installed from DVD media. The Product directory path shown is typical for Solaris media but might differ depending on where the media or image is mounted.
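# pkgadd -d /cdrom/cdrom0/Solaris_11/Product SUNWlucfg SUNWlur SUNWluu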
- Verify that the packages have been installed successfully.
# pkgchk -v SUNWlucfg SUNWlur SUNWluu
- Create the new boot environment.
# lucreate [-A 'BE_description'] [-c BE_name] \
-m mountpoint:device[,metadevice]:fs_options[:zonename] [-m ...] -n BE_name
- -n BE_name
The name of the boot environment to be created. BE_name must be unique on the system.
- -A 'BE_description'
(Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.
- -c BE_name
Assigns the name BE_name to the active boot environment. This option is not required and is only used when the first boot environment is created. If you run lucreate for the first time and you omit the -c option, the software creates a default name for you.
- -m mountpoint:device[,metadevice]:fs_options[:zonename] [-m ...]
Specifies the file system configuration of the new boot environment in the vfstab. The file systems that are specified as arguments to -m can be on the same disk or spread across multiple disks. Repeat this option as many times as needed to create the required file systems.
mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.
The device field can be one of the following:
The name of a disk device, of the form /dev/dsk/cwtxdysz
The name of a Solaris Volume Manager volume, of the form /dev/md/dsk/dnum
The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name
The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent
The fs_options field can be one of the following:
ufs, which indicates a UFS file system.
vxfs, which indicates a Veritas file system.
swap, which indicates a swap file system. The swap mount point must be a - (hyphen).
For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors); a brief sketch follows this list.
zonename specifies that a non-global zone's separate file system be placed on a separate slice. This option is used when the zone's separate file system is in a shared file system such as /zone1/root/export. This option copies the zone's separate file system to a new slice and prevents this file system from being shared. The separate file system was created with the zonecfg add fs command.
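For illustration only, the mirror keywords might be combined as follows to request a mirrored root file system. The metadevice names d1, d2, and d10 and the disk slices are hypothetical:
# lucreate -n mirrored_be -m /:/dev/md/dsk/d10:ufs,mirror \
-m /:/dev/dsk/c0t0d0s0,/dev/md/dsk/d1:attach \
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d2:attach
Here d10 is the mirror that holds the root (/) file system, and each attach keyword adds a physical slice as a submirror.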
In the following example, a new boot environment named newbe is created.
The root (/) file system is placed on c0t1d0s4. All non-global zones
in the current boot environment are copied to the new boot environment.
The separate file system of the non-global zone named zone1 is placed on its own slice, c0t1d0s1.
Note - By default, any file system other than the critical file systems (root (/),
/usr, and /opt file systems) is shared between the current and new boot
environments. The /export file system is a shared file system. If you
use the -m option with the zonename field, the non-global zone's file system
is placed on a separate slice and its data is not shared. This option prevents zone file systems
that were created with the zonecfg add fs command from being shared between the boot environments.
See zonecfg(1M) for details.
# lucreate -n newbe -m /:/dev/dsk/c0t1d0s4:ufs \
-m /export:/dev/dsk/c0t1d0s1:ufs:zone1
- Upgrade the boot environment.
The operating system image to be used for the upgrade is taken
from the network.
# luupgrade -u -n BE_name -s os_image_path
- -u
Upgrades an operating system image on a boot environment
- -n BE_name
Specifies the name of the boot environment that is to be upgraded
- -s os_image_path
Specifies the path name of a directory that contains an operating system image
In this example, the new boot environment, newbe, is upgraded from a network
installation image.
# luupgrade -n newbe -u -s /net/server/export/Solaris_11/combined.solaris_wos
- (Optional) Verify that the boot environment is bootable.
The lustatus command reports whether the boot environment creation is complete and whether the boot environment is bootable.
# lustatus
Boot Environment           Is        Active  Active     Can     Copy
Name                       Complete  Now     On Reboot  Delete  Status
-------------------------- --------- ------- ---------- ------- -------
c0t1d0s0                   yes       yes     yes        no      -
newbe                      yes       no      no         yes     -
- Activate the new boot environment.
# luactivate BE_name
BE_name specifies the name of the boot environment that is to be activated.
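For example, to activate the boot environment that was created and upgraded earlier in this procedure:
# luactivate newbe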
Note - For an x86 based system, the luactivate command is required when booting a
boot environment for the first time. Subsequent activations can be made by selecting
the boot environment from the GRUB menu. For step-by-step instructions, see x86: Activating a Boot Environment With the GRUB Menu.
To successfully activate a boot environment, that boot environment must meet several conditions.
For more information, see Activating a Boot Environment.
- Reboot.
# init 6
Caution - Use only the init or shutdown commands to reboot. If you use the
reboot, halt, or uadmin commands, the system does not switch boot environments. The most
recently active boot environment is booted again.
The boot environments have switched and the new boot environment is now the
current boot environment.
- (Optional) Fall back to a different boot environment.
If the new boot environment is not viable or you want to
switch to another boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).
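As a minimal sketch, if the new boot environment boots but you decide to return to the original one, reactivating the original boot environment (named c0t1d0s0 in the earlier lustatus output) and rebooting is typically sufficient. Recovering from a boot environment that fails to boot requires the additional steps described in that chapter.
# luactivate c0t1d0s0
# init 6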