IPMP and Dynamic Reconfiguration
The dynamic reconfiguration (DR) feature enables you to reconfigure system hardware, such as
interfaces, while the system is running. This section explains how DR interoperates with
IPMP.
On a system that supports DR of NICs, IPMP can preserve connectivity and prevent disruption of existing connections. You can safely attach,
detach, or reattach NICs on a system that supports DR and uses IPMP.
This is possible because IPMP is integrated into the Reconfiguration Coordination Manager (RCM) framework.
RCM manages the dynamic reconfiguration of system components.
You typically use the cfgadm command to perform DR operations. However, some platforms
provide other methods. Consult your platform's documentation for details. You can find specific
documentation about DR from the following resources.
Table 30-1 Documentation Resources for Dynamic Reconfiguration
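Before you perform a DR operation, you can list the attachment points that the system reports by using cfgadm. The following command is only a minimal illustration; the attachment point IDs and states that it lists depend entirely on your platform and installed hardware.
# cfgadm -al
The output identifies each attachment point ID along with its receptacle and occupant states, which you use when you configure or unconfigure a component.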
Attaching NICs
You can add interfaces to an IPMP group at any time by
using the ifconfig command, as explained in How to Configure an IPMP Group With Multiple Interfaces. Thus, any interfaces on
system components that you attach after system boot can be plumbed and added
to an existing IPMP group. Or, if appropriate, you can configure the newly
added interfaces into their own IPMP group.
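For example, suppose that a newly attached NIC provides an interface named qfe3 and that an IPMP group named testgroup1 already exists. The interface name, group name, and address in this sketch are placeholders. You might plumb the interface and add it to the group with a command similar to the following:
# ifconfig qfe3 plumb 192.168.85.30 netmask + broadcast + group testgroup1 up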
These interfaces and the data addresses that are configured on them are immediately
available for use by the IPMP group. However, for the system to
automatically configure and use the interfaces after a reboot, you must create an
/etc/hostname.interface file for each new interface. For instructions, refer to How to Configure a Physical Interface After System Installation.
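For instance, to make the hypothetical qfe3 configuration from the previous example persist across reboots, you might create an /etc/hostname.qfe3 file with contents similar to the following; the address and group name are again placeholders:
192.168.85.30 netmask + broadcast + group testgroup1 up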
If an /etc/hostname.interface file already exists when the interface is attached, then RCM
automatically configures the interface according to the contents of this file. Thus,
the interface receives the same configuration that it would have received after system
boot.
Detaching NICs
All requests to detach system components that contain NICs are first checked to
ensure that connectivity can be preserved. For instance, by default you cannot
detach a NIC that is not in an IPMP group. You also
cannot detach a NIC that contains the only functioning interfaces in an IPMP
group. However, if you must remove the system component, you can override
this behavior by using the -f option of cfgadm, as explained in the cfgadm(1M)
man page.
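For example, to detach the component at a hypothetical attachment point pcisch0:hpc1_slot1, you might unconfigure and then disconnect it as follows. Add the -f option only when you must override the connectivity checks.
# cfgadm -c unconfigure pcisch0:hpc1_slot1
# cfgadm -c disconnect pcisch0:hpc1_slot1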
If the checks are successful, the data addresses associated with the detached NIC
fail over to a functioning NIC in the same group, as if
the NIC being detached had failed. When the NIC is detached, all
test addresses on the NIC's interfaces are unconfigured. Then, the NIC is unplumbed
from the system. If any of these steps fail, or if the
DR of other hardware on the same system component fails, then the interfaces are restored to their original configuration. You should receive a status message
regarding this event. Otherwise, the detach request completes successfully. You can remove the
component from the system. No existing connections are disrupted.
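To confirm that the failover occurred, you can examine the remaining interfaces in the group, for example with the following command; look for the detached NIC's data addresses among the logical interfaces of a surviving group member.
# ifconfig -a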
Reattaching NICs
RCM records the configuration information associated with any NICs that are detached from
a running system. As a result, RCM treats the reattachment of a previously detached NIC the same as the attachment
of a new NIC. That is, RCM only performs plumbing.
However, reattached NICs typically have an existing /etc/hostname.interface file. In this
case, RCM automatically configures the interface according to the contents of the existing
/etc/hostname.interface file. Additionally, RCM informs the in.mpathd daemon of each data address
that was originally hosted on the reattached interface. Thus, once the reattached
interface is functioning properly, all of its data addresses are failed back to
the reattached interface as if it had been repaired.
If the NIC being reattached does not have an /etc/hostname.interface file, then
no configuration information is available. RCM has no information regarding how to configure
the interface. One consequence of this situation is that addresses that were previously
failed over to another interface are not failed back.
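In that situation, you can configure the reattached interface manually and then create the missing /etc/hostname.interface file so that the configuration persists. The following sketch assumes a hypothetical interface qfe3 and group testgroup1; adjust the command to match your own addressing scheme.
# ifconfig qfe3 plumb group testgroup1 up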
NICs That Were Missing at System Boot
NICs that are not present at system boot represent a special instance of
failure detection. At boot time, the startup scripts track any interfaces with
/etc/hostname.interface files that cannot be plumbed. Any data addresses in such an
interface's /etc/hostname.interface file are automatically hosted on an alternative interface in the IPMP
group.
In such an event, you receive error messages similar to the following:
moving addresses from failed IPv4 interfaces: hme0 (moved to hme1)
moving addresses from failed IPv6 interfaces: hme0 (moved to hme1)
If no alternative interface exists, you receive error messages similar to the following:
moving addresses from failed IPv4 interfaces: hme0 (couldn't move;
no alternative interface)
moving addresses from failed IPv6 interfaces: hme0 (couldn't move;
no alternative interface)
Note - In this instance of failure detection, only data addresses that are explicitly specified
in the missing interface's /etc/hostname.interface file move to an alternative interface. Any
addresses that are usually acquired through other means, such as through RARP or
DHCP, are not acquired or moved.
If you use DR to attach an interface with the same name as an interface that was
missing at system boot, RCM automatically plumbs the interface. Then,
RCM configures the interface according to the contents of the interface's /etc/hostname.interface
file. Finally, RCM fails back any data addresses, just as if the interface
had been repaired. Thus, the final network configuration is identical to the configuration
that would have been made if the system had been booted with the
interface present.
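For example, if the missing NIC, or a replacement that provides an interface with the same name, is later inserted, you might bring it into service with a command similar to the following. The attachment point ID is hypothetical. RCM then plumbs the interface, configures it from its /etc/hostname.interface file, and fails back its data addresses.
# cfgadm -c configure pcisch0:hpc1_slot1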