ZPOOL-ATTACH(8)            System Manager's Manual            ZPOOL-ATTACH(8)
NAME
     zpool-attach — attach new device to existing ZFS vdev
SYNOPSIS
     zpool attach [-fsw] [-o property=value] pool device new_device
DESCRIPTION
Attaches new_device to the existing device. The behavior differs
    depending on whether the existing device is a RAID-Z device, or a
    mirror/plain device.
If the existing device is a mirror or plain device (e.g. specified as
    "sda" or "mirror-7"), the new device will be mirrored with the
    existing device, a resilver will be initiated, and the new device
    will contribute to additional redundancy once the resilver completes.
    If device is not currently part of a mirrored configuration, device
    automatically transforms into a two-way mirror of device and
    new_device. If device is part of a two-way mirror, attaching
    new_device creates a three-way mirror, and so on. In either case,
    new_device begins to resilver immediately and any running scrub is
    canceled.
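For example, assuming a pool named tank in which sda is currently a
    plain (single-disk) top-level vdev (the pool and device names here
    are illustrative), the following command turns it into a two-way
    mirror of sda and sdb and starts a resilver:

          # zpool attach tank sda sdb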
If the existing device is a RAID-Z device (e.g. specified as
    "raidz2-0"), the new device will become part
    of that RAID-Z group. A "raidz expansion" will be initiated, and
    once the expansion completes, the new device will contribute additional
    space to the RAID-Z group. The expansion entails reading all allocated space
    from existing disks in the RAID-Z group, and rewriting it to the new disks
    in the RAID-Z group (including the newly added
    device). Its progress can be monitored with
    zpool status.
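For example, assuming a pool named tank containing a RAID-Z2 vdev named
    raidz2-0 (names here are illustrative), the following command starts
    an expansion onto sdf, after which its progress can be watched:

          # zpool attach tank raidz2-0 sdf
          # zpool status tank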
Data redundancy is maintained during and after the expansion. If a disk fails while the expansion is in progress, the expansion pauses until the health of the RAID-Z vdev is restored (e.g. by replacing the failed disk and waiting for reconstruction to complete). Expansion does not change the number of failures that can be tolerated without data loss (e.g. a RAID-Z2 is still a RAID-Z2 even after expansion). A RAID-Z vdev can be expanded multiple times.
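For example, if disk sdc fails while the expansion is running (device
    names illustrative), replacing it allows reconstruction, and then the
    paused expansion, to proceed:

          # zpool replace tank sdc sdg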
After the expansion completes, old blocks retain their old
    data-to-parity ratio (e.g. a 5-wide RAID-Z2 has 3 data and 2 parity)
    but are distributed among the larger set of disks. New blocks will be
    written with the new data-to-parity ratio (e.g. a 5-wide RAID-Z2
    which has been expanded once to 6-wide has 4 data and 2 parity).
    However, the vdev's assumed parity ratio does not change, so slightly
    less space than is expected may be reported for newly-written blocks,
    according to zfs list, df, ls -s, and similar tools.
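As a simplified illustration of the numbers above (real RAID-Z space
    accounting also depends on block size): a 5-wide RAID-Z2 stores 3
    data sectors for every 5 sectors written, so tools report free space
    at a 3/5 ratio. After expanding to 6-wide, new blocks store 4 data
    sectors per 6 written (a 4/6 ratio), but space is still reported at
    3/5; with 12 TiB of raw free space, about 7.2 TiB is reported
    available even though roughly 8 TiB of newly-written data (12 × 4/6)
    would actually fit.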
A pool-wide scrub is initiated at the end of the expansion in order to verify the checksums of all blocks which have been copied during the expansion.
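Assuming a pool named tank (illustrative), the progress of this scrub
    can be checked with zpool status, or waited on with:

          # zpool wait -t scrub tank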
OPTIONS
     -f      Forces use of new_device, even if it appears to be in use.
             Not all devices can be overridden in this manner.

     -o property=value
             Sets the given pool properties. See the zpoolprops(7) manual
             page for a list of valid properties that can be set. The
             only property supported at the moment is ashift.

     -s      When attaching to a mirror or plain device, the new_device
             is reconstructed sequentially to restore redundancy as
             quickly as possible. Checksums are not verified during
             sequential reconstruction so a scrub is started when the
             resilver completes.

     -w      Waits until new_device has finished resilvering and
             expanding before returning.
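For example, assuming a pool named tank (device names illustrative),
    the following attaches sdb to the plain device sda using sequential
    reconstruction and returns only once the resilver has finished:

          # zpool attach -sw tank sda sdb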
SEE ALSO
     zpool-add(8), zpool-detach(8), zpool-import(8), zpool-initialize(8),
     zpool-online(8), zpool-replace(8), zpool-resilver(8)
June 28, 2023                                                        OpenZFS