ZFS Configuration Guide


Setting Up ZFS on Different Hardware and Storage Components

The purpose of this guide is to provide examples of configuring ZFS on specific systems or storage components. The ZFS configuration examples in this guide include systems with 2, 4, 8, and 48 disks, and ZFS configurations for systems with storage arrays.

Consider the following best practices when configuring ZFS on any hardware:

  • Run ZFS on a 64-bit system with at least 1 Gbyte of memory.
  • Always use ZFS redundancy, such as a mirrored or RAID-Z configuration, in a production environment, regardless of the underlying storage technology.
  • Set up ZFS hot spares to reduce the impact of hardware failures.
  • Use whole disks for a ZFS configuration in a production environment. Whole disks generally provide better performance and an easier replacement and recovery process than slices.
  • Use two disks for a mirrored root pool, and then use additional disks to create a redundant non-root pool, if necessary.
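
For example, the following command creates a redundant mirrored pool from two whole disks with one hot spare (the pool name tank and the device names are hypothetical):

# zpool create tank mirror c1t0d0 c2t0d0 spare c3t0d0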

For detailed ZFS best practice information, see the following page:

ZFS Best Practices Guide [1]

For detailed descriptions of ZFS features and commands, and more details about setting up ZFS storage pools and file systems, see the following documents:

SXCE ZFS Administration Guide [2]

Solaris 10 ZFS Administration Guide [3]

General ZFS Configuration Guidelines for Two-Disk Systems

Consider the following guidelines when configuring ZFS on systems with two disks.

  • Starting in the Solaris Express Community Edition (SXCE), build 90 release and in the Solaris 10 10/08 release, you can install and boot from a ZFS mirrored root pool. However, a bootable ZFS root pool must be created from disk slices. For example:
mirror c0t0d0s0 and c0t1d0s0
  • A bootable ZFS root pool can be created during an initial installation or a Custom JumpStart installation.
  • In Solaris 10 releases prior to the Solaris 10 10/08 release, use SVM mirroring to mirror the root slice and swap areas across the two disks.
  • Use the remaining slices to create a mirrored ZFS configuration. For example (the full zpool create syntax is shown after this list):
 mirror c0t0d0s7 and c0t1d0s7
  • Using slices in a ZFS configuration is not recommended for a production environment. However, you can create a slice that represents the entire disk if you want to create a bootable ZFS pool in the SXCE, build 90 release. The other alternative is to add more disks so that you can create a redundant ZFS configuration across whole disks.
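
For example, the following sketch creates a mirrored pool from the remaining s7 slices (the pool name tank is hypothetical):

# zpool create tank mirror c0t0d0s7 c0t1d0s7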

How to Set Up ZFS on an x4500 System

The x4500 has 6 controllers with 8 disks each, for a total of 48 disks. By default, this system is configured with raidz1 devices composed of disks from each of the 6 controllers. This redundant configuration is optimized for space with single-parity data protection, not for performance.

Consider the following general configuration guidelines if the default configuration doesn't meet your needs:

  • In the SXCE release, or starting in the Solaris 10 10/08 release, mirror the boot disks across controllers during an initial installation or a Custom JumpStart installation, if your configuration allows. For example, mirror c4t0d0 to c5t0d0.
  • Set up hot spares for the ZFS storage pool.
  • Use a redundant ZFS mirrored or raidz configuration.

An illustrated example of setting up a RAID-Z configuration on an x4540 system is provided in Recipe for a ZFS RAID-Z Storage Pool on Sun Fire X4540.

ZFS Configuration Example (x4500 with raidz2)

In this example, the disks are configured as follows:

  • In the SXCE and Solaris 10 releases, c4t0d0 and c5t0d0 can form a mirrored ZFS root pool that contains the bootable ZFS dataset and the dump and swap devices (see the sketch after this list)
  • A ZFS root pool must be created with slices to be bootable. The equivalent zpool create syntax is, for example:

zpool create rpool mirror c4t0d0s0 c5t0d0s0

  • A ZFS storage pool contains 7 raidz2 devices composed of 6 disks each
  • This raidz2 configuration provides approximately 12.5 terabytes of file system space
  • c0t0d0, c1t0d0, c6t0d0, and c7t0d0 are used for hot spares
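
The dump and swap devices on a ZFS root pool are ZFS volumes, normally created by the installer. A hedged sketch of creating them manually (the 2-Gbyte sizes are illustrative assumptions):

# zfs create -V 2g rpool/swap            # ZFS volume for swap
# swap -a /dev/zvol/dsk/rpool/swap       # activate the swap device
# zfs create -V 2g rpool/dump            # ZFS volume for crash dumps
# dumpadm -d /dev/zvol/dsk/rpool/dump    # configure the dump device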

In the following examples, the rzpool storage pool is built from 7 raidz2 devices of 6 disks each. Depending on your shell environment, you might run into a maximum command-line length limit, so the commands are separated into different steps.

First, create the storage pool with 4 raidz2 devices of 6 disks each.

# zpool create rzpool raidz2 c0t1d0 c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
  raidz2 c0t2d0 c1t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
  raidz2 c0t3d0 c1t3d0 c4t3d0 c5t3d0 c6t3d0 c7t3d0 \
  raidz2 c0t4d0 c1t4d0 c4t4d0 c5t4d0 c6t4d0 c7t4d0
# zpool status
  pool: rzpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c4t4d0  ONLINE       0     0     0
            c5t4d0  ONLINE       0     0     0
            c6t4d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0

errors: No known data errors

Then, add the 3 additional raidz2 devices of 6 disks each as well as the 4 spares.

# zpool add rzpool raidz2 c0t5d0 c1t5d0 c4t5d0 c5t5d0 c6t5d0 c7t5d0
# zpool add rzpool raidz2 c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 c7t6d0
# zpool add rzpool raidz2 c0t7d0 c1t7d0 c4t7d0 c5t7d0 c6t7d0 c7t7d0
# zpool add rzpool spare c0t0d0 c1t0d0 c6t0d0 c7t0d0

Review the pool configuration.

# zpool status
  pool: rzpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c4t4d0  ONLINE       0     0     0
            c5t4d0  ONLINE       0     0     0
            c6t4d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c4t5d0  ONLINE       0     0     0
            c5t5d0  ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c4t6d0  ONLINE       0     0     0
            c5t6d0  ONLINE       0     0     0
            c6t6d0  ONLINE       0     0     0
            c7t6d0  ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c4t7d0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
            c6t7d0  ONLINE       0     0     0
            c7t7d0  ONLINE       0     0     0
        spares
          c0t0d0    AVAIL   
          c1t0d0    AVAIL   
          c6t0d0    AVAIL   
          c7t0d0    AVAIL   

errors: No known data errors
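
Optionally, scrub the new pool and confirm that it is healthy:

# zpool scrub rzpool
# zpool status -x
all pools are healthy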

Identify the available pool and file system space.

# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
rzpool                 19.0T    238K   19.0T     0%  ONLINE     -
# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
rzpool   151K  12.5T  49.0K  /rzpool

Create file systems as needed. For example:

# zfs create rzpool/users
# zfs create rzpool/devs
# zfs create rzpool/data
# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
rzpool         331K  12.5T  54.9K  /rzpool
rzpool/data   49.0K  12.5T  49.0K  /rzpool/data
rzpool/devs   49.0K  12.5T  49.0K  /rzpool/devs
rzpool/users  49.0K  12.5T  49.0K  /rzpool/users
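
You can then set properties on the new file systems as needed. For example, the following caps the users file system at 1 terabyte (the quota value is an illustrative assumption):

# zfs set quota=1T rzpool/users
# zfs get quota rzpool/users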

ZFS Configuration Example (x4500 with mirror)

In this example, the disks are configured as follows:

  • In the SXCE and Solaris 10 releases, c5t0d0 and c4t0d0 can form a mirrored ZFS root pool that contains the bootable ZFS dataset and the dump and swap devices
  • Create the mirrored ZFS root pool during an initial installation or a Custom JumpStart installation. The equivalent zpool create syntax is, for example:

zpool create rpool mirror c5t0d0s0 c4t0d0s0

  • A ZFS storage pool contains 14 three-way mirrors
  • c0t0d0, c1t0d0, c6t0d0, and c7t0d0 are used for ZFS hot spares
  • This mirrored ZFS configuration provides approximately 6.24 terabytes of file system space

In the following example, the mpool storage pool is created with the first 7 three-way mirrors, built from disks on controllers c0, c1, and c4.

# zpool create mpool mirror c0t1d0 c1t1d0 c4t1d0 mirror c0t2d0 c1t2d0 c4t2d0 \
  mirror c0t3d0 c1t3d0 c4t3d0 mirror c0t4d0 c1t4d0 c4t4d0 mirror c0t5d0 c1t5d0 c4t5d0 \
  mirror c0t6d0 c1t6d0 c4t6d0 mirror c0t7d0 c1t7d0 c4t7d0

Review the pool configuration.

# zpool status
  pool: mpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mpool       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c4t4d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c4t5d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c4t6d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c4t7d0  ONLINE       0     0     0

errors: No known data errors

Then, add the 7 remaining mirrors and the 4 hot spares.

# zpool add mpool mirror c5t1d0 c6t1d0 c7t1d0
# zpool add mpool mirror c5t2d0 c6t2d0 c7t2d0
# zpool add mpool mirror c5t3d0 c6t3d0 c7t3d0
# zpool add mpool mirror c5t4d0 c6t4d0 c7t4d0
# zpool add mpool mirror c5t5d0 c6t5d0 c7t5d0
# zpool add mpool mirror c5t6d0 c6t6d0 c7t6d0
# zpool add mpool mirror c5t7d0 c6t7d0 c7t7d0
# zpool add mpool spare c0t0d0 c1t0d0 c6t0d0 c7t0d0
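
If your release supports the autoreplace pool property, you can also enable automatic replacement, so that a new disk inserted into the same physical slot is brought into the pool without an explicit zpool replace command (a hedged sketch; check zpool get all for availability):

# zpool set autoreplace=on mpool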

Review the pool configuration.

# zpool status
  pool: mpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mpool       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c4t4d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c4t5d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c4t6d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c4t7d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t4d0  ONLINE       0     0     0
            c6t4d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t5d0  ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t6d0  ONLINE       0     0     0
            c6t6d0  ONLINE       0     0     0
            c7t6d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
            c6t7d0  ONLINE       0     0     0
            c7t7d0  ONLINE       0     0     0
        spares
          c0t0d0    AVAIL   
          c1t0d0    AVAIL   
          c6t0d0    AVAIL   
          c7t0d0    AVAIL   

errors: No known data errors

Identify the available pool and file system space.

# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
mpool                  6.34T    125K   6.34T     0%  ONLINE     -
# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
mpool   118K  6.24T  24.5K  /mpool

Create file systems as needed. For example:

# zfs create mpool/users
# zfs create mpool/devs
# zfs create mpool/data
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
mpool         208K  6.24T  28.5K  /mpool
mpool/data   24.5K  6.24T  24.5K  /mpool/data
mpool/devs   24.5K  6.24T  24.5K  /mpool/devs
mpool/users  24.5K  6.24T  24.5K  /mpool/users
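
Once the file systems contain data, snapshots provide inexpensive point-in-time copies. For example (the snapshot name is an illustrative assumption):

# zfs snapshot mpool/users@today
# zfs list -t snapshot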

Setting Up ZFS on Various Systems

The following examples illustrate how to set up ZFS on various system types.

How to Set Up ZFS on an x4200 System

The x4200 has four 146-Gbyte disks, with an actual capacity of 136 Gbytes per disk.

Consider the following general configuration guidelines:

  • Solaris 10 releases: Use SVM mirroring for the boot disk. For example, c0t0d0 and c1t0d0 are SVM mirrors of UFS file systems for root, usr, var, and swap space.
  • SXCE, build 90 release: Create a mirrored ZFS root pool during an initial installation or a Custom JumpStart installation. The equivalent zpool create syntax is, for example:

zpool create rpool mirror c0t0d0s0 c1t0d0s0

  • Configuration common to both releases: Set up a ZFS mirrored configuration on the two remaining disks, c2t0d0 and c3t0d0. Because the disks are mirrored, this configuration provides approximately 136 Gbytes of file system space from 272 Gbytes of raw pool space.
  • A best practice would be to add some additional disks as ZFS hot spares; see the example after the zfs list output below.

ZFS Configuration Example (x4200 with mirror)

In the following example, the tank storage pool is created with one mirror device of two disks.

# zpool create tank mirror c2t0d0 c3t0d0

Review the storage pool configuration.

# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c2t0d0   ONLINE       0     0     0
            c3t0d0   ONLINE       0     0     0

errors: No known data errors

Identify the available storage pool space.

# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
tank                    272G    178K    272G     0%  ONLINE     -

Create the ZFS file systems. For example:

# zfs create tank/users
# zfs create tank/data
# zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   144K   134G  26.5K  /tank
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
tank          146K   134G  27.5K  /tank
tank/data    24.5K   134G  24.5K  /tank/data
tank/users   24.5K   134G  24.5K  /tank/users
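
As noted in the guidelines above, a best practice is to add hot spares when more disks are available. A hedged sketch (c4t0d0 is a hypothetical additional disk):

# zpool add tank spare c4t0d0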


ZFS Configuration Example (Ultra 45 with mirror)

The Sun Ultra 45 system can be configured with two or four 250-Gbyte disks. The actual disk capacity is 232.87 Gbytes.

Consider a configuration similar to the x2100 with 4 disks:

  • Solaris 10 releases: Use SVM mirroring to mirror the boot disk, c1t0d0, with disk c1t1d0.
  • SXCE, build 90 release: Create a bootable mirrored ZFS root pool during an initial installation or a Custom JumpStart installation. The equivalent zpool create syntax is, for example:

zpool create rpool mirror c1t0d0s0 c1t1d0s0

  • Configuration common to both Solaris releases: Use a mirrored ZFS configuration with the remaining two disks, c1t2d0 and c1t3d0. Because the disks are mirrored, this configuration provides approximately 232 Gbytes of file system space.
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
         0. c1t0d0 <ATA-HITACHIHDS7225S-A94A cyl 65533 alt 2 hd 16 sec 465>
            /pci@1e,600000/pci@0/pci@9/pci@0/scsi@1/sd@0,0
         1. c1t1d0 <ATA-HITACHI HDS7225S-A9CA-232.88GB>
            /pci@1e,600000/pci@0/pci@9/pci@0/scsi@1/sd@1,0
         2. c1t2d0 <ATA-HITACHI HDS7225S-A9CA-232.88GB>
            /pci@1e,600000/pci@0/pci@9/pci@0/scsi@1/sd@2,0
         3. c1t3d0 <ATA-HITACHI HDS7225S-A9CA-232.88GB>
            /pci@1e,600000/pci@0/pci@9/pci@0/scsi@1/sd@3,0

# zpool create mpool mirror c1t2d0 c1t3d0
# zpool status -v
    pool: mpool
   state: ONLINE
   scrub: none requested
config:

          NAME        STATE     READ WRITE CKSUM
          mpool       ONLINE       0     0     0
            mirror    ONLINE       0     0     0
              c1t2d0  ONLINE       0     0     0
              c1t3d0  ONLINE       0     0     0

errors: No known data errors 
# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
mpool                   54K   228G  1.50K  /mpool

ZFS Configuration Example (Netra(TM) T1 with raidz)

This system is used to serve install images, where maximizing disk space is more important than providing best performance.

This system is configured as follows:

  • Two internal 136.96-Gbyte disks are used for the Solaris OS and Live Upgrade space
  • Twelve 68.37-Gbyte external disks are available from two controllers (c2 and c3); ten of them are configured as two RAID-Z devices of five disks each
  • The two remaining external disks are used as spares; a sketch of the create command follows
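
The export pool shown below could have been created with a command like the following, reconstructed from the zpool status output:

# zpool create export raidz c2t8d0 c3t8d0 c2t9d0 c3t9d0 c2t10d0 \
  raidz c3t10d0 c2t11d0 c3t11d0 c2t12d0 c3t12d0 \
  spare c2t13d0 c3t13d0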
% zpool status
  pool: export
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        export       ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c2t8d0   ONLINE       0     0     0
            c3t8d0   ONLINE       0     0     0
            c2t9d0   ONLINE       0     0     0
            c3t9d0   ONLINE       0     0     0
            c2t10d0  ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c3t10d0  ONLINE       0     0     0
            c2t11d0  ONLINE       0     0     0
            c3t11d0  ONLINE       0     0     0
            c2t12d0  ONLINE       0     0     0
            c3t12d0  ONLINE       0     0     0
        spares
          c2t13d0    AVAIL   
          c3t13d0    AVAIL   

This configuration provides 680 Gbytes of raw pool space (for RAID-Z, the zpool list SIZE includes parity).

% zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
export                  680G    645G   35.5G    94%  ONLINE     -

How to Set Up ZFS on Systems With Storage Arrays

Keep the following considerations in mind when using ZFS with storage arrays:

  • Will the storage array be dedicated to the host running ZFS?

If no, then does this array support multiple volumes or RAID groups?

  • Do you want to optimize for space, performance, or reliability?

Keep this answer handy when reviewing the following items.

  • Determine how you want to allocate space from the array to the system

What storage object granularity meets your needs? Do you want to present full disk-size LUNs (JBOD mode), smaller than full-size LUNs (hardware RAID mode), or LUNs based on stripes?

 * If you want to present full disk-size LUNs to ZFS, see ZFS Configuration Example (x4450 with 2 Sun StorEdge 3510s (JBOD)) below.
 * Then, determine whether you want to use a RAID-Z or a mirrored ZFS configuration for redundancy on top of the 
   presented LUNs from the array. 
 * For more information about selecting a RAID-Z or mirrored ZFS configuration, see Choosing Storage Array Redundancy With ZFS below.
  • Presenting smaller LUNs than the full capacity of the disks
 In this scenario, you would need to use the hardware RAID capability of the array. You would have to decide 
 what RAID level to use in the array, and then decide how many LUNs to create and at what size. See 
 Choosing Storage Array Redundancy With ZFS below for a description of using redundancy at the array level with ZFS.
  • Presenting LUNs that consist of a set of striped disks in an array
 This configuration is not recommended unless you use a ZFS mirrored configuration on top of the striped LUNs. Striped 
 LUNs do not provide enough redundancy to be a reliable configuration. If a disk fails, you must replace the bad disk,
 re-create the stripe on the array, and have ZFS resilver the data, unless hot spares are configured. An example of 
 mirroring two array LUNs in ZFS follows this list.
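
For example, a hedged sketch of a ZFS mirror built on top of two LUNs presented by an array (the device names are hypothetical; real array LUNs often appear with long WWN-based names, such as those in the x4450 example below):

# zpool create tank mirror c5t0d0 c6t0d0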


Choosing Storage Array Redundancy With ZFS

Consider the following advantages and disadvantages when choosing a storage array redundancy configuration:

  • Presenting mirrored LUNs from the array to ZFS
 * Advantage: If a disk fails in the array, the other disk in the mirror continues to present a viable LUN to ZFS. You 
   replace the disk in the array, the array does the resilvering, and ZFS remains oblivious to this operation. 
 * Disadvantage: Half of the raw disk space is consumed by the mirrored configuration.
  • Creating a redundant ZFS mirrored configuration from hardware-based mirrored LUNs
 * Disadvantage: The hardware mirror is already a 50% hit on space utilization. Mirroring again in a ZFS mirrored 
   configuration means that only 25% of the physical disk space is available for data storage. 
  • Creating a redundant ZFS raidz1 configuration from hardware-based mirrored LUNs (see the sketch after this list)
 * Advantage: A raidz1 configuration consumes some disk space for parity, but you would have protection from a full 
   mirror failure. A raidz2 configuration might be overkill, because your data is mirrored anyway.
  • Creating a redundant ZFS configuration from a hardware-based RAID-5 configuration
 * For best reliability and space optimization, use a mirrored ZFS configuration on top of a hardware-based RAID-5 
   configuration rather than a ZFS raidz configuration on top of hardware RAID-5. Otherwise, time is spent 
   calculating parity in the software *and* the hardware.
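
For example, a hedged sketch of a raidz1 configuration built from three hardware-mirrored LUNs (the device names are hypothetical):

# zpool create tank raidz c5t1d0 c5t2d0 c5t3d0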

ZFS Configuration Example (x4450 with 2 Sun StorEdge 3510s (JBOD))

The Sun StorEdge 3510 holds 12 disks that range in size up to 146 Gbytes.

If presenting 12 x 146-Gbyte disks to your FC fabric provides the granularity you need, this configuration might provide optimal space utilization. In this case, 12 x 146-Gbyte LUNs would be available to the hosts attached to the FC fabric. You could use them all on one system, or divide them among multiple systems. You can expand this scenario if you have multiple arrays available.

In this example, the x4450 system is used as a build server, where performance is more important than maximizing disk space. The system components are configured as follows:

  • This system contains 8 internal SATA disks that are the same size as the 24 disks on the 3510s.
  • The 24 disks on the 3510s are mirrored as 12 two-way mirrors.
  • Four of the internal SATA drives are also mirrored.
  • Three of the internal SATA drives are allocated as spares. This isn't an optimal best practice, but it provides some protection from a hardware failure in the JBOD arrays.
  • One of the internal SATA drives is used for a UFS root file system.
$ zpool status
  pool: export
 state: ONLINE
 scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        export                     ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c6t500000E0180A5B70d0  ONLINE       0     0     0
            c6t500000E015595B60d0  ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c6t500000E0180A5BF0d0  ONLINE       0     0     0
            c6t500000E015597EF0d0  ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c6t500000E0180A5C60d0  ONLINE       0     0     0
            c6t500000E0155802A0d0  ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c6t500000E0180A5CE0d0  ONLINE       0     0     0
            c6t500000E0155804D0d0  ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c6t500000E0180A5E40d0  ONLINE       0     0     0
            c6t500000E0155808D0d0  ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c6t500000E0180A5FB0d0  ONLINE       0     0     0
            c6t500000E0155955A0d0  ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c6t500000E01809E3E0d0  ONLINE       0     0     0
            c6t500000E0155957B0d0  ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c6t500000E01809E5E0d0  ONLINE       0     0     0
            c6t500000E0155970F0d0  ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c6t500000E01809E8E0d0  ONLINE       0     0     0
            c6t500000E015580340d0  ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c6t500000E01809E260d0  ONLINE       0     0     0
            c6t500000E015580480d0  ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c6t500000E01809E310d0  ONLINE       0     0     0
            c6t500000E015581110d0  ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c6t500000E01809E360d0  ONLINE       0     0     0
            c6t500000E015598750d0  ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c0t12d0                ONLINE       0     0     0
            c0t11d0                ONLINE       0     0     0
          mirror                   ONLINE       0     0     0
            c0t10d0                ONLINE       0     0     0
            c0t9d0                 ONLINE       0     0     0
        spares
          c0t15d0                  AVAIL   
          c0t14d0                  AVAIL   
          c0t13d0                  AVAIL

This configuration provides 1.86 terabytes of pool space.

% zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
export  1.86T   111G  1.75T     5%  ONLINE  -