ZFS Storage Pool and File System

ZFS uses storage pools to manage storage devices. A storage device can be a whole disk or a slice of a disk, but ZFS prefers whole disks. The disk does not need to be specially formatted: ZFS labels it with an EFI (Extensible Firmware Interface) label containing a single large slice. Although ZFS can use a single slice of a disk, it works best when given whole physical disks.

Create ZFS pool and file system

File systems are created from storage pools, and all file systems in a pool share the pool's disk space. Since a file system can grow automatically within the space available in the pool, there is no need to predetermine its size. After new storage is added to an existing pool, all file systems in the pool can use the additional space immediately.

  • Create a storage pool named fsys1 and a ZFS file system in one command

# zpool create fsys1 c10t0d0

fsys1 can use as much disk space on c10t0d0 as needed and is automatically mounted at /fsys1; there is no need to edit /etc/vfstab at all. It can be mounted and unmounted manually as follows:

# zfs umount fsys1    (or: # umount /fsys1)

# zfs mount fsys1

  • Within the pool, create two file systems, fsys1/part1 and fsys1/part2

# zfs create fsys1/part1

# zfs create fsys1/part2

Since the "mountpoint" property is inherited, fsys1/part1 will be mounted at /fsys1/part1 if fsys1 is mounted at /fsys1. To mount it at a different mount point, specify mountpoint=path when creating it:

# zfs create -o mountpoint=/part1 fsys1/part1

  • Set a quota on a file system

# zfs set quota=500G fsys1/part1

Per-user quotas are not supported in ZFS. An alternative is to create a file system for each user and set a quota on that file system.
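The workaround above can be scripted. The sketch below is a dry run by default: ZFS_CMD is "echo zfs", so it only prints the commands it would run; the pool, user, and quota values are illustrative, not from a real system.

```shell
#!/bin/sh
# Sketch of the per-user workaround: one file system per user, each
# with its own quota. ZFS_CMD defaults to "echo zfs" (dry run, just
# prints the commands); set ZFS_CMD=zfs to execute on a real system.
ZFS_CMD="${ZFS_CMD:-echo zfs}"

create_user_fs() {
    # $1 = parent dataset, $2 = user name, $3 = quota
    $ZFS_CMD create "$1/$2"
    $ZFS_CMD set "quota=$3" "$1/$2"
}

create_user_fs fsys1 alice 500G
```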


If a pool is created from more than one disk without a redundancy keyword, ZFS appears to simply stripe the data across the disks (similar to RAID 0):

# zpool create fsys c0t0d0 c0t1d0

  • Create a mirrored pool fsys1 using the keyword mirror

# zpool create fsys1 mirror c1t0d0 c1t1d0

In addition to a mirrored storage pool, ZFS provides a RAID-Z configuration with either single or double parity. Single-parity RAID-Z is similar to RAID-5. Double-parity RAID-Z is similar to RAID-6.
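As a rough rule of thumb (ignoring ZFS metadata overhead), single-parity RAID-Z sacrifices one disk's worth of capacity and double parity sacrifices two. A quick back-of-the-envelope estimate, with five hypothetical 1000 GB disks:

```shell
#!/bin/sh
# Rough usable-capacity estimate for RAID-Z (ignores ZFS overhead).
# usable_gb PARITY NDISKS DISK_GB
usable_gb() {
    echo $(( ($2 - $1) * $3 ))
}

# Five 1000 GB disks:
echo "raidz:  $(usable_gb 1 5 1000) GB usable"   # 4000
echo "raidz2: $(usable_gb 2 5 1000) GB usable"   # 3000
```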

  • Create a single-parity pool of five disks using the keyword raidz

# zpool create fsys raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0

  • Create a double-parity pool of five disks using the keyword raidz2

# zpool create fsys raidz2 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0

Manage Devices in ZFS pools

  • Check the status of pool fsys

# zpool status fsys

New devices can only be added at the top level of a pool's configuration (as new "root vdevs"), so an existing RAID-Z group cannot be widened. Adding devices effectively stripes a new group (or mirror) alongside the existing ones.

  • Add devices to a storage pool fsys1

# zpool add fsys1 c2t1d0

Create a mirrored pool, then add a second two-disk mirror to it:

# zpool create zeepool mirror c10t0d0 c10t0d1

# zpool add zeepool mirror c10t0d2 c10t0d3

    # zpool status
      pool: zeepool
     state: ONLINE
     scrub: none requested

            NAME         STATE     READ WRITE CKSUM
            zeepool      ONLINE       0     0     0
              mirror     ONLINE       0     0     0
                c10t0d0  ONLINE       0     0     0
                c10t0d1  ONLINE       0     0     0
              mirror     ONLINE       0     0     0
                c10t0d2  ONLINE       0     0     0
                c10t0d3  ONLINE       0     0     0

You can always run "zpool add -n" for a dry run.

  • Replace device c1t1d0 with c1t2d0 in pool fsys:

# zpool replace fsys c1t1d0 c1t2d0

  • Rename a ZFS pool

There is no single command to rename a pool, but export and import can be used. The following two commands rename pool fsys to fsys_old:

# zpool export fsys

# zpool import fsys fsys_old
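The two steps can be wrapped in a small helper. The sketch below is a dry run by default (ZPOOL_CMD is "echo zpool", so it only prints the commands); note that the pool is offline between export and import:

```shell
#!/bin/sh
# Dry-run sketch of renaming a pool via export/import. ZPOOL_CMD
# defaults to "echo zpool" (just prints the commands); set
# ZPOOL_CMD=zpool to execute for real.
ZPOOL_CMD="${ZPOOL_CMD:-echo zpool}"

rename_pool() {
    # $1 = current pool name, $2 = new pool name
    $ZPOOL_CMD export "$1"
    $ZPOOL_CMD import "$1" "$2"
}

rename_pool fsys fsys_old
```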

  • Destroy a ZFS file system or a pool

# zfs destroy fsys

# zpool destroy fsys

A file system cannot be destroyed if it has children (such as snapshots). But be very careful with zpool destroy: the pool is destroyed without any warning if its devices are not busy! A destroyed pool can be recovered with zpool import -Df pool; run zpool import -D first to check what is recoverable.

Share ZFS file system

By default, all file systems are unshared. To share one, use "zfs set sharenfs=on"; there is no need to edit /etc/dfs/dfstab. All file systems whose sharenfs property is not off are shared during boot. If sharenfs is set to on, everyone can read the file system. Other options such as ro, rw, and root can also be used.

The following command enables rw access for a set of IP addresses and root access for cscf.cs on the fsys1/part1 file system.

# zfs set sharenfs='rw=@,root=cscf.cs' fsys1/part1
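Other sharenfs values follow the same pattern. The sketch below is a dry run (ZFS_CMD is "echo zfs", so the commands are only printed); the dataset name is the example one from above:

```shell
#!/bin/sh
# Dry-run examples of sharenfs settings. ZFS_CMD defaults to
# "echo zfs" (just prints the commands); set ZFS_CMD=zfs for real use.
ZFS_CMD="${ZFS_CMD:-echo zfs}"

share_ro()   { $ZFS_CMD set sharenfs=ro "$1"; }    # read-only to everyone
unshare_fs() { $ZFS_CMD set sharenfs=off "$1"; }   # stop sharing

share_ro fsys1/part2
unshare_fs fsys1/part2
```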

ZFS snapshots

Snapshots are created instantly and initially consume no additional disk space in the pool. However, as data in the active dataset changes, the snapshot consumes disk space by continuing to reference the old data, preventing that space from being freed.

A snapshot name is specified as filesystem@snapname.

  • Create a snapshot of fsys1/part1 that is named friday

# zfs snapshot fsys1/part1@friday

  • "zfs snapshot -r" creates snapshots for all descendant file systems:

# zfs snapshot -r fsys1/part1@now

  • Destroy a snapshot

# zfs destroy fsys1/part1@friday

Snapshots of a file system are accessible in the .zfs/snapshot directory within the root of that file system. If fsys1/part1 is mounted at /fsys1/part1, the snapshot fsys1/part1@friday is accessible at /fsys1/part1/.zfs/snapshot/friday.

The zfs rollback command discards all changes made since a specific snapshot. By default it can only roll back to the most recent snapshot; to roll back to an earlier snapshot, all intermediate snapshots must first be destroyed.

We have the simple script zfs_snapshots (see attachment) to create and roll over snapshots.
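The attached script is not reproduced here, but the core rotation logic might look like the sketch below: snapshots named by date, with everything older than the newest N pruned. This is an assumption about the approach, not the attached script itself; it only prints the destroy commands rather than running them.

```shell
#!/bin/sh
# Sketch of snapshot rotation: keep the newest $1 snapshots, print a
# "zfs destroy" command for each of the rest. Snapshot names are read
# from stdin, oldest first (on a real system they would come from:
#   zfs list -H -t snapshot -o name -s creation).
prune() {
    keep=$1
    snaps=$(cat)
    total=$(printf '%s\n' "$snaps" | wc -l)
    drop=$(( total - keep ))
    [ "$drop" -gt 0 ] || return 0
    printf '%s\n' "$snaps" | head -n "$drop" | while read -r s; do
        echo "zfs destroy $s"
    done
}

# Example: three daily snapshots, keep the newest two
printf 'fsys1/part1@mon\nfsys1/part1@tue\nfsys1/part1@wed\n' | prune 2
```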

Topic attachments

  • zfs_snapshots (1.1 K, 2009-03-31, GuoxiangShen)
  • zfsadmin_0801.pdf (971.3 K, 2008-10-15, GuoxiangShen)
Topic revision: r8 - 2013-02-15 - DrewPilcher