ZFS uses storage pools to manage storage devices. A storage device can be a whole disk or a slice of a disk. ZFS prefers whole disks: the disk does not need to be specially formatted, as ZFS labels it with an EFI (Extensible Firmware Interface) label containing a single, large slice. Although ZFS can use a single slice from a disk, it works best when given whole physical disks.
File systems are created from storage pools, and all file systems in a pool share the pool's disk space. Since a file system grows automatically within the space available in the pool, there is no need to predetermine its size. After new storage is added to an existing pool, all file systems in the pool can use the additional space immediately.
To create a storage pool named fsys1 on the whole disk c10t0d0:
# zpool create fsys1 c10t0d0
fsys1 can use as much of the disk space on c10t0d0 as needed, and is automatically mounted at /fsys1; there is no need to edit /etc/vfstab at all. We can manually unmount and mount it as below:
# zfs umount fsys1     (or equivalently: # umount /fsys1)
# zfs mount fsys1
To create file systems within the pool:
# zfs create fsys1/part1
# zfs create fsys1/part2
Since the "mountpoint" property can be inherited, fsys1/part1 will be mounted at /fsys1/part1 if fsys1 is mounted at /fsys1. If you want to mount it at a different mount point, you can use mountpoint=path when you create it as below
# zfs create -o mountpoint=/part1 fsys1/part1
To limit how much pool space a file system can consume, set a quota:
# zfs set quota=500G fsys1/part1
Per-user quotas are not supported in ZFS. An alternative is to create a file system for each user and set a quota on that file system, as sketched below.
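A minimal sketch of that workaround (the dataset name home_alice and the 10G quota are hypothetical):
# zfs create fsys1/home_alice
# zfs set quota=10G fsys1/home_alice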
To create a pool that stripes data across two disks:
# zpool create fsys c0t0d0 c0t1d0
To create a mirrored pool:
# zpool create fsys1 mirror c1t0d0 c1t1d0
In addition to mirrored storage pools, ZFS provides a RAID-Z configuration with either single or double parity. Single-parity RAID-Z is similar to RAID-5; double-parity RAID-Z is similar to RAID-6.
# zpool create fsys raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0
# zpool create fsys raidz2 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0
Use zpool status to check the pool's configuration and health:
# zpool status fsys
It appears that new devices can only be added at the top level of a pool's configuration (as "root vdevs"), so an existing RAID-Z group cannot really be expanded. Adding devices is more like striping a new RAID-Z group (or mirror) alongside the existing ones.
# zpool add fsys1 c2t1d0
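For example (the device names here are hypothetical), adding a second RAID-Z group to the raidz pool created above stripes the new group alongside the first one rather than growing it:
# zpool add fsys raidz c6t0d0 c7t0d0 c8t0d0 c9t0d0 c10t0d0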
Creating a mirrored pool and then adding a second mirror of two more disks:
# zpool create zeepool mirror c10t0d0 c10t0d1
# zpool add zeepool mirror c10t0d2 c10t0d3
# zpool status
  pool: zeepool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        zeepool      ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c10t0d0  ONLINE       0     0     0
            c10t0d1  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c10t0d2  ONLINE       0     0     0
            c10t0d3  ONLINE       0     0     0
You can always run zpool add -n first for a dry run.
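For instance, to preview the configuration that would result from adding another mirror to zeepool (disks hypothetical) without actually changing the pool:
# zpool add -n zeepool mirror c10t0d4 c10t0d5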
To replace device c1t1d0 with c1t2d0 in pool fsys:
# zpool replace fsys c1t1d0 c1t2d0
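After a replacement, ZFS resilvers the data onto the new device; the progress can be checked with:
# zpool status fsys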
There is no single command to rename a pool, but export and import can be used instead. The following two commands rename the pool fsys to fsys_old.
# zpool export fsys
# zpool import fsys fsys_old
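A quick check that the rename took effect:
# zpool list fsys_old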
To destroy a file system, and to destroy an entire pool:
# zfs destroy fsys1/part1
# zpool destroy fsys
A file system cannot be destroyed if it has children (such as snapshots). Be very careful with zpool destroy pool, however: the given pool is destroyed without any warning as long as its devices are not busy! A destroyed storage pool can be recovered with zpool import -Df pool; you may run zpool import -D first to check which destroyed pools are available before running zpool import -Df pool.
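Spelled out as commands, using the pool fsys from above:
# zpool import -D
# zpool import -Df fsys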
The following command enables rw access for a set of IP addresses and root access for cscf.cs on the fsys1/part1 file system.
# zfs set sharenfs='rw=@129.97.152.128/25,root=cscf.cs' fsys1/part1
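To check the current setting later, or to stop sharing:
# zfs get sharenfs fsys1/part1
# zfs set sharenfs=off fsys1/part1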
The snapshot name is specified as filesystem@snapname; with the -r option, zfs snapshot also recursively snapshots all descendant file systems.
# zfs snapshot fsys1/part1@friday
# zfs snapshot -r fsys1/part1@now
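Existing snapshots can be listed with:
# zfs list -t snapshot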
# zfs destroy fsys1/part1@friday
Snapshots of file systems are accessible in the .zfs/snapshot directory within the root of the containing file system. If fsys1/part1 is mounted on /fsys1/part1, the snapshot fsys1/part1@friday is accessible in the /fsys1/part1/.zfs/snapshot/friday directory.
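For example, to browse the friday snapshot of fsys1/part1:
# ls /fsys1/part1/.zfs/snapshot/friday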
The zfs rollback command can be used to discard all changes made since a specific snapshot. By default it can only roll back to the most recent snapshot; to roll back to an earlier snapshot, all intermediate snapshots must be destroyed.
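For example, to roll fsys1/part1 back to the friday snapshot:
# zfs rollback fsys1/part1@friday
If snapshots newer than friday exist, the -r option destroys them as part of the rollback:
# zfs rollback -r fsys1/part1@friday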
We have a simple script, zfs_snapshots (see the attachment below), to create and rotate snapshots.
| Attachment | Size | Date | Who |
|---|---|---|---|
| zfs_snapshots | 1.1 K | 2009-03-31 - 14:34 | GuoxiangShen |
| zfsadmin_0801.pdf | 971.3 K | 2008-10-15 - 15:30 | GuoxiangShen |