John Siu Blog

Tech - Business Tool, Personal Toys

Linux ZFS Command


Cheatsheet for Linux OpenZFS.

This page serves as a quick cheatsheet. man zfs and man zpool are your friends!

Installation

On Ubuntu:

sudo apt install zfsutils-linux

ZPOOL

The zpool command manages ZFS storage pools.

Virtual Devices Type

The different types of virtual devices (vdevs):

vdev type                        Description
disk                             Block device listed in /dev.
file                             Regular file. Only recommended for testing.
mirror                           Mirror of 2 or more devices.
raidz / raidz1, raidz2, raidz3   Variations of RAID-5 with 1, 2, or 3 parity devices.
spare                            Pseudo-vdev keeping track of hot spares.
log                              Device for the ZFS intent log (SLOG).
dedup                            Device for the deduplication table.
special                          Device for metadata.
cache                            Caching device (L2ARC) for the pool.

Create VDEV

sudo zpool create <pool> <vdev type> <device list>
sudo zpool create mypool sda
sudo zpool create mypool sda sdb
sudo zpool create mypool mirror sda sdb
  • A pool created without specifying a vdev type is a dynamic stripe, similar to RAID-0.
  • The pool name must not be the same as any vdev type name (e.g. mirror, log).
  • The device list accepts names with or without the /dev/ base path: both /dev/<device> and <device> work.
File Based vdev

File-based vdevs are NOT recommended for any production use.

When creating a file-based vdev, use a pre-allocated file with an absolute path; there is no need to specify a vdev type.

Allocating file:

sudo mkdir /zfs-storage
sudo truncate -s 1G /zfs-storage/zfs-disk-01
sudo truncate -s 1G /zfs-storage/zfs-disk-02
sudo truncate -s 1G /zfs-storage/zfs-disk-03
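truncate creates sparse files, so the 1G is only an apparent size until data is written. A quick sketch verifying this, using /tmp/zfs-storage purely for illustration:

```shell
# Sparse files report a 1G apparent size but occupy almost no disk space.
mkdir -p /tmp/zfs-storage                      # demo directory, not the /zfs-storage above
truncate -s 1G /tmp/zfs-storage/zfs-disk-01
stat -c %s /tmp/zfs-storage/zfs-disk-01        # apparent size in bytes: 1073741824
du -k /tmp/zfs-storage/zfs-disk-01 | cut -f1   # kilobytes actually allocated: close to 0
```

Because the files are sparse, the pool can over-commit the underlying filesystem; writes will start failing once the backing filesystem fills up.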

Create file based pool:

sudo zpool create <pool> <absolute path ...>
sudo zpool create mypool /zfs-storage/zfs-disk-01 /zfs-storage/zfs-disk-02 /zfs-storage/zfs-disk-03
Log

Create a log vdev (SLOG) when creating the pool:

sudo zpool create <pool> <vdev type> <device list> log <vdev type> <device list>
sudo zpool create mypool mirror sda sdb log sdc
sudo zpool create mypool mirror sda sdb log mirror sdc sdd

A log vdev can also be added after pool creation:

sudo zpool add mypool log sdc

Destroy

sudo zpool destroy <pool>
sudo zpool destroy mypool

Check Pool Status

Detailed status:

zpool status
zpool status <pool>
zpool status mypool

Utilization and statistics summary:

zpool list

Scrub (Check and Repair)

sudo zpool scrub mypool
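Scrubs are typically run on a schedule. A minimal sketch of a monthly cron entry; the file name /etc/cron.d/zfs-scrub is an assumption, and the zpool binary path may differ on your system (check with command -v zpool):

```shell
# /etc/cron.d/zfs-scrub (assumed path): scrub mypool at 03:00 on the 1st of each month
0 3 1 * * root /usr/sbin/zpool scrub mypool
```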

Replace

Replace a disk in a raidz or mirror vdev:

sudo zpool replace <pool> <old device> <new device>

Attach

Use attach to add a new disk to a mirror:

sudo zpool attach <pool> <a disk in mirror> <new disk>
sudo zpool attach mypool sda sdd

Detach

Use detach to remove a disk from a mirror:

sudo zpool detach <pool> <disk>
sudo zpool detach mypool sdd

Add

Add a top-level vdev, including cache, log, etc.:

sudo zpool add <pool> <vdev type> <device list>
sudo zpool add mypool sdd
sudo zpool add mypool cache sdd sde
sudo zpool add mypool log sdd sde
sudo zpool add mypool mirror sdd sde

Remove

Remove a disk from a top-level vdev, including cache, log, etc.:

sudo zpool remove <pool> <disk>

Export/Import

Export a specific pool:

zpool export mypool

Export all pools:

zpool export -a

Import by auto-detecting devices in /dev:

zpool import
zpool import mypool

Import a file-based pool by specifying the directory of the backing files and the pool name:

zpool import -d /zfs-storage mypool

ZFS

The zfs command manages datasets within a storage pool.

Dataset Type

Dataset type   Description
filesystem     Used as common file storage.
volume         Used as a raw block device (ZVOL).
snapshot       Read-only copy of a filesystem or volume at a given point in time.
bookmark       Similar to a snapshot, but retains only the reference point needed for incremental sends, not the data.

Create Dataset

Create filesystem:

sudo zfs create <pool>/<fs>
sudo zfs create mypool/files

Create volume:

sudo zfs create -V <size> <pool>/<vol>
sudo zfs create -V 1G mypool/disk01

The ZVOL will appear as a block device in:

  • /dev/zvol/<pool>/<zvol>
  • /dev/<pool>/<zvol>

Check Dataset Status

sudo zfs list
sudo zfs list <pool>
sudo zfs list <pool> -r
sudo zfs list <pool> -t <all|filesystem|snapshot|volume>
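For scripting, zfs list -H suppresses the header and separates columns with tabs. A sketch parsing a hypothetical sample of zfs list -H -o name,used output (no pool is assumed here, so the sample lines are hard-coded):

```shell
# Parse tab-separated `zfs list -H -o name,used` style output with awk.
# The two printf lines stand in for real output from a pool.
printf 'mypool\t1.20G\nmypool/files\t800M\n' \
  | awk -F'\t' '{printf "%s uses %s\n", $1, $2}'
```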

Snapshot

sudo zfs snapshot <pool>/<fs|vol>@<snapshot>
sudo zfs snapshot mypool/files@snap01
sudo zfs snapshot mypool/disk01@snap01
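The part after @ is a free-form name. One common convention, shown here as a sketch (the auto- prefix is arbitrary), is to embed a timestamp so snapshot names sort chronologically:

```shell
# Build a timestamped snapshot name; lexical order matches time order.
snapname="mypool/files@auto-$(date +%Y%m%d-%H%M%S)"
echo "$snapname"
# Create it with: sudo zfs snapshot "$snapname"
```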

Destroy

sudo zfs destroy <pool>/<fs|vol>@<snapshot>
sudo zfs destroy mypool/files@snap01
sudo zfs destroy mypool/files

Send & Receive

Always create a snapshot or bookmark before sending.

Full Stream

Send to file:

sudo zfs send <pool>/<fs|vol>@<snapshot> > <file>
sudo zfs send mypool/files@snap01 > /backup/files-snap01.img

Receive from file:

sudo zfs receive <pool>/<new fs|vol> < <file>
sudo zfs receive mypool/new-fs < /backup/files-snap01.img

Send to remote:

sudo zfs send <pool>/<fs|vol>@<snapshot> | ssh <remote> "sudo zfs receive <pool>/<new fs|vol>"
sudo zfs send mypool/files@snap01 | ssh user@remote "sudo zfs receive mypool/new-fs"

When receiving a full stream from a file or pipe, a new filesystem or volume is created. The receive operation fails if the filesystem or volume already exists.

Incremental

A ZFS stream can be created incrementally with the -i option during send. Two snapshots or bookmarks are needed:

sudo zfs snapshot mypool/files@snap01
# Some file changes in mypool/files
sudo zfs snapshot mypool/files@snap02

Send to file:

sudo zfs send -i <pool>/<fs|vol>@<snapshot#1> <pool>/<fs|vol>@<snapshot#2> > <file>
sudo zfs send -i mypool/files@snap01 mypool/files@snap02 > /backup/files-snap01-02.img

Receive from file:

sudo zfs receive <pool>/<fs|vol> < <file>
sudo zfs receive mypool/fs < /backup/files-snap01-02.img

Send to remote:

sudo zfs send -i <pool>/<fs|vol>@<snapshot#1> <pool>/<fs|vol>@<snapshot#2> | \
ssh <remote> "sudo zfs receive <pool>/<fs|vol>"

sudo zfs send -i mypool/files@snap01 mypool/files@snap02 | \
ssh user@remote "sudo zfs receive mypool/fs"

As opposed to a full stream, when receiving an incremental stream the target filesystem or volume must already exist.
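The incremental send/receive pipeline can be assembled by a small helper script. This hypothetical function only prints the command for review instead of executing it; the argument values are the example names from this section:

```shell
# Hypothetical helper: print (not run) an incremental send/receive pipeline.
build_incr_send() {
  dataset=$1
  snap1=$2
  snap2=$3
  remote=$4
  target=$5
  printf 'sudo zfs send -i %s@%s %s@%s | ssh %s "sudo zfs receive %s"\n' \
    "$dataset" "$snap1" "$dataset" "$snap2" "$remote" "$target"
}

build_incr_send mypool/files snap01 snap02 user@remote mypool/fs
```

Printing the pipeline first lets you sanity-check dataset and snapshot names before running a command that modifies a remote pool.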



John Siu

Update: 2021-04-23