#vps#dedicated-servers

ZFS Snapshots: How to Create, Restore, and Automate Them

10 min read - May 5, 2026

Table of contents
  • ZFS Snapshots: How to Create, Restore, and Automate Them
  • How ZFS snapshots work
  • Creating snapshots
  • Restoring from snapshots
  • Managing and pruning snapshots
  • Automating retention with Sanoid
  • Off-site replication with zfs send
  • A note on ransomware protection
  • Final thoughts

Learn how to create, restore, and automate ZFS snapshots on Linux. Covers commands, rollback, retention policies, and off-site replication with Sanoid.

A ZFS snapshot is a read-only, point-in-time copy of your filesystem. It's created instantly, takes up no space until data changes, and lets you roll back or recover files in seconds. If you manage servers, VPS instances, or anything with data you can't afford to lose, snapshots should be part of your workflow.

This post covers how ZFS snapshots work, how to use them, and how to automate retention so they don't pile up.

How ZFS snapshots work

ZFS uses a copy-on-write (CoW) model. When you take a snapshot, ZFS doesn't duplicate any data. It simply records the current state of the block pointer tree. New writes go to free blocks, while the snapshot keeps referencing the originals.

That means snapshots are created in microseconds regardless of dataset size, and they consume zero additional space at creation. They only start using space as the live dataset changes, because the snapshot holds onto the original blocks that would otherwise be freed.

This is fundamentally different from file-level backup tools like rsync or tar, which operate on whole files. If you change 4KB of a 10GB file, rsync still has to read and checksum the entire file (and, for a local copy, rewrite it). ZFS tracks only the changed 4KB block.

Snapshots are also immutable. They're enforced as read-only at the kernel level, so userspace processes (including ransomware) can't modify them. Combined with ZFS's built-in checksumming, this means you can verify data integrity on restore.
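If you want to exercise that checksumming, a scrub walks every block in the pool and verifies it against its stored checksum (`tank` here is the example pool used throughout this post; a scrub needs a live pool, so this is a sketch rather than something to run blindly):

```shell
# Verify every block in the pool against its stored checksum.
sudo zpool scrub tank

# Check progress and results; any checksum errors show up here.
zpool status tank
```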

Creating snapshots

Prerequisites

You'll need ZFS installed and a pool set up. On Ubuntu 20.04+:

sudo apt update && sudo apt upgrade -y
sudo apt install zfsutils-linux -y
sudo modprobe zfs

Create a pool. For a single disk (typical on a VPS):

sudo zpool create tank /dev/sdb

For a mirrored setup on a dedicated server, use disk IDs instead of device names to avoid issues after reboots:

sudo zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

Enable compression (LZ4 is fast and effective):

sudo zfs set compression=lz4 tank

Then create datasets for your workloads:

sudo zfs create tank/web
sudo zfs create tank/databases

Taking a snapshot

The basic command:

sudo zfs snapshot tank/web@before-update

For timestamped names (useful with cron):

sudo zfs snapshot tank/databases@$(date +%Y%m%d_%H%M%S)

For capturing all child datasets at once, use the recursive flag:

sudo zfs snapshot -r tank@daily_backup

Verify with:

sudo zfs list -t snapshot
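To take the timestamped snapshot above on a schedule, a system crontab entry is enough. One detail that trips people up: `%` is special in crontab lines and must be escaped. A sketch (dataset name and schedule are illustrative):

```shell
# /etc/crontab: hourly snapshot of tank/web.
# '%' must be escaped as '\%' inside crontab entries.
0 * * * * root /sbin/zfs snapshot tank/web@auto_$(date +\%Y\%m\%d_\%H\%M\%S)
```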

Restoring from snapshots

Restoring individual files

Every ZFS dataset has a hidden .zfs/snapshot directory at its mount point. It won't show up in ls, but you can navigate directly to it:

ls /tank/web/.zfs/snapshot/before-update/

To restore a single file:

cp -p /tank/web/.zfs/snapshot/before-update/config/app.conf /tank/web/config/

The -p flag preserves permissions and timestamps.
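Before copying a file back, it can help to confirm what actually changed by diffing the live file against the snapshot copy (paths follow the example above):

```shell
diff /tank/web/config/app.conf \
     /tank/web/.zfs/snapshot/before-update/config/app.conf
```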

Rolling back an entire dataset

If you need to revert everything, for example after a failed upgrade:

sudo zfs rollback tank/web@before-update

This is almost instantaneous because ZFS updates block pointers rather than copying data. But it's destructive: all changes made after the snapshot are permanently lost.

If newer snapshots exist between the target and the current state, ZFS will block the rollback. Use -r to force it and remove those intermediate snapshots:

sudo zfs rollback -r tank/databases@20260426_090000

A good habit: snapshot the current (broken) state before rolling back, so you have a fallback if needed.
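That habit looks like this in practice (the `broken-` prefix is just a naming convention):

```shell
# Preserve the broken state first, then roll back.
sudo zfs snapshot tank/web@broken-$(date +%Y%m%d_%H%M%S)
sudo zfs rollback tank/web@before-update
```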

Recovery method | Speed | Data loss risk | Best for
--- | --- | --- | ---
File restore via .zfs | Depends on file size | None | Accidental deletions, single file recovery
Full rollback | Instant | High (loses all post-snapshot changes) | Failed upgrades, system-wide issues
Clone for testing | Instant | None (creates a parallel dataset) | Verifying before committing to a rollback
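The clone option deserves a quick illustration: a clone is a writable dataset created from a snapshot, so you can inspect the old state without touching the live data (the clone name below is illustrative):

```shell
# Mount the snapshot's contents as a writable dataset at /tank/web-verify.
sudo zfs clone tank/web@before-update tank/web-verify

# Inspect the old state there, then remove the clone when done.
sudo zfs destroy tank/web-verify
```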

Managing and pruning snapshots

Snapshots start at zero size but grow as the live data changes beneath them. To check space usage:

zfs list -t snapshot -o name,used,refer,creation

The USED column shows how much space is unique to that snapshot. REFER shows the dataset's total size when the snapshot was taken.

To delete a snapshot:

sudo zfs destroy tank/web@before-update

You can also delete a contiguous range of snapshots with the % syntax, written as oldest%newest:

sudo zfs destroy tank/web@daily-2026-04-01%daily-2026-04-30

Always dry-run first with -n (add -v to see what would be destroyed):

sudo zfs destroy -nv tank/web@daily-2026-04-01%daily-2026-04-30

ZFS can technically handle millions of snapshots, but performance degrades past a few thousand per dataset. Commands like zfs list and zfs destroy slow down noticeably. Keep retention tight.
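As a sketch of scripted pruning under the timestamped naming scheme above: filter snapshot names whose @YYYYMMDD_HHMMSS suffix falls before a cutoff, then feed them to zfs destroy. The `prune_candidates` helper is hypothetical, not a ZFS command, and the dry-run pipeline below assumes GNU date:

```shell
# Print snapshot names (one per line on stdin) whose @YYYYMMDD_HHMMSS
# suffix sorts before the cutoff passed as $1. Hypothetical helper.
prune_candidates() {
    cutoff=$1
    while read -r snap; do
        stamp=${snap##*@}                      # strip everything up to '@'
        [[ "$stamp" < "$cutoff" ]] && echo "$snap"
    done
    true
}

# Dry run against real snapshots: show what a 30-day policy would destroy.
# zfs list -H -t snapshot -o name tank/web \
#   | prune_candidates "$(date -d '30 days ago' +%Y%m%d_%H%M%S)" \
#   | xargs -r -n1 echo sudo zfs destroy
```

Because the timestamp format sorts lexicographically, plain string comparison is enough; no date parsing needed.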

Automating retention with Sanoid

Sanoid is the standard tool for automating ZFS snapshot creation and pruning. You define retention policies in sanoid.conf, and it handles the rest.

A typical production configuration might look like:

Workload type | Hourly | Daily | Weekly | Monthly
--- | --- | --- | --- | ---
Standard production | 24-48 | 30 | 8 | 12
Database (high churn) | 72 | 30 | 12 | 24
Logs / low priority | 12-24 | 7 | 0 | 3
Static media | 0 | 7 | 0 | 3

Sanoid also supports sub-hourly snapshots via the frequently parameter. Setting frequently = 96 and frequent_period = 15 gives you a snapshot every 15 minutes.

Schedule Sanoid via cron to run every minute or every 15 minutes, and it will create and prune snapshots automatically.
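A minimal /etc/sanoid/sanoid.conf implementing the "standard production" row above might look like this (dataset names are illustrative):

```ini
[tank/web]
        use_template = production

[template_production]
        hourly = 36
        daily = 30
        weekly = 8
        monthly = 12
        yearly = 0
        autosnap = yes
        autoprune = yes
```

Paired with a crontab entry such as `*/15 * * * * root /usr/sbin/sanoid --cron`, Sanoid takes and prunes snapshots per the template with no further intervention.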

Off-site replication with zfs send

Snapshots on a single server protect against accidental changes and software failures, but not against hardware loss. For that, replicate off-site using zfs send and zfs receive over SSH:

zfs send tank/web@backup | ssh user@remote zfs receive backup/web

For incremental transfers (sending only what changed since the last snapshot):

zfs send -i tank/web@old_snap tank/web@new_snap | ssh user@remote zfs receive backup/web

Sanoid's companion tool, syncoid, automates this process and handles incremental sends, error recovery, and logging.
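A typical syncoid invocation, replicating to a remote pool over SSH (host and pool names are illustrative):

```shell
# First run sends a full stream; subsequent runs send incrementals.
syncoid tank/web user@remote:backup/web

# Replicate a dataset and all of its children.
syncoid -r tank user@remote:backup
```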

A note on ransomware protection

ZFS snapshots are read-only at the kernel level, which means standard malware can't modify or encrypt them. That's a strong layer of defence. But it's not bulletproof: if an attacker gains root access, they can delete snapshots before encrypting your data.

Snapshots should be one layer in a broader strategy. Combine them with off-site replication, restricted root access, and network-level security. Don't rely on snapshots alone.

Final thoughts

ZFS snapshots are fast, space-efficient, and simple to use once you understand the basics. They're not a replacement for off-site backups, but they fill a gap that traditional backup tools can't: instant, zero-overhead recovery points you can take as often as you need.

If you're running ZFS on a VPS or dedicated server, set up Sanoid, define a retention policy, and automate replication. It takes 30 minutes to configure and saves hours when something goes wrong. Try it out on an FDC VPS or dedicated server.
