1. intro

In this chapter we will use an externally connected USB drive, /dev/sdb, to back up and restore a guest.
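
The device name /dev/sdb is only an example; before wiping anything, it is worth confirming which name the kernel actually assigned to the USB drive, for instance:
lsblk -o NAME,SIZE,MODEL,TRAN        # the USB drive shows up with TRAN "usb"
dmesg | tail -n 20                   # kernel log lines for the freshly plugged drive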

2. prepare the external storage

devicename="/dev/sdb"
# remove any existing filesystem and partition-table signatures
wipefs -a ${devicename}
# partition layout: one extended partition (type 05) filling the disk,
# plus a logical Linux partition (type 83) inside it, which becomes ${devicename}5
echo ",,05" > /tmp/prep.tmp
echo ",,83" >> /tmp/prep.tmp
sfdisk -q ${devicename} < /tmp/prep.tmp
# create an ext4 filesystem labelled "data2" with 0.1% reserved blocks
echo y | mkfs.ext4 -L data2 -m 0.1 ${devicename}5
mkdir -p /data2
mount LABEL=data2 /data2/
# register the mount point as a Proxmox directory storage for backups and images
pvesm add dir "data2" --path "/data2/" --content "backup,images" --prune-backups "keep-all=1"
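
The mount above does not survive a reboot. If the drive is meant to stay permanently attached, one option (not part of the original setup, just a sketch) is an fstab entry keyed on the filesystem label:
echo 'LABEL=data2 /data2 ext4 defaults,nofail 0 2' >> /etc/fstab
mount -a        # confirm the entry mounts without errors
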
Note: check the new storage config:
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

dir: data2
        path /data2/
        content backup,images
        prune-backups keep-all=1
pvesm status
Name             Type     Status           Total            Used       Available        %
data2             dir     active      1921723668          728528      1823303132    0.04%
local             dir     active        98497780        13030052        80418180   13.23%
local-lvm     lvmthin     active       335642624        16748566       318894057    4.99%
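
The size columns above are raw values from pvesm; for a quick human-readable cross-check that data2 really sits on the USB partition:
df -h /data2
lsblk -f ${devicename}        # shows the ext4 filesystem, the data2 label and the mount point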

3. back up a guest

vzdump 141 --storage "data2" --compress zstd
INFO: starting new backup job: vzdump 141 --compress zstd --storage data2
INFO: Starting Backup of VM 141 (qemu)
INFO: Backup started at 2023-03-04 23:30:55
INFO: status = running
INFO: VM Name: srv141
INFO: include disk 'virtio0' 'local-lvm:vm-141-disk-0' 24G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/data2//dump/vzdump-qemu-141-2023_03_04-23_30_55.vma.zst'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'd829fc00-120c-4c64-b844-297ce1b1470f'
INFO: resuming VM again
INFO:  10% (2.5 GiB of 24.0 GiB) in 3s, read: 842.2 MiB/s, write: 120.1 MiB/s
INFO:  17% (4.2 GiB of 24.0 GiB) in 6s, read: 605.4 MiB/s, write: 140.0 MiB/s
INFO:  25% (6.2 GiB of 24.0 GiB) in 9s, read: 668.2 MiB/s, write: 62.7 MiB/s
INFO:  47% (11.3 GiB of 24.0 GiB) in 12s, read: 1.7 GiB/s, write: 43.3 MiB/s
INFO:  67% (16.2 GiB of 24.0 GiB) in 15s, read: 1.6 GiB/s, write: 39.6 MiB/s
INFO:  75% (18.2 GiB of 24.0 GiB) in 18s, read: 685.1 MiB/s, write: 99.5 MiB/s
INFO: 100% (24.0 GiB of 24.0 GiB) in 21s, read: 1.9 GiB/s, write: 740.0 KiB/s
INFO: backup is sparse: 22.52 GiB (93%) total zero data
INFO: transferred 24.00 GiB in 21 seconds (1.1 GiB/s)
INFO: archive file size: 711MB
INFO: Finished Backup of VM 141 (00:00:21)
INFO: Backup finished at 2023-03-04 23:31:16
INFO: Backup job finished successfully
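
The new archive and its log file can be listed either through pvesm or directly on the filesystem:
pvesm list data2 --content backup
ls -lh /data2/dump/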

4. restore a guest

qmrestore data2:backup/vzdump-qemu-141-2023_03_04-23_30_55.vma.zst 141
restore vma archive: zstd -q -d -c /data2//dump/vzdump-qemu-141-2023_03_04-23_30_55.vma.zst | vma extract -v -r /var/tmp/vzdumptmp83335.fifo - /var/tmp/vzdumptmp83335
CFG: size: 506 name: qemu-server.conf
DEV: dev_id=1 size: 25769803776 devname: drive-virtio0
CTIME: Sat Mar  4 23:30:55 2023
  Logical volume "vm-141-disk-0" created.
new volume ID is 'local-lvm:vm-141-disk-0'
map 'drive-virtio0' to '/dev/pve/vm-141-disk-0' (write zeros = 0)
progress 1% (read 257753088 bytes, duration 0 sec)
...
progress 100% (read 25769803776 bytes, duration 7 sec)
total bytes read 25769803776, sparse bytes 24178163712 (93.8%)
space reduction due to 4K zero blocks 0.624%
rescan volumes...
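
At this point the guest exists again under VMID 141. Reviewing the restored configuration and, if it is not already running, starting it are the usual next steps:
qm config 141        # print the restored VM configuration
qm start 141         # boot the restored guest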

5. cleanup

sync                     # flush any pending writes to the USB drive
pvesm remove "data2"     # drop the storage definition from the Proxmox config
umount /data2/           # unmount the filesystem before unplugging the drive
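
Before unplugging the drive, a last check that nothing from it is still mounted; findmnt prints nothing (and returns non-zero) when the mount is gone:
findmnt /data2
lsblk ${devicename}        # no mount point should be listed any more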