Proxmox ZFS
ZFS modes
RAID0
Also called “striping”. The capacity of such a volume is the sum of the capacities of all disks. RAID0 does not add any redundancy, so the failure of a single drive makes the volume unusable.
RAID1
Also called “mirroring”. Data is written identically to all disks. This mode requires at least 2 disks of the same size. The resulting capacity is that of a single disk.
RAID10
A combination of RAID0 and RAID1. Requires at least 4 disks.
RAIDZ-1
A variation on RAID-5, single parity. Requires at least 3 disks.
RAIDZ-2
A variation on RAID-5, double parity. Requires at least 4 disks.
RAIDZ-3
A variation on RAID-5, triple parity. Requires at least 5 disks.
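For illustration, a pool for each of these modes could be created as follows (the device names /dev/sdb…/dev/sdf and the pool name tank are examples, not taken from this page):
# zpool create -f -o ashift=12 tank /dev/sdb /dev/sdc                                    # RAID0 (stripe)
# zpool create -f -o ashift=12 tank mirror /dev/sdb /dev/sdc                             # RAID1 (mirror)
# zpool create -f -o ashift=12 tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde    # RAID10
# zpool create -f -o ashift=12 tank raidz /dev/sdb /dev/sdc /dev/sdd                     # RAIDZ-1
# zpool create -f -o ashift=12 tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde           # RAIDZ-2
# zpool create -f -o ashift=12 tank raidz3 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf  # RAIDZ-3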
The PVE installer
- automatically partitions the disks,
- creates a ZFS pool called rpool,
- installs the root file system on the ZFS subvolume rpool/ROOT/pve-1
- creates another subvolume called rpool/data to store VM images
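The resulting layout can be inspected with the usual ZFS commands (rpool and the subvolume names are the installer defaults listed above):
# zpool status rpool                    # pool health and member disks
# zfs list -r rpool                     # shows rpool/ROOT/pve-1 and rpool/data
# zfs get mountpoint rpool/ROOT/pve-1   # root file system mountpoint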
Notes :
- It is not possible to use ZFS as the root file system with UEFI boot.
- Always use GPT partition tables.
- ashift=12 makes the pool issue 4 KB block I/O to the disks.
- DO NOT use swap on ZFS!
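These points can be checked quickly on an installed system (rpool being the installer's default pool name):
# zpool get ashift rpool                # should report 12 (4 KB blocks)
# parted -l                             # confirm the disks carry GPT partition tables
# swapon --show                         # make sure swap is NOT on a ZFS zvol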
# man zpool
# man zfs
# vi /etc/sysctl.conf                   # vm.swappiness = 10
# apt-get install zfs-zed               # mail notification
# vi /etc/zfs/zed.d/zed.rc              # change ZED_EMAIL_ADDR="root" if needed
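The swappiness value in /etc/sysctl.conf is only read at boot; it can also be applied immediately with sysctl:
# sysctl -w vm.swappiness=10            # apply the new value now
# sysctl vm.swappiness                  # verify the current value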
Log and cache on SSD with an existing ZFS pool
- GPT partition table on the SSD device :
- log : memory size / 1.8 (the maximum size of a log device should be about half the size of physical memory)
- cache : the rest
- add (add log/cache SSD disk) : zpool add -f <pool> log <device-part1> cache <device-part2>
- replace (replace failing SSD disk) : zpool replace -f <pool> <old-device> <new-device>
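A sketch with example values (assuming 16 GB of RAM, an SSD at /dev/sdf and the pool rpool, none of which come from this page; sgdisk is used here for the GPT partitioning): 16 GB / 1.8 ≈ 9 GB for the log partition, the rest of the SSD for the cache.
# sgdisk --zap-all /dev/sdf                            # new empty GPT table on the SSD
# sgdisk -n 1:0:+9G -t 1:bf01 /dev/sdf                 # partition 1 : log (~ memory size / 1.8)
# sgdisk -n 2:0:0   -t 2:bf01 /dev/sdf                 # partition 2 : cache (the rest)
# zpool add -f rpool log /dev/sdf1 cache /dev/sdf2     # attach log and cache to the pool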
Tuning
# vi /etc/modprobe.d/zfs.conf           # set "options zfs zfs_arc_max=<bytes>" : at most half the memory size, and at least 8 GB + 1 GB per TB of filesystem size ; e.g. 8589934592 for 8 GB of ARC usage
If ZFS is the root FS, also run update-initramfs -u so the new zfs_arc_max setting is included in the initramfs and applied at boot.
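Worked example (assuming 32 GB of RAM and 4 TB of ZFS storage, values not from this page): at most 32 / 2 = 16 GB, at least 8 + 4 × 1 = 12 GB; picking 12 GB = 12 × 1024³ = 12884901888 bytes.
# echo "options zfs zfs_arc_max=12884901888" > /etc/modprobe.d/zfs.conf   # overwrites any existing zfs.conf
# update-initramfs -u                                                     # needed here since ZFS is assumed to be the root FS
# echo 12884901888 > /sys/module/zfs/parameters/zfs_arc_max               # optional : apply the limit at runtime without rebooting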