List disks by ID
ls -alh /dev/disk/by-id
Tip: match them with the disks listed by-path:
ls -alh /dev/disk/by-path/
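To map a by-id name back to its kernel device, resolve the symlink (shown here on one of the pool disks used below):
readlink -f /dev/disk/by-id/scsi-36a4badb038591c00233791390c02c443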
Another view:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Create a RAIDZ1 zpool named “zp1” from 3 disks, with ashift=12 (4096-byte pool sector size)
Note: ashift must be at least the sector size of the underlying disks (the sector size is 2^ashift bytes), and of any disk that might later be put in the pool (for example, the replacement of a defective disk).
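One way to check the sector sizes before choosing ashift:
lsblk -o NAME,PHY-SEC,LOG-SEC
# PHY-SEC=4096 needs ashift >= 12 (2^12 = 4096)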
zpool create -f -o ashift=12 zp1 raidz1 /dev/disk/by-id/scsi-36a4badb038591c00233791390c02c443 /dev/disk/by-id/scsi-36a4badb038591c00233791480cde8092 /dev/disk/by-id/scsi-36a4badb038591c00233791540d9dabe0
Get some info about it:
zpool list
zpool status -Pv zp1
zpool get all
zpool iostat -vq 1
Note: by default, the pool is mounted at /zp1
zfs get all
# the mountpoint property is listed here
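If needed, the mountpoint can be changed (the target path here is just an example):
zfs set mountpoint=/mnt/zp1 zp1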
Use fdisk to partition the SSD and create:
- 1 partition, size = RAM size / 2 (for the SLOG)
- 1 partition with the remaining space (for the L2ARC cache)
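A minimal non-interactive sketch with parted instead of fdisk, assuming a 20GiB SLOG partition to match the test setup below:
SSD=/dev/disk/by-id/scsi-36a4badb038591c002337912c0b32c066
parted -s "$SSD" mklabel gpt                # WARNING: wipes the existing partition table
parted -s "$SSD" mkpart slog 1MiB 20GiB     # RAM/2 on this machine
parted -s "$SSD" mkpart l2arc 20GiB 100%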
zpool add zp1 log /dev/disk/by-id/scsi-36a4badb038591c002337912c0b32c066-part1
zpool add zp1 cache /dev/disk/by-id/scsi-36a4badb038591c002337912c0b32c066-part2
zpool status -Pv zp1
REVERT:
zpool remove zp1 /dev/disk/by-id/scsi-36a4badb038591c002337912c0b32c066-part2
zpool remove zp1 /dev/disk/by-id/scsi-36a4badb038591c002337912c0b32c066-part1
zpool destroy zp1
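Confirm the pool is gone:
zpool list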
TESTS
apt-get install fio
echo 3 > /proc/sys/vm/drop_caches
fio --description="Emulation of Intel IOmeter File Server Access Pattern" \
  --name=iometer --bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10 \
  --rw=randrw --rwmixread=80 --direct=0 --iodepth=64 \
  --zero_buffers --invalidate=1 --ioengine=sync --size=10G \
  --numjobs=50 --runtime=30 --time_based --group_reporting --gtod_reduce=1 \
  --filename=./fio_test.delme
# 3 SAS 10k disks, 80GB, SLOG 20GB, CACHE 120GB
# before each test, run: echo 3 > /proc/sys/vm/drop_caches
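To see whether the SLOG and cache devices are actually being hit, the pool can be watched from a second terminal while fio runs:
zpool iostat -v zp1 1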
READ: io=272184KB, aggrb=8469KB/s, minb=8469KB/s, maxb=8469KB/s, mint=32138msec, maxt=32138msec
WRITE: io=64730KB, aggrb=2014KB/s, minb=2014KB/s, maxb=2014KB/s, mint=32138msec, maxt=32138msec
# remove CACHE
zpool remove zp1 /dev/disk/by-id/scsi-36a4badb038591c002337912c0b32c066-part2
READ: io=157142KB, aggrb=4919KB/s, minb=4919KB/s, maxb=4919KB/s, mint=31941msec, maxt=31941msec
WRITE: io=39336KB, aggrb=1231KB/s, minb=1231KB/s, maxb=1231KB/s, mint=31941msec, maxt=31941msec
# remove SLOG
zpool remove zp1 /dev/disk/by-id/scsi-36a4badb038591c002337912c0b32c066-part1
READ: io=210334KB, aggrb=6184KB/s, minb=6184KB/s, maxb=6184KB/s, mint=34012msec, maxt=34012msec
WRITE: io=50655KB, aggrb=1489KB/s, minb=1489KB/s, maxb=1489KB/s, mint=34012msec, maxt=34012msec
Compare to MD (Linux software RAID) RAID5 over the same 3 disks:
Run status group 0 (all jobs):
READ: io=139365KB, aggrb=3974KB/s, minb=3974KB/s, maxb=3974KB/s, mint=35063msec, maxt=35063msec
WRITE: io=34421KB, aggrb=981KB/s, minb=981KB/s, maxb=981KB/s, mint=35063msec, maxt=35063msec
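For reference, the MD array used for this comparison can be built like this (a sketch; the /dev/sdX names are assumptions):
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/md0
# then run the same fio job with --filename=/mnt/md0/fio_test.delme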