
Testing the S3700

The Setup

Two 100G Intel S3700 drives: one tested with LUKS, one without.

Each drive was filled before testing, so the results reflect a full drive.

Tested 4k and 8k block sizes; the LUKS device was created with --size=8 or --size=9 for 4k and 8k respectively.
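The /dev/mapper/testssd target used below implies a dm-crypt mapping opened with cryptsetup. A minimal sketch of that setup (the /dev/sdb device path is hypothetical, and presumably the --size flags mentioned above were passed at open time; the post doesn't show the exact invocation):

```shell
# Hypothetical device path for the LUKS-tested drive.
DEV=/dev/sdb

# Create the LUKS container (destructive), then open it under the
# name used by the fio job file (/dev/mapper/testssd).
cryptsetup luksFormat "$DEV"
cryptsetup luksOpen "$DEV" testssd
```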

I used the following settings in fio, changing the filename and block size where appropriate.

[global]
bs=4k
ioengine=posixaio
iodepth=32
size=200g
filename=/dev/mapper/testssd
direct=1

[rand-read]
rw=randread
stonewall

[rand-write]
rw=randwrite
stonewall

[seq-read]
rw=read
stonewall

[seq-write]
rw=write
stonewall
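With the job file saved (the ssd-test.fio filename is just an example), a single fio invocation runs all four jobs; the stonewall flags make them run one at a time rather than in parallel:

```shell
# Run the four jobs sequentially; each stonewall waits for the
# previous job to finish before starting the next.
fio ssd-test.fio

# Optionally save the full report for later comparison.
fio --output=luks-4k.txt ssd-test.fio
```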

Results

Interrupt and context-switch rates were generally high, though oddly lower with LUKS.

Sequential IOPS

  • 4k sequential writes with LUKS were 73.7% of max (12424 vs 16855)
  • 4k sequential reads with LUKS were 76.7% of max (14471 vs 18864)
  • 8k sequential writes with LUKS were 71.0% of max (9640 vs 13573)
  • 8k sequential reads with LUKS were 71.8% of max (10744 vs 14966)

Random IOPS

  • 4k random writes with LUKS were 82.2% of max (13919 vs 16924)
  • 4k random reads with LUKS were 80.7% of max (6260 vs 7756)
  • 8k random writes with LUKS were 71.7% of max (9718 vs 13557)
  • 8k random reads with LUKS were 64.7% of max (4222 vs 6526)
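The percentages above are simply the LUKS IOPS divided by the bare-drive IOPS. For example, the 8k random write numbers:

```shell
# LUKS IOPS as a share of bare-drive IOPS, using the 8k random
# write figures from the list above (9718 vs 13557).
awk 'BEGIN { luks=9718; bare=13557; printf "%.1f%% of max\n", 100*luks/bare }'
# → 71.7% of max
```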

Conclusion

My use case is ZFS, with these drives serving as L2ARC/ZIL cache, so I'll be using 8k blocks on LUKS.
