This is a comparison of various block size settings for squashfs paired with nbd (network block device). I wanted to see whether it outperforms standard NFS, and to find the optimal block size settings. The default block sizes appear to be optimal: 128K for squashfs and 1024 for nbd. Not only is nbd + squashfs faster, it also reduces load on the network itself, which helps alleviate the current gigabit (1000 Mbit) backbone bottleneck between switches.
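For context, here is a minimal sketch of the kind of setup under test, assuming the classic port-based nbd-server invocation (newer nbd versions use a config file with named exports); the image path, hostname, and port are placeholders, not the exact configuration used here:

```sh
# Server: export the read-only squashfs image over nbd.
# /srv/usr.squashfs and port 10809 are hypothetical.
nbd-server 10809 /srv/usr.squashfs -r

# Client: attach the export and mount it as /usr.
modprobe nbd
nbd-client fileserver 10809 /dev/nbd0
mount -t squashfs -o ro /dev/nbd0 /usr
```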
Time in seconds – lower is better.
| | 512 nbd | 1024 nbd | 2048 nbd | 4096 nbd | Size (MB) |
| --- | --- | --- | --- | --- | --- |
| 128k squash | 47.86 | 46.78 (default) | 46.73 | 46.86 | 6676.55 |
For comparison, the same test over NFS – lower is better:
NFS time: 58 to 70 seconds
NFS CPU usage: 2%
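For reference, a minimal sketch of how the NFS side of this comparison might be run; the server name and mount point are placeholders, not the actual setup:

```sh
# Hypothetical NFS mount of the same /usr tree.
mount -t nfs fileserver:/usr /mnt/nfs-usr

# Same timed read as the nbd+squashfs runs.
time cat /mnt/nfs-usr/bin/* /mnt/nfs-usr/sbin/*
```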
Times were gathered with:

```sh
time cat /usr/bin/* /usr/sbin/*
du -sh /usr/bin /usr/sbin
```
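Note that for repeatable numbers the page cache should be flushed between runs, otherwise the second run is served from local RAM rather than the network. The drop_caches step below is my own addition to the method, not part of the original commands:

```sh
# Flush page cache, dentries, and inodes so the run hits the network.
sync
echo 3 > /proc/sys/vm/drop_caches

# Re-run the timed read.
time cat /usr/bin/* /usr/sbin/*
```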
I compressed the entire /usr partition but only ran tests against /usr/bin and /usr/sbin. Here are the compression results for the mksquashfs defaults at a 131072-byte (128K) block size:
```
Filesystem size 6836785.50 Kbytes (6676.55 Mbytes)
	49.93% of uncompressed filesystem size (13691786.46 Kbytes)
Inode table size 5792060 bytes (5656.31 Kbytes)
	27.58% of uncompressed inode table size (20997401 bytes)
Directory table size 5228332 bytes (5105.79 Kbytes)
	39.70% of uncompressed directory table size (13169842 bytes)
```
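For reference, a sketch of the mksquashfs invocation that produces an image like this; the output path is a placeholder, and -b 131072 simply makes the default block size explicit:

```sh
# Build a squashfs image of /usr with the default 128K block size.
mksquashfs /usr /srv/usr.squashfs -b 131072
```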
With the ~50% reduction in filesystem size, I see a significant reduction in traffic on the gigabit backbone running between switches.
Another benefit of squashfs+nbd is that the clients are much faster at loading menus and icons: the directory scans needed to find the icons perform very well over nbd+squashfs, compared to the poor performance of directory scans over NFS.
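A rough way to reproduce the directory-scan comparison; the two mount points are hypothetical stand-ins for wherever the nbd+squashfs and NFS trees are mounted:

```sh
# Time a recursive scan of an icon-heavy tree on each mount.
# /mnt/nbd-usr and /mnt/nfs-usr are hypothetical mount points.
time find /mnt/nbd-usr/share/icons > /dev/null
time find /mnt/nfs-usr/share/icons > /dev/null
```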