Dear devs,
first of all, big thanks for your software, which makes distributing small projects that won't make it into the distributors' repositories so much easier :-)
However, it would be really nice if it were possible to achieve better compression, even at the cost of decompression speed. Zstandard certainly provides really fast compression and decompression, but better compression ratios could be achieved using e.g. LZMA, or even another underlying read-only file system like DwarFS.
I use linuxdeploy to create my AppImages. I found the option to set the LDAI_COMP env variable to set compression options. However, this is passed to the bundled mksquashfs binary, which only supports zstd – so there's only one choice for this option. I filed an issue about the compression options at linuxdeploy/linuxdeploy-plugin-appimage#35 – and there, I was advised that I should file an issue about this here (by @TheAssassin ).
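For context, this is roughly how I build the AppImage (a minimal sketch; the AppDir path and the linuxdeploy file name are placeholders from my setup, and as far as I can tell zstd is currently the only value the bundled mksquashfs accepts):

```sh
# LDAI_COMP is forwarded by linuxdeploy-plugin-appimage to the bundled mksquashfs
export LDAI_COMP=zstd
./linuxdeploy-x86_64.AppImage --appdir AppDir --output appimage
```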
I think another compression option would really be worth it when size matters more than speed, or when decompression takes only a split second anyway. Here's a real-life example from my project Muckturnier.org, for which I distribute AppImages:
- The final zstd-compressed AppImage is 8.9M; when I unpack the SquashFS root and tar it, the uncompressed data is 22.0M.
- When I compress the tarball with xz (i.e. LZMA), it shrinks to 6.5M.
- When I compress the tarball with zstd, the result is 8.6M.
- Creating a DwarFS image from the root also yields a 6.5M file.
Supposing the AppImage overhead adds another 0.3M (the size difference between the zstd-compressed tarball and the actual AppImage), my package could be about 6.8M – almost 25 % smaller than the zstd version.
Interestingly, when I create a SquashFS file from the root using an xz/LZMA-enabled mksquashfs from squashfs-tools, the result is 7.4M. So maybe SquashFS itself adds a lot of overhead, or it simply does not compress as well as DwarFS does. Perhaps using DwarFS as the underlying file system would improve compression even further. A rough sketch of how these numbers can be reproduced follows below.
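For reference, this is approximately how the comparison can be reproduced (a sketch: the AppImage file name is a placeholder, the compression levels are just examples, and it assumes squashfs-tools was built with xz support; exact sizes will of course vary):

```sh
# extract the AppImage's SquashFS root into ./squashfs-root
./Muckturnier.AppImage --appimage-extract

# tar the root and compress the tarball with different algorithms
tar -cf root.tar squashfs-root
xz -9 -k root.tar              # LZMA-compressed tarball
zstd -19 -k root.tar           # zstd-compressed tarball

# build a DwarFS image from the same root
mkdwarfs -i squashfs-root -o root.dwarfs

# build an xz-compressed SquashFS for comparison
mksquashfs squashfs-root root-xz.squashfs -comp xz

# compare the resulting sizes
ls -lh root.tar* root.dwarfs root-xz.squashfs
```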
Thanks for maybe considering this!
Cheers, Tobias