I am using a Java binding for the popular LZ4 compression library, in my case the LWJGL binding (org.lwjgl.util.lz4), with the intent of compressing and managing a variety of assets that way. I have working compression and decompression of my custom format, verified by tests, which is basically able to:
- put a bunch of files and a few bytes of metadata together, compress them, and write them to an asset file
- read all the bytes from an asset file, decompress them, and isolate each original file's data from there
So an asset is basically just the raw bytes of a bunch of files put one after another, and that whole blob is what gets handed to the compression.
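For illustration, the packing step looks roughly like this (a simplified sketch, not my exact code; the metadata handling is omitted):

    // Simplified sketch of the packing step (metadata omitted).
    // Uses java.nio.file.Files and org.lwjgl.system.MemoryUtil.
    private ByteBuffer pack(List<File> files) throws IOException {
        var contents = new ArrayList<byte[]>();
        int total = 0;
        for (var file : files) {
            var bytes = Files.readAllBytes(file.toPath());
            contents.add(bytes);
            total += bytes.length;
        }
        var uncompressed = MemoryUtil.memAlloc(total);
        for (var bytes : contents) uncompressed.put(bytes);
        uncompressed.flip(); // ready to be handed to compress(...)
        return uncompressed;
    }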
When putting this into production, however (i.e. not feeding it trivial amounts of bytes like in the unit tests), I noticed that my implementation did not seem to actually make the files smaller. While compression ratio is not my main goal, the fact that next to nothing was getting smaller is still suspicious, because I did expect at least some reduction in file size.
Overall, it comes down to this function:
private void compress(ByteBuffer uncompressed /* contains all byte content */, File output, boolean useHighCompressionFactor) {
    uncompressed.rewind();
    var numberOfBytesUncompressed = uncompressed.remaining();
    // worst-case output size for the given input size
    ByteBuffer compressed = MemoryUtil.memAlloc(LZ4.LZ4_compressBound(numberOfBytesUncompressed));
    try {
        int numberOfBytesCompressed;
        if (useHighCompressionFactor) {
            // useMaxCompression is a field of the surrounding class
            numberOfBytesCompressed = LZ4HC.LZ4_compress_HC(uncompressed, compressed, useMaxCompression ? LZ4HC.LZ4HC_CLEVEL_MAX : LZ4HC.LZ4HC_CLEVEL_OPT_MIN);
        } else {
            numberOfBytesCompressed = LZ4.LZ4_compress_default(uncompressed, compressed);
        }
        if (numberOfBytesCompressed <= 0) throw new IOError(new IOException("Compression failed!"));
        // restrict the buffer to the bytes actually produced before writing
        compressed.limit(numberOfBytesCompressed);
        Files.deleteIfExists(output.toPath());
        try (var writeToFile = new FileOutputStream(output, false).getChannel()) {
            writeToFile.write(compressed);
        }
    } catch (Exception e) {
        throw new IOError(new IOException("Failed to write Asset at '" + output.getAbsolutePath() + "' due to unexpected exception!", e));
    } finally {
        MemoryUtil.memFree(compressed);
    }
}
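For reference, the decompression side is essentially the reverse. A simplified sketch (it assumes the uncompressed size is known, e.g. from the asset's metadata):

    // Simplified sketch of the decompression side; assumes the original
    // size is known (e.g. stored in the asset's metadata).
    private ByteBuffer decompress(ByteBuffer compressed, int numberOfBytesUncompressed) {
        var restored = MemoryUtil.memAlloc(numberOfBytesUncompressed);
        var written = LZ4.LZ4_decompress_safe(compressed, restored);
        if (written < 0) throw new IOError(new IOException("Decompression failed!"));
        restored.limit(written);
        return restored; // caller must eventually MemoryUtil.memFree this buffer
    }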
I consulted the docs as well as similar discussions here on Stack Overflow, but did not find anything wrong with the above code. Again, I want to stress that compressing my files like this works, decompressing them works as well, and I recover the data perfectly no matter what I put in.
But when I load the raw bytes of six different images of grass into uncompressed (about 18 MB of raw RGBA-8 data in total) and then call this function, the asset written to the drive is still pretty much 18 MB in size, even if both useHighCompressionFactor and useMaxCompression are true.
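For scale, that figure is plausible: six 1024×768 textures at 4 bytes per pixel come out to 6 × 1024 × 768 × 4 bytes ≈ 18 MiB (1024×768 is just an assumed resolution for illustration).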
Yes, LZ4 prioritizes speed over compression ratio, but to me it looks like it is doing nothing at all, and that implies I am doing something wrong.
Somebody got an idea?