Thursday, May 26, 2011

LZ4 explained

By popular request, this post explains the inner workings of LZ4, so that any programmer can develop their own version, potentially in a language other than the one provided on Google Code (which is C).

The most important design principle behind LZ4 is simplicity. It makes for simple code and fast execution.

Let's start with the compressed data format.

The compressed block is composed of sequences.
Each sequence starts with a token.
The token is a one-byte value, split into two 4-bit fields (each therefore ranging from 0 to 15).
The first field indicates the length of the literals. If it is 0, there are no literals. If it is 15, more bytes are needed to express the full length. Each additional byte represents a value from 0 to 255, which is added to the running total. A byte value of 255 means another byte follows.
Any number of such bytes can follow the token; there is no "size limit". As a side note, this is the reason why an incompressible input data block can be expanded by up to 0.4%.
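To make this concrete, here is a minimal decoding sketch in C. It is not the reference implementation, and the helper name is mine; it simply reads the token and accumulates the optional literal length bytes :

    #include <stddef.h>

    /* Sketch (not the reference code) : read the token and decode the literal length.
       '*ipp' points to the token on entry, and just past the length bytes on return. */
    static size_t read_literal_length(const unsigned char** ipp, unsigned* token_out)
    {
        const unsigned char* ip = *ipp;
        unsigned token = *ip++;
        size_t len = token >> 4;              /* first 4-bit field */
        if (len == 15) {                      /* 15 : the length continues in extra bytes */
            unsigned char b;
            do { b = *ip++; len += b; }       /* each extra byte adds 0 to 255 */
            while (b == 255);                 /* a 255 byte means another one follows */
        }
        *token_out = token;                   /* the second field is needed later for the match */
        *ipp = ip;
        return len;
    }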

Following the token and the optional literal length bytes are the literals themselves. Literals are uncompressed bytes, to be copied as-is.
There are exactly as many of them as the literal length just decoded. It is possible that there are zero literals.

Following the literals is the offset. This is a 2-byte value, between 0 and 65535. It represents the position of the match to be copied from, counted backwards from the current position. Note that 0 is an invalid value, never used. 1 means "current position - 1 byte". Since 65536 cannot be coded, the maximum offset value is 65535. The value is stored in "little endian" format.
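In code, reading the offset is a simple 2-byte little-endian load; a small sketch (the helper name is mine) :

    #include <stddef.h>

    /* Sketch : read the 2-byte little-endian offset. */
    static size_t read_offset(const unsigned char** ipp)
    {
        const unsigned char* ip = *ipp;
        size_t offset = (size_t)ip[0] | ((size_t)ip[1] << 8);   /* little endian */
        *ipp = ip + 2;
        return offset;   /* 0 is invalid; 1 means "current position - 1 byte" */
    }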

Then we need to extract the match length. For this, we use the second token field, a 4-bit value from 0 to 15. There is a base length to add, which is the minimum length of a match, called minmatch. This minimum is 4. As a consequence, a field value of 0 means a match length of 4 bytes, and a value of 15 means a match length of 19+ bytes.
As with the literal length, on reaching the highest possible value (15), additional bytes follow, one at a time, with values ranging from 0 to 255. They are added to the total to produce the final match length. A byte value of 255 means there is another byte to read and add. There is no limit to the number of optional bytes that can be used this way (this points towards a maximum achievable compression ratio of ~250).
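Decoding the match length mirrors the literal length, with the minmatch base added on top; a sketch along the same lines (again, names are illustrative) :

    #include <stddef.h>

    #define MINMATCH 4

    /* Sketch : decode the match length from the token's second 4-bit field. */
    static size_t read_match_length(const unsigned char** ipp, unsigned token)
    {
        const unsigned char* ip = *ipp;
        size_t len = (token & 0x0F) + MINMATCH;   /* 0 => 4 bytes, 15 => 19 or more */
        if ((token & 0x0F) == 15) {
            unsigned char b;
            do { b = *ip++; len += b; }           /* keep adding bytes ...          */
            while (b == 255);                     /* ... as long as they read 255   */
        }
        *ipp = ip;
        return len;
    }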

With the offset and the match length, the decoder can now copy the repeated data from the already decoded buffer. Note that it is necessary to pay attention to overlapping copies, when the match length is larger than the offset (typically when there are many consecutive zeroes).
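A simple byte-by-byte copy handles the overlapping case naturally, since each copied byte becomes available before it is needed again; a sketch :

    #include <stddef.h>

    /* Sketch : copy 'length' bytes from 'offset' bytes back in the output buffer.
       Safe even when length > offset (overlapping match); a plain memcpy() would not be. */
    static void copy_match(unsigned char** opp, size_t offset, size_t length)
    {
        unsigned char* op = *opp;
        const unsigned char* match = op - offset;   /* within already decoded data */
        while (length--)
            *op++ = *match++;                       /* byte by byte tolerates overlap */
        *opp = op;
    }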

Decoding the match length brings us to the end of the sequence; the next sequence then begins.

Graphically, the sequence looks like this :

[Figure : layout of a sequence : token | literal length bytes (optional) | literals | offset (2 bytes) | match length bytes (optional)]



Note that the last sequence stops right after the literals field.
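Putting the preceding steps together, the whole block decoding loop can be sketched as follows. This is a minimal, unguarded sketch meant only to illustrate the format; a real decoder must also validate lengths and offsets against its buffer limits :

    #include <stddef.h>

    /* Minimal sketch of an LZ4 block decoder (not the reference implementation).
       No bounds checking. Returns the number of decoded bytes written into 'dst'. */
    static size_t lz4_block_decode_sketch(const unsigned char* src, size_t src_size,
                                          unsigned char* dst)
    {
        const unsigned char* ip = src;
        const unsigned char* const iend = src + src_size;
        unsigned char* op = dst;

        while (ip < iend) {
            unsigned token = *ip++;

            /* 1) literal length */
            size_t lit_len = token >> 4;
            if (lit_len == 15) {
                unsigned char b;
                do { b = *ip++; lit_len += b; } while (b == 255);
            }

            /* 2) literals, copied as-is */
            while (lit_len--) *op++ = *ip++;

            /* the last sequence stops right after its literals */
            if (ip >= iend) break;

            /* 3) offset : 2 bytes, little endian */
            size_t offset = (size_t)ip[0] | ((size_t)ip[1] << 8);
            ip += 2;

            /* 4) match length : second token field + minmatch (4) */
            size_t match_len = (token & 0x0F) + 4;
            if ((token & 0x0F) == 15) {
                unsigned char b;
                do { b = *ip++; match_len += b; } while (b == 255);
            }

            /* 5) copy the match byte by byte, which tolerates overlap */
            const unsigned char* match = op - offset;
            while (match_len--) *op++ = *match++;
        }
        return (size_t)(op - dst);
    }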

There are specific parsing rules to respect to be compatible with the reference decoder :
1) The last 5 bytes are always literals
2) The last match cannot start within the last 12 bytes
Consequently, a file with fewer than 13 bytes can only be represented as literals.
These rules are in place to benefit speed and ensure buffer limits are never crossed.
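On the compression side, these rules translate into simple end-of-block guards; a sketch, with illustrative constant names (the reference source uses its own) :

    #include <stddef.h>

    #define LAST_LITERALS  5   /* the last 5 bytes are always literals        */
    #define MF_LIMIT      12   /* no match may start within the last 12 bytes */

    /* May a match start at position 'pos' of an 'input_size'-byte block ? */
    static int match_start_allowed(size_t pos, size_t input_size)
    {
        return (input_size > MF_LIMIT) && (pos < input_size - MF_LIMIT);
    }

    /* May a match starting at 'pos' run for 'len' bytes ?
       It must not run into the last 5 bytes, which remain literals. */
    static int match_length_allowed(size_t pos, size_t len, size_t input_size)
    {
        return pos + len + LAST_LITERALS <= input_size;
    }

    /* Consequently, a block smaller than 13 bytes is emitted as pure literals. */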

Regarding the way LZ4 searches for and finds matches, note that there is no restriction on the method used. It could be a full search, using advanced structures such as MMC, BST or standard hash chains, a fast scan, a 2D hash table, or anything else. Advanced parsing can also be achieved while respecting full format compatibility (this is typically what LZ4-HC does).

The "fast" version of LZ4 hosted on Google Code uses a fast scan strategy, which is a single-cell-wide hash table. Each position in the input data block gets "hashed", using its first 4 bytes (minmatch). Then the position is stored in the table at the hashed index.
The size of the hash table can be modified while keeping full format compatibility. For memory-constrained systems, this is an important feature, since the hash size can be reduced to 12 bits, or even 10 bits (1024 positions, needing only 4 KB). Obviously, the smaller the table, the more collisions (false positives) we get, reducing compression effectiveness. But it nonetheless still works, and remains fully compatible with more complex and memory-hungry versions. The decoder does not care about the method used to find matches, and requires no additional memory.
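A sketch of this fast scan strategy follows; the multiplicative hash is in the spirit of the reference implementation, and HASH_LOG is the tunable table size discussed above :

    #include <stdint.h>
    #include <string.h>

    /* Sketch of the single-cell hash table used by the fast scan.
       HASH_LOG is the tunable parameter : 12 bits -> 4096 cells,
       10 bits -> 1024 cells (only 4 KB of uint32_t positions). */
    #define HASH_LOG  12
    #define HASH_SIZE (1 << HASH_LOG)

    static uint32_t hash_table[HASH_SIZE];   /* positions within the input block */

    /* Hash the first 4 bytes (minmatch) at 'p'. */
    static uint32_t hash4(const unsigned char* p)
    {
        uint32_t sequence;
        memcpy(&sequence, p, 4);
        return (sequence * 2654435761u) >> (32 - HASH_LOG);
    }

    /* For each position : fetch the previous position with the same hash
       (a match candidate) and store the current one in its place. */
    static uint32_t get_candidate_and_update(const unsigned char* base, uint32_t pos)
    {
        uint32_t h = hash4(base + pos);
        uint32_t candidate = hash_table[h];
        hash_table[h] = pos;
        return candidate;   /* may be a collision : the 4 bytes must be verified */
    }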


PS : the format above describes the content of an LZ4 compressed block. But a file, or a stream, of arbitrary size may consist of several blocks. Combining several blocks together is the job of another layer, with its own format, described in the specification document : LZ4 Streaming format.

23 comments:

  1. I read your spec in order to reimplement it from the description. I think it is complete; the only thing that surprised me is that matches are allowed to overlap forwards. It might be worth mentioning. My implementation was intended to experiment with vectorization, but in the end it did not work.

    Replies
    1. Yes, this is a classic LZ77 design.
      With matches authorised to overlap forward, we get the equivalent of RLE (Run Length Encoding) for free, and even repeated 2-byte / 4-byte sequences, which are very common.
      This is in contrast with LZ78 for example, which never takes advantage of overlap. Neither do PPM, BWT, etc.

      That being said, I'm not sure I understand how it prevented your vectorization experiment from working.

      Rgds

    2. With the specification that "the last match cannot start within the last 12 bytes" for compatibility with the reference decoder, it is not quite an equivalent of RLE for free. However, anything that compresses well with RLE is very likely to compress well with LZ4, unless you pick a worst case of unique byte tokens repeated 12 at a time.

    3. To avoid confusion : the minimum match length is still only 4.
      So, if a byte is repeated 12 times, it will be caught by the algorithm.

      The only exception is for the last 12 bytes within input data. Even though this restriction is supposed to have a negative impact on compression ratio, its impact on real-life data is negligible.

  2. Thank you for creating a clear and easily understood specification.

    I believe you should add a specification of the size limit of literal length and match length. As specified currently, a correct decoder must be able to process an infinite number of bytes in either field. The best would be to specify the maximum value (not length) of either field as a power of two. The maximum length can then be inferred.

    There is a typographical error in your specification: "additional" is the correct spelling.

    Replies
    1. It was in the initial spirit of the specification that sizes (of literal length or match length) can be unlimited.
      In practice though, it is necessarily limited, by the maximum size that the current implementation supports, which is ~1.9 GB.
      A future implementation may support larger block sizes though.

      There is also a theoretical issue with limiting the literal length : when compressing an encrypted file, it is possible that the compressed output consists only of literals. In this case, the literal length equals the size of the file, so it cannot be limited.

      Would you mind telling why you think enforcing a limit on length would be beneficial ?

      Typo corrected. Thanks for the hint.

    2. Hello Yann, Perhaps it is pedantic, but with no limit specified, a "correct" implementation is impossible to create.
      Another concern is efficiency. To determine a length field value > 14, "we need to add some more bytes to indicate the full length. Each additional byte then represent a value of 0 to 255, which is added to the previous value to produce a total length. When the byte value is 255, another byte is output."

      I understand you chose to add byte values, rather than use a compressed integer, such as using the bottom 7 bits as the next most significant bits (with the top bit signalling that another byte is needed). I believe this was to reduce the output size when the lengths are small. However, with an unlimited length field, we can have a huge number of bytes representing the length. So it seems there must be a limit on the length, or the compression becomes inefficient.

    3. Regarding maximum size :
      since block input size is currently limited to 1.9GB, what about limiting length sizes to this value too ?

      Regarding length encoding :
      The LZ4 format was defined years ago. Initially, it was just created to "learn" compression, so its primary design was simplicity. High speed was then a "side-effect". Since then, priorities have been a bit reversed, but the format remained "stable", a key property to build trust around it.

      I can understand that different trade-offs can be invented, and may seem better. And indeed, if I had to re-invent LZ4 today, I would probably change a few things, including the way "big lengths" are encoded.

      But don't expect these corner-case scenarios to really make a difference in "normal" circumstances. A few people have already attempted such variants, and their benchmarks show that in most circumstances the difference is small (<1%) when only the length encoding is modified.

      A larger difference can be achieved by modifying the fixed 64KB window, allowing repetitions at larger distances, but with a bigger impact on performance and complexity. (You can have a look at Shrinker and LZnib for example.)

    4. "the difference is small (<1%) if only length encoding is modified": I expect this to be true if only small length values are encoded. This was why I expected a fairly small length limit: I assumed the LZ4 format was only useful in cases where lengths are small, as the length encoding is poor for large lengths. A length of (2^16, 16K) requires 65 bytes, 64K requires 257 bytes, 256K requires 2028 bytes, and so forth. I am not speaking of the algorithm to compute the literals, but simply the length representation. Whether such lengths would be computed, I don't know.

    5. I cannot say what the best limit would be, you would have to decide, as you are the expert. Hopefully I explained why it was surprising the limit is currently infinite, and why I expected a small limit.

    6. Sure. It's possible to introduce the notion of "implementation-dependent limit".

      For example, current LZ4 reference C implementation has an implementation-limit of 1.9 GB. But other implementations could have different limits.

      This seems more important for decoders. So, whenever a decoder detects a length beyond its limit, it could refuse to continue decoding, and send an error message instead.

    7. There's a way to handle the encoding limit without hurting the compression ratio or limiting the total file size (e.g. streaming-compatible) : you just have to specify that a given literal length is never followed by an offset (just like the last block). That way you can easily have a literal limit of 1 GB (30 bits), and if you want to encode literals larger than this, you just stop at 1 GB and start a new literal run, which only takes a few bytes every gigabyte. BTW, thanks for the description and kudos for this smart and fast design!

    8. Yes, I realized that point later on.
      Unfortunately, by that point, LZ4 was already widely deployed, with a stable format. It's no longer possible to change it now.

      A correct "limit" for streaming would probably be something like ~4KB. There is a direct relation between this limit and the amount of "memory buffer" a streaming implementation must allocate.

      Currently, my "work around" to this issue is to use "small blocks", typically 64KB. So the issue is solved by the upper "streaming layer" format.

  3. I'd like to see a bigger version of the tiny picture.

    Would you be so kind and upload one?

    Replies
    1. Which tiny picture are you talking about ? The first one on the top left ? It's just an illustration, taken from http://fastcompression.blogspot.fr/p/compression-benchmark.html

  4. Did you also evaluate Base-128 Varints (how Protocol Buffers encode ints) for lengths? I assume they might be slightly smaller, but slower, since they require more arithmetic operations.

    Replies
    1. Not for the compression format. The loss of speed would be significant.

  5. Can you tell me how much memory is consumed by LZ4 during decompression?

    Replies
    1. The algorithm itself doesn't consume any memory.

      The memory used is limited to the input and output buffers, so it's implementation dependent.

    2. Sir, I have only 64KB of memory, so what do I have to do? What should be defined as the chunk size?

    3. Well, it can be any value you want.
      The LZ4 compression algorithm (lz4.c & lz4.h) doesn't define a chunk size. You can select 64KB, 32KB, 16KB, or even a weird 10936 bytes; there is no limitation. This parameter is fully implementation specific.
      Since I don't know what the source of the data is, what the surrounding buffer environment is, etc., it's not possible to be more precise.

      LZ4 is known to work on systems with specs as low as 1979's Atari XL or 1984's Amstrad. So there is no blocking point in making it work within 64KB.

      Regards

  6. Compressing a directory, or even file attributes, is outside the scope of LZ4. LZ4 has the same responsibility as zlib, and therefore compresses a "stream of bytes", irrespective of metadata.

    To compress a directory, there are 2 possible methods :

    1) On Windows : use the LZ4 installer program, at http://fastcompression.blogspot.fr/p/lz4.html. It will enable a new context menu option when right-clicking a folder : "compress with LZ4". The resulting file will be the compressed directory. You can, of course, regenerate the directory by decompressing the file (just double-click on it).

    2) On Linux : use 'tar' to aggregate the directory content, and pipe the result to lz4 (exactly the same as with gzip).
