Compression notes
=Digging into some details=
==LZW notes==
{{stub}}
For related compression methods, see [[Compression notes]]
=General=
Properties:
* relatively easy to implement.
:: If you're the learn-by-doing kind, you may like this one - you'll make a halfway decent compression algorithm and understand it decently
:: Lots of little edge cases before it's production-ready, though.
* on compression level:
:: for readily compressible things, LZW is pretty decent - repetitive text like logs may become ~10% of original, general text perhaps 40%
:: in general it's not as good as other common and still relatively simple algorithms (like zlib)
:: incompressible data in LZW-coded form may become approx. 110%..130% of the original size (varying with the encoder's response to dictionary growth)
* memory overhead is bounded, and small to moderate (...for a codebook-style compression algorithm)
:: Beyond the input/output (which can be streamed), there's only the dictionary
:: the number of entries in the dictionary is bounded by agreement on the maximum code width. For the somewhat standard 12-bit maximum code width it's fairly small, and even for, say, 20-bit dictionaries it's often manageable.
:: the maximum size of the entries is effectively bounded by the way it is built up. It's not particularly minimized, though there are more-efficient-than-naive implementations
===Concepts, terms, and some implementation/behaviour notes===
{{stub}}
: '''Glossary'''
'''bit width''' - the number of bits currently needed to be able to use all indexes in the table, which is ''ceil(log<sub>2</sub>(dictionary_size+1))''
'''code width''' refers to the number of bits needed to code entries from the dictionary, which depends (only) on the (current) number of entries.
: In practice the maximum size and code width are often decided before you start.
'''code size''' can be a bit ambiguous, often referring to the code width, or sometimes the dictionary size.
'''Dictionary''' (a.k.a. '''codebook''', '''code table''', '''translation table''') refers to a lookup table between uncompressed strings of data, and the symbol (a.k.a. codepoint) it is assigned.
The '''dictionary size''' refers to the number of entries in the dictionary.
'''Index''' usually refers to a position within the dictionary (referring to the string of data that position of the dictionary stores). The compressed data consists of a list of such indices.
'''Symbol''' usually refers to the indices. Below, symbols and indices are synonymous.  (In a formal-language-theory context it could also refer to the elements of the uncompressed data. Sometimes 'elements' is used to refer less ambiguously to units of the uncompressed input.)
<!--
A complete system consists roughly of:
* Compression: Uncompressed input elements (usually bytes) &rarr; symbols &rarr; bits in a compressed bitstream
* Decompression: Bits in a compressed bitstream &rarr; symbols &rarr; uncompressed output
The LZW algorithm primarily describes the transition between uncompressed data and symbols.
Going between symbols and bit-packed data can be considered a separate implementation detail,
though in implementations it is often entangled, as that allows for some cleverness and optimization.
: '''LZW variations'''
There is no single LZW implementation.
There are a few major design decisions (meaning there are clever variations incompatible with other forms),
and specific-case assumptions.
They include:
* The initial state (of encoder and matching decoder), which consists of:
** the minimum/initial code width {{comment|(actually slightly fuzzy concept. If you make entries for 8-bit input (2<sup>8</sup>=) 256 entries ''and'' have control codes, then your code width ''in practice'' will always be at least 9)}}
** dictionary entries for all possible input values. Often directly implied by the minimum code width. In general-purpose-byte-data coders this is 256, for specific purpose coders it can be smaller, sometimes larger.
** dictionary entries for the control codes your implementation uses (if any)
** the maximum code width, because it's usually a good idea to put a bound on the dictionary size (implementation decision, see notes below)
* Method (and variations), including:
** response to control codes
** dictionary bounding method
** what the dictionary value elements are -- usually bytestrings, but e.g. GIF codes the color indices an image contains, so in a 16-color GIF case they would be strings of 4-bit values
These decisions/assumptions must match between encoder and decoder (often fixed by a standard, specific implementation, or de facto convention), which means distinct implementations often cannot interpret data from other implementations.
Sometimes a few aspects may be communicated instead of predefined (e.g. GIF's minimum code size, because the data's bit-width may be smaller than a byte, for few-color images).
Encoders can do things that affect the encoded stream but ''not'' the decoded stream.
For example, variable-bit LZW compressors that support clear codes may choose to use it after a bunch of incompressible data, to bring down the bit-width and/or make the dictionary more fit for the next chunk of data.
(you can even do some fancy analysis to get slightly better compression)
: '''Dictionary growth and bounding'''
The dictionary grows while compressing (and decompressing).
As more entries are added to the dictionary, it will occasionally cross a power-of-two size, which means the code width will increase by one. 
Because the compressed stream is a list of symbols at (in variable-bitwidth coders) the current bit-width, how applicable the dictionary entries are plays a role in coding efficiency.
If the dictionary grows without more entries being used (which can happen when random-looking data passes through), the compression ratio will worsen.
The way entries are added makes them potentially useful on average, but not optimal.
As the number of dictionary entries increases, so does the number of entries that won't be useful to the compression, or used at all.
Because of this, an upper bound is often placed on the dictionary size.
Once that limit is reached, there are a few different things you can do:
* Perhaps the simplest option is to not alter the dictionary anymore, but this won't adapt to new patterns
** A bad idea for long inputs (with different sorts of data at different points), which will all be coded with codes built from a small portion at the start
* Also simple is to reset the dictionary to its initial state, effectively starting over
** Sub-optimal in that the compression ratio immediately after a reset will be lowered
** Decent for long unpredictable inputs, since it will adapt to new patterns, effectively coding them as independent parts
* Something else reproduced on both sides, such as continuously replacing the least-visited entry in the codebook
** Handy in that it will refine the quality of the codebook, removing unused and little-used entries. Still not optimal, though.
** not exclusive with the clear code - the encoder can try to be very clever to see what combinations give good compression
: '''Separation of algorithm and bitpacking'''
You might like separation in function, and therefore try to separate the symbol generation step from the step that packs the resulting bits into bytes (in the encoder) and out of bytes (in the decoder).
Yes, you'd buffer all the data which you could in theory stream out, but that's a smallish price most of the time.
Two major factors in whether and where you can do this are dictionary bounding and the fixed/variable bitpacking choice.
Variable-width coding means you can't do this in the decoder, because it reads chunks whose size is determined by the current dictionary state.
You can still do it in the encoder, ''if'' it somehow marks where the width changes happen.
If you support clear codes or any other dictionary bounding that invalidates dictionary entries (anything other than "keep as-is"), you cannot separate the steps in the encoder either, because the bit sequences you map to depend on the ''current'' state of the dictionary (whereas without this, just the final state would be enough).
So for a general purpose LZW implementation you cannot separate the two - although with generator-style code (if possible) you can get a decent enough separation while streaming.
-->
===Encoding input data to symbols/indices===
<!--
Start with a dictionary that has one entry for each possible input value.
For byte input (very common) that means 256 entries.
For some specific cases/implementations it may differ; for example, for 16-color GIF images that would mean 16 entries (and an initial code width of 4).
In many real-world implementations there may also be some more initial codes, often control codes, most commonly two: a clear code and an end-of-input code (details later).
The encoding procedure is to, while there is still data in the input:
* while the current string is in the dictionary (and we are not at end of the input), read and add the next input element
** you now have the symbol for the longest matching string in the dictionary, and (except at end of input) the next character in the input
* emit the symbol into the symbol stream
* (in all steps but the last) Add a new entry to the dictionary, namely the concatenation of:
** the string that we just emitted the symbol for
** the next input element that we read earlier
For example, assume our input is <tt>ababababa</tt> - and for brevity of the dictionary in the example, that the input can only contain <tt>a</tt> and <tt>b</tt>.
The initial dictionary (assuming our system uses control codes) might look like:
0 a
1 b
2 clear code
3 EOI code (a.k.a. stop code)
After five iterations...
*  a is the longest match, b is next. Emit 0, add ab (becomes 4)
*  b is the longest match, a is next. Emit 1, add ba (becomes 5)
*  ab is the longest match, a is next. Emit 4, add aba (becomes 6)
* aba is the longest match, b is next. Emit 6, add abab (becomes 7)
*  ba is the longest match, and there is no more data. Emit 5.
...we have:
output stream: 0 1  4  6  5
(representing) a b ab aba ba
and the dictionary contains:
0 a
1 b
2 clear code
3 EOI code
4 ab
5 ba
6 aba
7 abab
Notes:
* our output symbol list is a list of integers
** which given our current codebook size could fit within a 3-bit code width (but more on bit coding later).
* entry 7 was never used. This happens regularly
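To make the procedure above concrete, here is a minimal encoder sketch in Python. It is an illustration under simplifying assumptions (byte input, no control codes, no bound on dictionary growth), not any particular implementation; the function name is made up.

 # Minimal LZW encoder sketch: byte input, no control codes, unbounded dictionary.
 def lzw_encode(data: bytes) -> list[int]:
     dictionary = {bytes([i]): i for i in range(256)}   # one entry per possible input value
     current = b""
     symbols = []
     for byte in data:
         candidate = current + bytes([byte])
         if candidate in dictionary:
             current = candidate                        # keep extending the longest match
         else:
             symbols.append(dictionary[current])        # emit the symbol for the longest match
             dictionary[candidate] = len(dictionary)    # add: emitted string + next element
             current = bytes([byte])
     if current:
         symbols.append(dictionary[current])            # emit the final match
     return symbols

Running this on b'ababababa' with the 256-entry initial dictionary gives the same shape of output as the two-symbol example above, just with different numbering (97, 98, 256, 258, 257).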
-->
===Decoding symbols/indices to uncompressed data===
<!--
Given matching initial state (assumptions) and method, the decoder can reconstruct both the dictionary {{comment|(because entries are all concatenations of existing entries)}} and, since it will have the same entries at the relevant time, also the data.
Emulating the encoder's buildup of the dictionary is conceptually slightly more complex than the steps in the encoder, largely because the decoder is slightly behind in entries.
The decoding procedure: {{comment|(o short for old, also often named prefix)}}
* read symbol into ''o''
* output dictionary's data for symbol ''o''
* While there are still input symbols:
** read symbol into ''n''
** if n currently in dictionary:
*** output dictionary's data for symbol ''n''
*** add new dictionary item: concatenation of ''o'' and ''n''[0]
** if n not currently in dictionary: (we figure out what the encoder would have made it{{verify}})
*** add new dictionary item: concatenation of ''o'' and ''o''[0]
*** output dictionary's data for the just-added entry
** o=n
Continuing the example above, there are five inputs to handle:
* 0 - Necessarily present. Emit a
* 1 - Present. Emit b,  add  ab  (comes from a+b, becomes 4) (o now 1, 'b')
* 4 - Present. Emit ab, add  ba  (comes from b+a, becomes 5) (o now 4, 'ab')
* 6 - Not present. Add and emit what it would be: aba  (comes from ab+a, becomes 6) (o now 6, 'aba')
* 5 - Present. Emit ba, Add abab (comes from aba+b, becomes 7)
...which produces the original output, and the same dictionary as the encoder made.
Note:
* The control codes were there partly to point out that they're not in the way - you do have to be consistent between encoder and decoder.
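Continuing the minimal Python sketch from the encoding section (same simplifying assumptions: byte input, no control codes, unbounded dictionary), the matching decoder looks like:

 # Minimal LZW decoder sketch, mirroring the encoder sketch above.
 def lzw_decode(symbols: list[int]) -> bytes:
     if not symbols:
         return b""
     dictionary = {i: bytes([i]) for i in range(256)}
     o = symbols[0]
     output = [dictionary[o]]
     for n in symbols[1:]:
         if n in dictionary:
             entry = dictionary[n]
             dictionary[len(dictionary)] = dictionary[o] + entry[:1]   # o + n[0]
         else:
             entry = dictionary[o] + dictionary[o][:1]                 # the one-behind case: o + o[0]
             dictionary[len(dictionary)] = entry
         output.append(entry)
         o = n
     return b"".join(output)

Feeding it the symbols produced by the encoder sketch reproduces the original bytes, and builds the same dictionary along the way.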
-->
===Between symbols and bitstream===
<!--
The basic algorithm only says something like 'output the symbol/index to the compressed stream', not how.
Classroom demonstrations may just show the symbol/index stream as-is since they're primarily demonstrating the algorithm,
but since we're talking about compression we may as well try to do it efficiently.
The simplest decent method is to choose the smallest bit-width that can be used to code all of the dictionary (''ceil(log<sub>2</sub>(dictionary_size))''), and use bit-packing to pack that into a resulting byte stream.
For example, if the dictionary grew to 700 entries, then we'll need 10 bits to represent each symbol/index. We can pack the bits into bytes (every 8 codes will fit into 10 bytes) and be done with it.
This can be called '''fixed-width bit packing'''. Note that coding does not start producing that stream until we know the dictionary will no longer grow (usually meaning 'when it's done with all the input'{{verify}}), and the decoder will need a mechanism to know that size before it starts decoding.
Now, for an 8-bit-minimum, 12-bit-maximum setup, for the first few hundred input items the dictionary could still be coded with 8 and 9 bits - but for inputs of a few kilobytes, fixed-width coding is already likely to settle on 11 or 12 bits.
The idea of '''variable-width bit packing''' (a.k.a. ''dynamic LZW'' and various other names) is that you could just use the ''current'' width, as long as encoder and decoder agree on what that is -- and they do, because the dictionary will be the same at the same point in the stream.
...with one difference: the decoder is always one dictionary entry behind, which means that in decoding you need to test whether the ''next'' dictionary add (''given'' to be about to happen) would grow the dictionary's bitwidth.
Note that variable width coding implies the initial code width, meaning the encoder does not have to buffer all the input before it can start writing the compressed stream, and doesn't have to communicate the bit width to the decoder.
Notes:
* Doing all this bit packing and unpacking, particularly doing it efficiently, takes a little care.
* Different LZW implementations use different bit order within bytes{{verify}}. For example, GIF is LSB, TIFF is MSB.
* A system supporting clear codes can keep in mind that its use means resetting to shorter-width codes. How and when this would be good for compression is usually not so easy to prove or calculate, but it may well be a good idea.
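As an illustration of the packing itself, here is a sketch of LSB-first (GIF-style) packing at a single known code width, in Python; the function name is made up, and variable-width packing is the same idea except the width changes as the dictionary grows (so in practice it is interleaved with the dictionary updates).

 # Sketch: pack symbols into bytes, LSB-first (GIF-style), at one known code width.
 def pack_lsb(symbols: list[int], width: int) -> bytes:
     buffer, bits, out = 0, 0, bytearray()
     for symbol in symbols:
         buffer |= symbol << bits        # place the symbol above the bits already waiting
         bits += width
         while bits >= 8:                # flush whole bytes as they become available
             out.append(buffer & 0xFF)
             buffer >>= 8
             bits -= 8
     if bits:
         out.append(buffer & 0xFF)       # flush the final partial byte
     return bytes(out)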
-->
===Other notes===
<!--
: Dictionary optimizations
Dictionary entries always build on existing dictionary entries.
This suggests you can save memory by storing entries in a data structure that implies the concatenations, rather than storing each of them fully.
More interesting is creating a structure that allows the algorithm to do the searches it needs efficiently.
Depending on the intended balance between speed and memory conservation, you could use a tree, a trie, and various other solutions.
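For example (a sketch with made-up names, for byte input): entries can be stored as (prefix code, last element) pairs, so each entry costs a constant amount of memory, and a full string is only materialized by walking the prefix chain when needed.

 # Sketch: store each dictionary entry as (prefix code, last element) rather than the full string.
 def expand(dictionary, code):
     parts = []
     while code is not None:
         prefix_code, last_element = dictionary[code]
         parts.append(last_element)
         code = prefix_code
     return bytes(reversed(parts))

 # Initial entries: no prefix, just the single input value.
 dictionary = [(None, value) for value in range(256)]
 # Adding "previous match plus next element" during coding is then just:
 # dictionary.append((previous_code, next_element))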
: end of input
Various LZW implementations have a special code called end of input (EOI), for one of a few reasons:
* Make it explicit when a compressed stream ends - without this it is only implicit, detected by the fact that the input data doesn't have enough ''bits'' left to produce another value.
* Paging, which seems to refer to coding multiple distinct chunks like ''CLEAR, chunk's contents, EOI'' (...and finishing when there is no more data)  {{verify}}
: Initial dictionary size, input value use
Consider an 8-color paletted image, in memory using only byte values 0..7. GIF would take that and create an initial dictionary with eight colors, a clear code and EOI code. That's 10 entries, starting off with 4bpp coding.
It does mean one extra byte needs to be sent, to imply the amount of initial entries, but this is negligible overhead and worth it for simple images:
The difference between the 8-color case and treating the input as bytes is 248 initial entries you would never use and can now use for compression codes. In variable-width bitpacking you may also spend a while before you even get to the 8-bit or 9-bit coding (with control codes) you would use for byte input.
And if you don't mind creating yet another LZW variant then something similar goes for text.
For example, English text in ASCII regularly only uses byte values within 10..126. You could send both of those values along (spending 2 extra bytes) and have both sides create 117 initial values in the dictionary. This means a bunch more entries in the dictionary that can be used for compression-code entries, and some time spent coding the input as 8-bit instead of 9-bit codes.
Or you could write a bitmask to signal which of the actual byte values are used - which is a more generic solution and for some input might be slightly more effective - though it does mean (256/8=)32 extra bytes to send.
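A sketch of that last idea in Python (made-up function name): derive the reduced initial dictionary, plus the 32-byte bitmask you would transmit so the decoder can build the same one.

 # Sketch: reduced initial dictionary from the byte values actually present in the input.
 def initial_dictionary(data: bytes):
     used_values = sorted(set(data))
     bitmask = bytearray(32)                    # 256 bits: which byte values occur
     for value in used_values:
         bitmask[value // 8] |= 1 << (value % 8)
     dictionary = {bytes([value]): index for index, value in enumerate(used_values)}
     return dictionary, bytes(bitmask)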
-->
===See also===
* http://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Welch
* http://warp.povusers.org/EfficientLZW/
* [http://www.google.com/search?q=tips%20for%20the%20lzw%20decoding%20algorithm Web search: 'Tips for the LZW decoding algorithm']
=Specific implementations=
===In GIF===
<!--
GIF image data is
* initial code width (in a single byte. Some details to this; see notes below)
* packed bits-in-bytestream (wrapped in the 255-byte-at-most-at-a-time blocks that GIF uses in general)
LZW-specific details
* variable-width bit packing
* little-endian bit order
* max code width is 12      {{comment|(presumably minding the memory constraints of the time)}}
* initial state of dictionary is:
** 2**initialcodewidth entries
** CLEAR code
** EOI code (used to mark the end of this data)
** Note: Because of those two control codes, the code width increases by one before any data is handled, and the first compression code is 2+2**initialcodewidth
* Stream is started with a clear code
: Historically
Using the LZW algorithm from 1984 to 2004 violated the patent on it,
but it seems that (LZW being a specific member of the LZ family meant that) you could decompress without such trouble,
and not using the LZW method to compress data meant that you could fairly easily create LZW data that doesn't actually compress the data - so you could avoid patent problems by writing uncompressed GIFs.
A number of libraries and programs still do this (though it wouldn't be very hard to add a real LZW encoder/decoder now).
Compression step consists of:
* decide code size (usually the same number as the color bits, so up to 8, but can't be 1 because of LZW details; must be coded with 2 bit codes; see below)
* Compress image pixels to compression codes
* the compression codes are a bitstream, with variable amounts of bits; pack it into bytes.
* write into the file
** write the code size
** wrap the bytes into blocks of at most 255 bytes, preceded by the block's length
Uncompressed GIFs use only the codes that map to themselves (each pixel coded as its own initial-dictionary entry).
The encoder also inserts an occasional clear code so that the bit width doesn't increase -
You'll need 9 bits per pixel to code an 8bpp GIF - which means uncompressed data is actually larger than the pixel data it comes from - but without clear codes it might widen to 10, 11, or 12.
How often there ''needs'' to be a clear code depends on the current code size (2<sup>codesize</sup>-2).
Two-color (1bpp) images are the worst case; they need to be 2 bits to allow for the control codes, ''and'' every other code needs to be a clear code to avoid it becoming 3bpp. This means two 2-bit codes per 1-bit pixel, four times the original size.
For 8bpp it's 9 bit width and a clear code every 254 codes, which means it's about ~13% larger than the original data.
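A sketch of that trick in Python (made-up names; this only generates the symbol stream, so bit packing and GIF's block wrapping are still separate steps):

 # Sketch: "uncompressed" GIF symbols - each pixel as its own code, with a clear code
 # every 2**code_size - 2 symbols so the code width never grows past code_size + 1.
 def uncompressed_gif_symbols(pixels, code_size):
     clear_code = 2 ** code_size
     eoi_code = clear_code + 1
     yield clear_code                           # GIF streams start with a clear code
     since_clear = 0
     for pixel in pixels:
         if since_clear == 2 ** code_size - 2:
             yield clear_code
             since_clear = 0
         yield pixel
         since_clear += 1
     yield eoi_code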
-->



📃 These are primarily notes, intended to be a collection of useful fragments, that will probably never be complete in any sense.

Stuff vaguely related to storing, hosting, and transferring files and media:

Utilities and file formats (rather than methods and algorithms)

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

General purpose

ZIP

Commonly used, mostly because it caught on so well in the mid-nineties that it became a generic brand name.

Created by PKWare, as PKZIP, though many will know the WinZip product (not from PKWare. There was some compatibility kerfuffle about encryption at the time). (verify)


There have been a bunch of revisions/additions (see version history), and the number of underlying compression formats has increased over time.

Of the compression formats, some are outdated, many are specialized, and there is optional encryption.


This also means not everything can open every zip file. To avoid compatibility problems, ZIP files created for wide compatibility stick to DEFLATE, which has basically not changed in twenty years.



compression methods

General purpose, little point because basic deflate is usually better:

  • 0: Store (no compression)
  • 1: Shrink
  • 2: Reduce (compression factor 1)
  • 3: Reduce (compression factor 2)
  • 4: Reduce (compression factor 3)
  • 5: Reduce (compression factor 4)
  • 6: Implode

General purpose, more common:

  • 8: Deflate (the one that everything supports)
  • 9: Deflate64 ('Enhanced deflate')
  • 14: LZMA
  • 98: PPMd (since WinZip version 11(verify))
  • 12: bzip2 (since WinZip version 11(verify))

Media-specific compression methods:

  • 96: Compressed lossless JPEG
  • 97: WavPack

Other:

  • 10: Old IBM TERSE
  • 18: New IBM TERSE
  • 19: IBM LZ77 z
  • 7: "Reserved for Tokenizing compression algorithm" (?)

multi-part ZIP

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


Multipart zip files were once used to get around the maximum size of a medium (in particular floppies, but also CDs), or a maximum supported/allowed file size (e.g. email attachments, FAT32).

(Note that multi-part ZIPs are now often created as Zip64 for large-file reasons(verify))


split zip

These are typically named z01, z02, etc., and the .zip is the last part (not the first as would be quite sensible to assume).

These files are mostly just the compressed bytes split at arbitrary points, except that the offsets in each part's headers are relative to the start of that part.

Which means that just concatenating them together does not make a correct zip file, though a script that fixes the offsets while concatenating, or even fixes them after naive concatenation, is relatively simple, and there are tools to do both.

Ideally your uncompressor just knows about split multipart zip, though, because it's fewer steps.


spanned zip

Spanned zip is the same as the above in terms of file contents, different only in the file naming, specifically all parts have the same name, but reside on different media.

For example, a set of floppies all have game.zip on them, which happen to be sequential parts known only via physical labeling on the floppies.

Relatively rare, in that it's impractical to keep all these files on one medium (you'd keep them in separate directories, or more probably, rename them to be able to handle them).

See also [1]



Other

7zip (when handling non-7z zips)

doesn't create multi-part zips; it simply splits the whole stream at arbitrary points (it doesn't alter the offsets),
and names them name.zip.001, name.zip.002, etc.
Extraction:
7zip obviously understands what it made itself.
For other tools you would directly concatenate these files, which creates a correct single zip file.
https://superuser.com/questions/602735/how-could-i-portably-split-large-backup-files-over-multiple-discs/602736#602736


ZIPX

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

From the WinZip team, based roughly on ZIP but different enough to be a separate format.

Not widely supported.

RAR

Compresses better than ZIP, also proprietary but comparably available, so was used instead of ZIP in certain areas.

Uses LZ and PPM(d)

7z

The .7z archive format focuses on LZMA, which performs noticeably better than ZIP's classical DEFLATE, and regularly a little better than other alternatives such as RAR, bz2, gzip, and others.

The 7zip program is useful as an interface that handles many formats, but the 7zip format itself is not supported by many others.


The 7zip interface uses plugins to support external formats. When you use it from the CLI and have multiple 7z commands, the difference is that:

  • 7z: the variant you can tell to use plugins
can read/write .7z, .zip, .gz, .bz2, .tar files, and reads from .rar, .cab, .iso, .arj, .lzh, .chm, .Z, .cpio, .rpm, .deb, and .nsis
  • 7za: a standalone variant, which handles only .7z, .zip, .gz, .bz2, .tar, .lzma, .cab, .Z
  • 7zr: a lightweight standalone version that handles only .7z
  • 7zg: GUI

In general (e.g. Linux CLI) you can use 7z (some packages only ship 7za) and forget about the rest.


gzip

Based on DEFLATE, which is a combination of LZ77 and Huffman coding.

Fairly widely supported (things like WinRAR and 7zip can also open these).

Compression amount/speed tradeoff:

  • has -1 to -9 (and also --fast and --best, respectively 1 and 9)
  • particularly -8 and -9 seem to take a lot more time than is worth it compression-wise.
  • default is -6, which seems a practical choice
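The same tradeoff is exposed when compressing from code rather than the CLI; for example, in Python's gzip module (the file name here is just a placeholder):

 import gzip

 data = open("example.log", "rb").read()          # placeholder input
 fast = gzip.compress(data, compresslevel=1)      # roughly gzip --fast
 default = gzip.compress(data, compresslevel=6)   # gzip's default
 best = gzip.compress(data, compresslevel=9)      # roughly gzip --best, notably slower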

bzip2

Based on BWT and Huffman.

Generally gives better compression than deflate (e.g. gzip, classic/compatible ZIP), and a little more resource-intensive.

Like gzip, this is generic enough that various other tools (e.g. 7zip) can also open these.

Compression amount/speed tradeoff:

  • -9 is the default (verify)
  • compression seems to level off to sub-percent size differences at and above -7
  • Time taken seems to be fairly linear from -1 to -9.
...so if you only care about having compression at all, you could just use -1 (it's perhaps twice the speed of -9)

This seems to mean there's not a sweet spot (as clear as e.g. with gzip) without specifically valuing time over drive space, or the other way around.

pax

https://en.wikipedia.org/wiki/Pax_(Unix)


xar

Used around OSX,

apparently replaced gzipped pax

https://en.wikipedia.org/wiki/Xar_(archiver)

XIP

.xar which can be digitally signed.

Used around OSX

https://en.wikipedia.org/wiki/.XIP

ARC

ARJ

Seen in the nineties, rarely used now.

Had useful multi-file handling before ZIP really did, so saw use distributing software, e.g. in BBSes.


There was a successor named JAR, not to be confused with the Java-related JAR archives.

ACE

Seen in the late nineties, early noughties. Similar to RAR in that it outperforms ZIP, but RAR is more popular.

LHA/LZH (LHarc)

Used on the Amiga and for some software releases.

Now rarely used in the west, but still used in Japan, and there was an LZH compressed folder extension for Windows XP, analogous to the zipped folder support in Windows elsewhere.

There was a successor named LHx, then LH.


xz

LZMA algorithm, often with the file extension ".xz"

lzip

LZMA compressor.

In terms of compression per time spent

lzip -0 is comparable to gzip -6
lzip -3 is comparable to bzip2 -9 / xz -3
lzip -9 is comparable to xz -9(verify)


Apparently also seems a little better at error recovery(verify) [2]


lzip and xz are both LZMA family; xz is more recent, though there are arguments why it's not the best design as an archiver, whereas lzip just does the one thing it does.


http://www.nongnu.org/lzip/lzip.html

Parallelized variants, and speed versus compression

tl;dr:

  • Parallelized variants are mostly:
    • for bzip2 there's lbzip2 and pbzip2
    • for gzip there's pigz
    • for lzip there's plzip
    • for xz there's pxz and pixz
  • if you care more about speed: pigz -3
  • if you care more about space: lbzip2 -9 (-9 is the default, so is optional to specify)
  • if you care even more about space, you're probably already an xz user and have no speed expectations.



Some details:

  • pigz speed: pigz -3 is ~50% faster than the default -6, at still decent compression
  • pigz -11 is zopfli, a few percent better but much slower
  • lbzip2:
speed barely changes with compression level, so you may as well use -9, also the default
(-1 seems slightly slower than the rest, even. Yeah, weird)
memory hungrier than others, also at (many-thread) decompression
  • pigz versus lbzip2 - draw your own conclusions from below and tests on your own typical data(!), but it seems:
lbzip2 compresses better at all settings (one test file: lbzip2:24% pigz:35%)
pigz -3 noticeably faster than lbzip2 at all settings ((verify))
pigz -7 (and higher) are slower than lbzip2
I've heard that lbzip2 scales better with cores, so on beefy dedicated boxes it would then be faster; TODO: check
  • pbzip2 is slower than lbzip2 at higher compression, similar speed at lower compression
  • pxz and pixz are slower than the above at higher compression (...what did you expect?)
(more comparable at lower compression - have not checked to what degree)


Quick and dirty benchmarks:

(on tmpfs, so negligible IO) on a 218MB, fairly compressible floating point number data file

On an AMD FX6300:

  • pigz -1 compresses it to 80MB (36%) in 1.0sec
  • pigz -2 compresses it to 77MB in 1.1sec
  • pigz -3 compresses it to 76MB in 1.3sec
  • pigz -4 compresses it to 75MB in 1.4sec
  • pigz -5 compresses it to 75MB in 1.8sec
  • pigz -6 compresses it to 75MB in 2.2sec
  • pigz -7 compresses it to 75MB in 2.4sec
  • pigz -8 compresses it to 75MB in 4.2sec
  • pigz -9 compresses it to 75MB (34%) in 11.5sec


  • lbzip2 -1 compresses it to 55MB (25%) in 3.0 sec (yes, slower than -6!)
  • lbzip2 -2 compresses it to 51MB in 2.7 sec
  • lbzip2 -6 compresses it to 48MB in 2.7 sec
  • lbzip2 -9 compresses it to 46MB (21%) in 2.8sec


  • pbzip2 -6 compresses it to 48MB in 3.3sec


Similar tests on a dual Xeon E5645 were approximately twice as fast, so it seems to scale roughly linearly, going by passmark score.


Decompression speed

decompression - of the three, gzip decompresses faster than xz, both decompress faster than bz2

Less practical nerdery

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

There is a whole set of compression types that are exercises in getting very high compression at the cost of a bunch of time.

They are usually barely command line utilities, let alone easy to use.

These include:

...and many others.

Other

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.


Lossless methods

Entropy coding (minimum redundancy coding)

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

http://en.wikipedia.org/wiki/Entropy_coding

Mostly per-symbol coding - where symbols are logical parts of the input, often fixed-size things, e.g. a character or byte in a file, a pixel of an image, etc.

Probably the most common symbol-style coding, contrasted with variable-run coding, which is often dictionary coding.


Modified code

Huffman

See also:

Adaptive Huffman (a.k.a. Dynamic Huffman)

See also:

Shannon-Fano

Similar to Huffman in concept and complexity.

A little easier to implement, so a nicer programming exercise, yet Huffman typically performs better.


See also:

Elias, a.k.a. Shannon-Fano-Elias

See also:

Golomb coding

http://en.wikipedia.org/wiki/Golomb_coding

Rice coding

https://en.wikipedia.org/wiki/Golomb_coding#Rice_coding

universal codes

In itself mostly a property. https://en.wikipedia.org/wiki/Universal_code_(data_compression)

Fibonacci coding

http://en.wikipedia.org/wiki/Fibonacci_coding


Elias gamma coding

https://en.wikipedia.org/wiki/Elias_gamma_coding

Exp-Golomb (Exponential-Golomb)

http://en.wikipedia.org/wiki/Exponential-Golomb_coding

Range encoding

http://en.wikipedia.org/wiki/Range_encoding

Arithmetic coding

https://en.wikipedia.org/wiki/Arithmetic_coding


Dictionary coding

Dictionary coders work by building up a dictionary of "this short code means this longer data".

The method of building that dictionary varies, and most of the work is in making a dictionary that is efficient for a given set of data.


Deflate/flate

Primarily combines LZ77 and Huffman coding.

Used in ZIP, zlib, gzip, PNG, and others.

See also RFC 1951

Modern compressors may still produce standard deflate streams for compatibility. For example, ZIP's deflate is still the most ubiquitously supported coding in ZIP files, and #zopfli can be used as a drop-in in gzip (and in theory zip).


(ZIP also uses Deflate64 (see its #compression_methods), which is a relatively minor variation)

LZ family and variants

Referring to an approach that Abraham Lempel and Jacob Ziv used, and various others since.


LZ77 and LZ78 (a.k.a. LZ1 and LZ2)

Refer to the original algorithms described in publications by Lempel and Ziv in 1977 and 1978.

http://en.wikipedia.org/wiki/LZ77_and_LZ78


LZW

Short for Lempel, Ziv, Welch, refers to an extension of LZ78 described in 1984 by Terry Welch.


Used in various places, including GIF, TIFF, Unix compress, and various others.

Adoption was limited because of patents that applied from 1983 to 2003 (or 2004, depending on where you lived).


See also:

LZSS

Lempel-Ziv-Storer-Szymanski (1984)

Used by PKZip, ARJ, ZOO, LHarc

http://en.wikipedia.org/wiki/Lempel-Ziv-Storer-Szymanski

LZMA

Lempel-Ziv-Markov chain Algorithm (since 1998)

Still much like LZ77/deflate, but with smarter dictionary building(verify).

Default method in 7Zip. An optional method in some new versions of the ZIP format.

http://en.wikipedia.org/wiki/LZMA

LZMW

LZW variant (letters stand for Miller, Wegman)

http://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Welch#Variants

LZAP

LZW variant (letters stand for all prefixes)

http://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Welch#Variants

LZWL

Syllable-based LZW variant

http://en.wikipedia.org/wiki/LZWL

LZO

Lempel-Ziv-Oberhumer

Built for very fast decompression (and can do it in-place(verify)).


Compression speed is more of a tradeoff, though it is typically used at lower compression settings, to e.g. get ~half the compression ratio of gzip at ~four times the speed, because usually the point is a CPU-cheap way to store more data / move it faster.


You can also make it work harder to get compression ratio and time quite comparable to gzip -- while retaining the high-speed low-resource decompression.


https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Oberhumer

LZF

Largely comparable to LZO, but designed for more constrained memory use (verify)


LZ4

Basically a variant of LZO - fast to decompress, but even during compression it prefers speed over compression ratio (by not being as exhaustive as typical DEFLATE style compression).

You can think of it as a "what compression can I get at minimal CPU cost?", in that it can compress at hundreds of MByte/s per modern core (order of magnitude).


It is an option in things like ZFS, a filesystem that's biased toward archiving, where a little CPU and memory overhead on writes is considered worth it. ZFS also tries to (quickly, early in the data) detect when no compression should be used at all, which adds to the 'minimal CPU cost' aspect.


Comparison between LZO and LZ4 seems to be:

  • LZ4 is done faster
  • LZO can compress a little better
  • LZO uses less memory


https://en.wikipedia.org/wiki/LZ4_(compression_algorithm)

LZ4HC

Variant of LZ4

  • same cheap decompression speed
  • typically 20% better compression, but at 5x to 10x slower compression speeds
  • no fast decision to not compress (verify), so e.g. in ZFS you generally don't want it over LZ4

...i.e. it's meant to save storage cost for archiving and backups, on hosts that otherwise don't use their CPU much.

...where explicit compression (e.g. throw it at bzip2 or xz) may be equally sensible. (There seems no hurry to get it into ZFS)

LZX

(1995) Used in Amiga archives, Microsoft Cabinets, Microsoft Compressed HTML (.chm), Microsoft Ebook format (.lit), and one of the options in the WIM disk image format.

http://en.wikipedia.org/wiki/LZX


LZRW

Lempel-Ziv Ross Williams (1991), which itself has seven variants.

http://en.wikipedia.org/wiki/LZRW


LZJB

Letters stand for Jeff Bonwick. Derived from LZRW (specifically LZRW1)

Used in ZFS.

http://en.wikipedia.org/wiki/LZJB


ROLZ

Reduced Offset Lempel-Ziv

http://en.wikipedia.org/wiki/Reduced_Offset_Lempel_Ziv


zstd

zstd (Zstandard, extension may be .zst) is an LZ77-family compressor, aiming to be a fast compressor and in particular a fast decompressor, at zlib-like compression levels.


Very roughly:

at one end it has lzo / lz4 speeds, but compresses more
at the other end it has zlib/bzip2 compression (but faster than those)


Part of that speed seems to come from multicore support being a standard part, rather than an afterthought.

There are other footnotes, e.g. around compression and decompression memory use.


zstd makes sense within certain systems, because of the lower CPU use (or, trading that off, higher speed) at almost the same compression as zlib.



See also:

Combinations

zopfli, brotli

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

These are meant for the web.


Zopfli produces a standard DEFLATE stream (often then wrapped in zlib/gzip) but is an implementation that tends to squeeze out a little more compression than zlib/gzip at their max settings, by exhausting more alternatives -- which also makes it much slower.

It is certainly not for on-the-fly compression or one-time transfers; it is intended for things you compress once, then serve many many times, e.g. on the web for woff, PNG and such, while not changing these standards at all.


Brotli is built for fast on-the-fly text compression, and has dictionaries primed with some very common strings on the web. It tends to shave 20% more off for HTML, JS, and CSS.

Which is comparable to zlib/gzip/DEFLATE on its lighter settings, yet brotli is meant to decompress faster, and have low and well-bounded resource requirements (not unique features, but can be a useful combination for some purposes).(verify)


See also:

https://caniuse.com/#feat=brotli

Techniques

Burrows-Wheeler transform (BWT)

See Searching_algorithms#Burrows-Wheeler_Transform


Not compression in itself, just a way to transform a string - but one that tends to put similar patterns near each other, and one that is fully reversible. This makes the BWT useful as a step in compression algorithms, and it is indeed used: bzip2 is built around it.
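A naive sketch in Python, purely to show the idea (real implementations use suffix sorting rather than materializing every rotation):

 # Naive BWT sketch: sort all rotations of the input, keep the last column,
 # plus the index of the original rotation (needed to invert the transform).
 def bwt(data: bytes) -> tuple[bytes, int]:
     rotations = sorted(data[i:] + data[:i] for i in range(len(data)))
     last_column = bytes(rotation[-1] for rotation in rotations)
     return last_column, rotations.index(data)

For something like b'banana', the last column groups the repeated letters together, which is exactly what the later compression stages benefit from.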

Prediction by Partial Matching (PPM)

PPM is the concept, PPM(n) is PPMd


Dynamic Markov Compression (DMC)

Context tree weighting (CTW)

Context Mixing (CM)

Choices

High compression, who-cares-about-speed

Consider file-serving purposes you compress once, then serve millions of times.

You pay mostly for transfer, so it pays to try your hardest once to compress it.

This is a big argument behind pre-compressed zlib through HTTP - and specifically zopfli for your most-served stuff.


Similarly, archiving and backup spend CPU compressing exactly once, and mostly pay for long-term storage space (disk, or even just the physical rooms to put it in).


This means high compression ratios save the most cost, and you want to look at bzip2, 7z, LZMA and PPMd methods, maybe xz or ZPAQ, over things like gzip, lz4, zstd.

While these tend to not parallelize as brilliantly as some specialized variants, if you have cores to spare you may get more compression within the same timespan, which can be a no-brainer. See e.g. Compression_notes#Parallelized_variants.2C_and_speed_versus_compression for the kind of tradeoffs -- but test on your own data.

On CPU and memory difference under higher compression

The higher compression settings tend to mean "search more exhaustively within the given data to see if there is a slightly better solution".


This means the highest compression almost always means significantly lower compression speed - but often little to no difference in decompression speed.

But, under methods that involve lookup tables, it can make a real difference to the memory used in decompression and compression.


And with quickly diminishing returns in compression.

That is, the last bit of extra compression quickly stops being worth the extra CPU time and/or memory use.

Or responsiveness. Things like LZ4 have no parameters, which is good when you want guarantees like 'is always done in what seems like negligible time'.


It does mean lower compression, because they only really focus on redundancy within a shortish window, rather than the best possible solution for a large amount of data.

This is also part of why there is often more difference between compression methods, than there is within a method at different compression levels.


There is still an argument for cases where you will read/send/serve the result many times - it will eventually make up for the initial investment (or when storage is more expensive to your bottom line than CPU power, which should be basically never). This is why things like png crushers exist, the reason behind zopfli, and why you may wish to sometimes use xz or lzip for software distribution.

Generic tools may not even let you try this, e.g. bzip2 settings barely matter, and for good reason.

High speed, reasonable compression

(see above)

These can make sense in systems

  • where CPU is generally not the bottleneck, so spending a little CPU on light data compression is almost free
  • and/or that tend to write as much as they read (e.g. storage/database systems)

For example

  • for database and storage systems, since these are often relatively dedicated, and tend to have a core or two to spare
  • and/or when it lowers network transmission (bandwidth and interrupt load)


  • LZO (see above)
  • LZF (see above)
  • LZ4
(e.g. compresses similarly to e.g. FastLZ, QuickLZ in less time)
e.g. used in ZFS's transparent data compression
  • snappy (previously zippy) seems similar to LZ4
  • zstd and gipfeli seem to intentionally aim at more compression than LZ4/snappy, at speeds still higher than typical DEFLATE. (verify)


Note that some of these have the footnote "If you've got a core or two to spare."

Which on dedicated storage servers is typically true, while on workhorses might take a few percent off your general performance.


See also:

Lossy methods

Note that lossy (sometimes 'lossful') methods often employ lossless coding of some perceptual approximation.


Transparency is the quality of compressed data being functionally indistinguishable from the original - a lack of (noticeable) artifacts. Since this is a perceptual quality, this isn't always easy to quantify.

Ideally, lossy compression balances transparency and low size/bitrate, and some methods will search for the compression level at which a good balance happens.


Floating point compression

Floating point data, particularly when higher-dimensional, has a tendency to be large.

And also to have smooth, easily modelled patterns.


Very clean floating point data may compress well, even using completely general-purpose compression that doesn't know it's floating point data.

Noisier data less so, but methods that know it's dealing with floating point may do better.


Note that floating point compression can be essentially lossless.

There is often also the option for lossy compression. Conceptually, you can shave bits off the mantissa. The tradeoff may be worth it, particularly when you know the data was noisy to start with.
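For example, a sketch of that mantissa-shaving idea for float32 values in Python (the function name and the choice of 12 kept bits are arbitrary; the actual size win comes from running a lossless coder over the more repetitive result):

 import struct

 def truncate_mantissa(value: float, keep_bits: int = 12) -> float:
     # float32 has a 23-bit mantissa; zero all but the top keep_bits of it,
     # which makes the bit patterns more repetitive and so more compressible.
     raw = struct.unpack("<I", struct.pack("<f", value))[0]
     mask = (0xFFFFFFFF << (23 - keep_bits)) & 0xFFFFFFFF
     return struct.unpack("<f", struct.pack("<I", raw & mask))[0]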


Audio

Image

Video

Unsorted

Bi-level images

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

JBIG2

  • ITU T.88, Information technology - Coded representation of picture and audio information - Lossy/lossless coding of bi-level images

See also:


Note that DjVu's CJB2 is a variation of this format.


CCITT Group 3 and 4

This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

(Note CCITT is now ITU-T)

Primarily:

  • ITU-T T.4, Standardization of Group 3 facsimile terminals for document transmission
  • ITU-T T.6, Facsimile coding schemes and coding control functions for Group 4 facsimile apparatus

Published in 1988, parts of the fax standard were updated in 1992, 1993. Actual fax machines need to conform to a larger number of standards.

See also


containers and/or compression

Pack200

Pack200 is a compression scheme for Java (specified in JSR 200), which seems to specialize in informed/logical compression of bytecode (and bytecode within JARs).

Pack200 plus gzip compress these things better than gzip alone.

It's used with JWS, among other things


See also:

See also