Compression notes

These are primarily notes
It won't be complete in any sense.
It exists to contain fragments of useful information.

Stuff vaguely related to storing, hosting, and transferring files and media:

Utilities and file formats (rather than methods and algorithms)

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

General purpose

ZIP

Commonly used, mostly because it is so well known.


There have been a number of revisions and additions over time (see the version history), adding more underlying compression formats.

Of those compression formats, some are outdated and many are specialized, and there is optional encryption.

This also means that not every tool can open every ZIP file.

ZIP files created for compatibility therefore stick to DEFLATE, which has barely changed in twenty years.


Related formats:

  • Java:
    • JAR (Java Archive) is actually ZIP, with the addition of a few (optional) files, e.g. a manifest. [1]
    • WAR (Web Application aRchive) usually refers to the Sun format that is a JAR used to distribute Java webapps. WAR files have a number of basic files and directories that are expected to be present. [2]
    • EAR (Enterprise ARchive) also serves specific purposes. [3]
  • XPI: Firefox's plugins are a standardized set of files within a ZIP file [4]


compression methods

General purpose, but little reason to use these now because plain Deflate is usually better:

  • 0: Store (no compression)
  • 1: Shrink
  • 2: Reduce (compression factor 1)
  • 3: Reduce (compression factor 2)
  • 4: Reduce (compression factor 3)
  • 5: Reduce (compression factor 4)
  • 6: Implode

General purpose, more common:

  • 8: Deflate (the one that everything supports)
  • 9: Deflate64 ('Enhanced deflate')
  • 14: LZMA
  • 98: PPMd (since WinZip version 11(verify))
  • 12: bzip2 (since WinZip version 11(verify))

Media-specific compression methods:

  • 96: Compressed lossless JPEG
  • 97: WavPack

Other:

  • 10: Old IBM TERSE
  • 18: New IBM TERSE
  • 19: IBM LZ77 z
  • 7: "Reserved for Tokenizing compression algorithm" (?)

multi-part ZIP

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)


Multipart zip files were once used to get around the maximum size of a medium (e.g. floppy, CD) or a maximum supported/allowed file size (e.g. email attachments, FAT32).

(Note that multi-part ZIPs are now often created as Zip64 for large-file reasons(verify))


split zip

The parts are typically named .z01, .z02, etc., and the .zip file is the last part (not the first, as some assume).

These files are mostly just the compressed bytes split at arbitrary points, except that the header offsets are relative to the start of each part.

As such you can't just concatenate them together (though concatenating them properly, or fixing the offsets afterwards, is relatively simple).

Ideally your decompressor just knows about split multipart ZIPs. For example, on Linux it's easier to use 7z/7za than to mess with cat and Info-ZIP's -F / -FF.


spanned zip

Spanned zip is the same as the above in terms of file content; it differs only in naming: all parts have the same filename, but reside on different media.

For example, a set of floppies that all have game.zip on them, which are sequential parts known only via the labeling on the floppies.

Relatively rare now, in that it's impractical to keep all these files on one medium (you'd keep them in separate directories, or more probably rename them so you can handle them).

See also [6]



Other

7zip (when handling non-7z zips)

7zip doesn't create true multi-part zips; it simply splits the whole stream at arbitrary points (without altering the offsets) and names the pieces name.zip.001, name.zip.002, etc.

Extraction:

  • 7zip obviously understands what it made itself
  • for other tools, you can directly concatenate these files, which yields a correct single zip file
https://superuser.com/questions/602735/how-could-i-portably-split-large-backup-files-over-multiple-discs/602736#602736
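
Since the pieces are just the byte stream cut at arbitrary points, reassembly is plain concatenation. A minimal sketch (filenames follow the name.zip.001 pattern above; on a shell, cat name.zip.0* > name.zip does the same):

 import glob, shutil
 
 # name.zip.001, name.zip.002, ... -> name.zip (plain concatenation, in order)
 parts = sorted(glob.glob("name.zip.[0-9][0-9][0-9]"))
 with open("name.zip", "wb") as out:
     for part in parts:
         with open(part, "rb") as f:
             shutil.copyfileobj(f, out)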


ZIPX

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

From the WinZip team, based roughly on ZIP but different enough to be a separate format.

Not widely supported.

RAR

Compresses better than ZIP; also proprietary, but tools for it are comparably available, so it was used instead of ZIP in certain areas.

Uses LZ and PPM(d)

7z

The .7z archive format focuses on LZMA, which performs noticeably better than ZIP's classical DEFLATE, and regularly a little better than alternatives such as RAR, bz2, and gzip.


The 7zip interface uses plugins to support external formats. When you have multiple 7z commands, the difference is:

  • 7z: the variant you can tell to use plugins; can read/write .7z, .zip, .gz, .bz2, .tar files, and reads from .rar, .cab, .iso, .arj, .lzh, .chm, .Z, .cpio, .rpm, .deb, and .nsis
  • 7za: a standalone variant, which handles only .7z, .zip, .gz, .bz2, .tar, .lzma, .cab, .Z
  • 7zr: a lightweight standalone version that handles only .7z
  • 7zg: the GUI


On the Linux command line, usually just use 7z (some distributions package only 7za) and forget about the rest.


gzip

Based on DEFLATE, which is a combination of LZ77 and Huffman coding.

Things like WinRAR and 7zip can also open these.

Compression amount/speed tradeoff:

  • has -1 to -9 (and also --fast and --best, respectively 1 and 9)
  • -8 and -9 in particular seem to take a lot more time than the extra compression is worth.
  • default is -6, which seems a practical choice
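
If you are doing this from code rather than the command line, the level maps directly onto the gzip module's compresslevel - a small sketch (filenames are placeholders):

 import gzip, shutil
 
 # -1 .. -9 map to compresslevel=1..9; 6 is the default, as on the command line.
 with open("data.bin", "rb") as src, gzip.open("data.bin.gz", "wb", compresslevel=6) as dst:
     shutil.copyfileobj(src, dst)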

bzip2

Based on BWT and Huffman coding. Generally gives better compression than deflate (gzip, ZIP), and is a little more resource-intensive.

Things like WinRAR and 7zip can also open these.

Compression amount/speed tradeoff:

  • -9 is the default (verify)
  • compression seems to level off to sub-percent size differences at and above -7
  • Time taken seems to be fairly linear from -1 to -9.
    • ...so if you only care about having compression at all, you could just use -1 (it's perhaps twice the speed of -9)

This seems to mean there is no clear sweet spot (as there is with e.g. gzip), unless you specifically value time over drive space, or the other way around.
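
If you want to check where that tradeoff lies on your own data, a quick sweep is easy enough - a sketch using Python's bz2 module, assuming the data fits in memory:

 import bz2, time
 
 with open("data.bin", "rb") as f:      # placeholder filename
     data = f.read()
 
 for level in range(1, 10):
     t0 = time.perf_counter()
     size = len(bz2.compress(data, compresslevel=level))
     dt = time.perf_counter() - t0
     print(f"-{level}: {size / len(data):6.1%} of original, {dt:.2f}s")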

pax

https://en.wikipedia.org/wiki/Pax_(Unix)

xar

https://en.wikipedia.org/wiki/Xar_(archiver)


Further

lzip

LZMA compressor.

In terms of compression per time spent:

  • lzip -0 is comparable to gzip -6
  • lzip -3 is comparable to bzip2 -9 / xz -3
  • lzip -9 is comparable to xz -9(verify)


Apparently also seems a little better at error recovery(verify) [7]


lzip and xz are both LZMA family; xz is more recent, though there are arguments that its design is not ideal for an archiver, whereas lzip just does the one thing it does.


http://www.nongnu.org/lzip/lzip.html

Parallelized variants, and speed versus compression

tl;dr:

  • if you care more about speed: pigz -3
  • if you care more about space: lbzip2 -9
  • if you care even more about space, you're probably already an xz user and have no speed expectations :)


Parallelized variants:

  • for bzip2 there's lbzip2 and pbzip2
  • for gzip there's pigz
  • for lzip there's plzip
  • for xz there's pxz and pixz


Some details:

  • pigz speed: pigz -3 is ~50% faster than the default -6, at still-decent compression
  • pigz -11 is zopfli: a few percent better, but much slower
  • lbzip2:
    • speed barely changes with compression level, so you may as well use -9, which is also the default
    • (-1 even seems slightly slower than the rest. Yeah, weird)
    • more memory-hungry than the others, also at (many-thread) decompression
  • pigz versus lbzip2 - draw your own conclusions from the benchmarks below and from tests on your own typical data(!), but it seems:
    • lbzip2 compresses better at all settings (one test file: lbzip2: 24%, pigz: 35%)
    • pigz -3 is noticeably faster than lbzip2 at all settings ((verify))
    • pigz -7 (and higher) are slower than lbzip2
    • I've heard that lbzip2 scales better with cores, so on beefy dedicated machines it would then be faster; TODO: check
  • pbzip2 is slower than lbzip2 at higher compression, and similar in speed at lower compression
  • pxz and pixz are slower than the above at higher compression (...what did you expect?)
    • (more comparable at lower compression - have not checked to what degree)


Quick and dirty benchmarks:

(on tmpfs, so negligible I/O) on a 218 MB, fairly compressible file of floating-point data

On an AMD FX6300:

  • pigz -1 compresses it to 80MB (36%) in 1.0sec
  • pigz -2 compresses it to 77MB in 1.1sec
  • pigz -3 compresses it to 76MB in 1.3sec
  • pigz -4 compresses it to 75MB in 1.4sec
  • pigz -5 compresses it to 75MB in 1.8sec
  • pigz -6 compresses it to 75MB in 2.2sec
  • pigz -7 compresses it to 75MB in 2.4sec
  • pigz -8 compresses it to 75MB in 4.2sec
  • pigz -9 compresses it to 75MB (34%) in 11.5sec
  • lbzip2 -1 compresses it to 55MB (25%) in 3.0 sec (yes, slower than -6!)
  • lbzip2 -2 compresses it to 51MB in 2.7 sec
  • lbzip2 -6 compresses it to 48MB in 2.7 sec
  • lbzip2 -9 compresses it to 46MB (21%) in 2.8sec
  • pbzip2 -6 compresses it to 48MB in 3.3sec


On a dual Xeon E5645

  • pigz -1 took 0.58sec
  • pigz -3 took 0.7sec
  • pigz -6 took 1.0sec
  • pigz -8 took 2.0sec
  • pigz -9 took 6.2sec
  • lbzip2 -1 took 1.35sec (seemed less pronounced than above)
  • lbzip2 -2 took 1.3sec
  • lbzip2 -6 took 1.4sec
  • lbzip2 -9 took 1.4sec
  • pbzip2 -6 took 1.7sec

So they all seem to scale roughly linearly (going by passmark score)
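
For intuition about what these parallel tools do: roughly, compress independent blocks concurrently and join the results. A toy sketch of that idea using gzip members (the real tools are smarter, e.g. pigz primes each block with preceding data, so treat this purely as an illustration):

 import gzip
 from concurrent.futures import ThreadPoolExecutor
 
 def parallel_gzip(data: bytes, chunk_size: int = 1 << 20, level: int = 6) -> bytes:
     """Compress fixed-size chunks in parallel and concatenate the gzip members."""
     chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
     with ThreadPoolExecutor() as pool:            # zlib releases the GIL, so threads do help
         members = pool.map(lambda c: gzip.compress(c, compresslevel=level), chunks)
     return b"".join(members)
 
 # A concatenation of gzip members is itself a valid gzip stream;
 # gzip.decompress() and the gzip CLI both handle it transparently.
 payload = b"example data " * 100_000
 assert gzip.decompress(parallel_gzip(payload)) == payload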

Older or less used

ARC

ACE

Seen in the late nineties, early noughties. Similar to RAR in that it outperforms ZIP, but RAR is more popular.


ARJ

Seen in the nineties, rarely used now.

Had useful multi-file handling before ZIP really did, so it saw use for distributing software, e.g. on BBSes.

There was a successor named JAR, not to be confused with the Java-related JAR archives.

LHA/LZH (LHarc)

Used on the Amiga and for some software releases.

Now rarely used in the west, but still used in Japan, and there is an LZH compressed folder extension for WinXP (analogous to zipped folder support).

There was a successor named LHx, then LH.

Less practical nerdery

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

There is a whole set of compression types that are exercises in getting very high compression at the cost of a lot of time.

They are often barely usable as command line utilities, let alone easy to use.

These include:

...and many others.

Unsorted

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)


Image

Lossless methods

Entropy coding (minimum redundancy coding)

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

http://en.wikipedia.org/wiki/Entropy_coding

Mostly per-symbol coding - where a symbol is a logical part of the input, often a fixed-size unit, e.g. a character or byte in a file, a pixel of an image, etc.

Probably the most common style of symbol coding, contrasted with variable-run coding, which is often dictionary coding.


Modified code

Huffman
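
In brief: count symbol frequencies, then repeatedly merge the two least frequent subtrees, prepending one more bit to the codes of everything inside them. A minimal sketch of building the code table (illustration only, not an efficient coder):

 import heapq
 from collections import Counter
 
 def huffman_codes(data: bytes) -> dict:
     """Build a prefix code; frequent symbols end up with short bit strings."""
     counts = Counter(data)
     codes = {sym: "" for sym in counts}
     # heap entries: (count, tie-breaker, list of symbols in this subtree)
     heap = [(count, i, [sym]) for i, (sym, count) in enumerate(counts.items())]
     heapq.heapify(heap)
     while len(heap) > 1:
         c1, _, syms1 = heapq.heappop(heap)        # merge the two least frequent subtrees,
         c2, _, syms2 = heapq.heappop(heap)        # prepending one more bit to their codes
         for s in syms1:
             codes[s] = "0" + codes[s]
         for s in syms2:
             codes[s] = "1" + codes[s]
         heapq.heappush(heap, (c1 + c2, len(counts) + len(heap), syms1 + syms2))
     return codes
 
 print(huffman_codes(b"abracadabra"))              # 'a' (most frequent) gets the shortest code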

See also:

Adaptive Huffman (a.k.a. Dynamic Huffman)

See also:

Shannon-Fano

Similar to Huffman in concept and complexity.

A little easier to implement, so a nicer programming exercise, yet Huffman typically performs better.


See also:

Elias, a.k.a. Shannon-Fano-Elias

See also:

Golomb coding

http://en.wikipedia.org/wiki/Golomb_coding

Rice coding

https://en.wikipedia.org/wiki/Golomb_coding#Rice_coding

universal codes

In itself mostly a property of codes rather than a specific method. https://en.wikipedia.org/wiki/Universal_code_(data_compression)

Fibonacci coding

http://en.wikipedia.org/wiki/Fibonacci_coding


Elias gamma coding

https://en.wikipedia.org/wiki/Elias_gamma_coding

Exp-Golomb (Exponential-Golomb)

http://en.wikipedia.org/wiki/Exponential-Golomb_coding

Range encoding

http://en.wikipedia.org/wiki/Range_encoding

Arithmetic coding

https://en.wikipedia.org/wiki/Arithmetic_coding


Dictionary coding

Dictionary coders work by building up a dictionary of "this short code means this longer data".

Most of the work is in making a dictionary that is efficient for a given set of data.
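
A toy sketch of that idea in the style of LZW (described further down): the dictionary starts with all single bytes and learns every new sequence it sees, so repeated strings become single codes. Illustration only:

 def lzw_compress(data: bytes) -> list:
     """Emit a list of dictionary codes; the dictionary is built as we go."""
     dictionary = {bytes([i]): i for i in range(256)}
     current, out = b"", []
     for byte in data:
         candidate = current + bytes([byte])
         if candidate in dictionary:
             current = candidate                      # keep extending the match
         else:
             out.append(dictionary[current])          # emit code for the longest known prefix
             dictionary[candidate] = len(dictionary)  # learn the new sequence
             current = bytes([byte])
     if current:
         out.append(dictionary[current])
     return out
 
 print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))     # repeats turn into single codes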


Deflate/flate

Primarily combines LZ77 and Huffman coding.

Used in ZIP, zlib, gzip, PNG, and others.

See also RFC 1951

Modern compressors may still produce standard deflate streams for compatibility. For example, ZIP's deflate is still the most ubiquitously supported coding in ZIP files, and #zopfli can be used as a drop-in in gzip (and in theory zip).


(ZIP also uses Deflate64 (see its #compression_methods), which is a relatively minor variation)
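
The relationship between those containers is easy to see from Python's zlib module: the same Deflate bytes can be emitted raw (as ZIP stores them), with a zlib header, or inside a gzip wrapper. A small sketch:

 import zlib, gzip
 
 data = b"the same deflate data, three different wrappers " * 100
 
 # Raw Deflate, no header or checksum - this is how ZIP stores entry data.
 co = zlib.compressobj(6, zlib.DEFLATED, -15)      # negative wbits = raw stream
 raw_deflate = co.compress(data) + co.flush()
 
 zlib_wrapped = zlib.compress(data, 6)             # 2-byte header + Adler-32 trailer
 gzip_wrapped = gzip.compress(data, 6)             # gzip header + CRC-32/length trailer
 
 print(len(raw_deflate), len(zlib_wrapped), len(gzip_wrapped))   # differ mainly by wrapper overhead
 assert zlib.decompressobj(-15).decompress(raw_deflate) == data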

LZ family and variants

Refers to an approach that Abraham Lempel and Jacob Ziv used, and that various others have built on since.


LZ77 and LZ78 (a.k.a. LZ1 and LZ2)

Refer to the original algorithms described in publications by Lempel and Ziv in 1977 and 1978.

http://en.wikipedia.org/wiki/LZ77_and_LZ78


LZW

Short for Lempel, Ziv, Welch, refers to an extension of LZ78 described in 1984 by Terry Welch.


Used in various places, including GIF, TIFF, Unix compress, and others.

Adoption was limited because implementations were encumbered by patents from 1983 to 2003 (or 2004, depending on where you lived).


See also:

LZSS

Lempel-Ziv-Storer-Szymanski (1982)

Used by PKZip, ARJ, ZOO, LHarc

http://en.wikipedia.org/wiki/Lempel-Ziv-Storer-Szymanski

LZMA

Lempel-Ziv-Markov chain Algorithm (since 1998)

Still much like LZ77/deflate, but with smarter dictionary building(verify).

Default method in 7Zip. An optional method in some new versions of the ZIP format.

http://en.wikipedia.org/wiki/LZMA

LZMW

LZW variant (letters stand for Miller, Wegman)

http://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Welch#Variants

LZAP

LZW variant (letters stand for all prefixes)

http://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Welch#Variants

LZWL

Syllable-based LZW variant

http://en.wikipedia.org/wiki/LZWL

LZO

Lempel-Ziv-Oberhumer

Built for very fast decompression (and can do it in-place(verify)).


Compression speed is more of a tradeoff, though it is typically used at lower compression settings, to e.g. get ~half the compression ratio of gzip at ~four times the speed, because usually the point is a CPU-cheap way to store more data / move it faster.


You can also make it work harder, to get compression ratio and time quite comparable to gzip -- while retaining the high-speed, low-resource decompression.


https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Oberhumer

LZF

Largely comparable to LZO, but designed for more constrained memory use (verify)

LZ4

Basically a variant of LZO - fast to decompress, and also prefers compression speed over compression ratio (by not being as exhaustive as typical DEFLATE style compression).

You can think of it as a "what compression can I get for CPU-cheap?", because it can compress at (order of magnitude) hundreds of MByte/s per modern core.

It is notably used in ZFS, where it is also made to (quickly) detect when no compression should be used at all.



https://en.wikipedia.org/wiki/LZ4_(compression_algorithm)

LZ4HC

Variant of LZ4

  • same cheap decompression speed
  • typically 20% better compression, but at 5x to 10x slower compression speeds
  • no fast decision to not compress (verify), so e.g. in ZFS you generally don't want it over LZ4

...i.e. it's meant to save cost for archiving and backups -- where explicit compression (e.g. throwing it at bzip2 or xz) may be equally sensible. (There seems to be no hurry to get it into ZFS.)

LZX

(1995) Used in Amiga archives, Microsoft Cabinets, Microsoft Compressed HTML (.chm), Microsoft Ebook format (.lit), and one of the options in the WIM disk image format.

http://en.wikipedia.org/wiki/LZX

LZRW

Lempel-Ziv Ross Williams (1991), which itself has seven variants.

http://en.wikipedia.org/wiki/LZRW

LZJB

Letters stand for Jeff Bonwick. Derived from LZRW (specifically LZRW1)

Used in ZFS.

http://en.wikipedia.org/wiki/LZJB

ROLZ

Reduced Offset Lempel-Ziv

http://en.wikipedia.org/wiki/Reduced_Offset_Lempel_Ziv


zstd

zstd (Zstandard) is an LZ77-family compressor, aiming to be a fast compressor and in particular a fast decompressor, at zlib-like compression ratios.


Very roughly:

  • at one end it has lzo / lz4 speeds, but compresses more
  • at the other end it has zlib/bzip2 compression (but faster than those)


Note that part of that speed comes from multicore support being standard, rather than an afterthought from another project - though such parallel compressors do exist for gzip and bzip2 (and narrow the difference at the latter end).

There are other footnotes, e.g. compression and decompression memory use.


zstd makes sense within certain systems because of the lower CPU use (or, traded off against that, higher speed) at almost the same compression as zlib.
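
For completeness, basic use from code, assuming the third-party zstandard Python package (level 3 is that library's default):

 import zstandard as zstd        # third-party: pip install zstandard
 
 data = b"some example payload " * 10_000
 
 compressed = zstd.ZstdCompressor(level=3).compress(data)
 assert zstd.ZstdDecompressor().decompress(compressed) == data
 print(len(data), "->", len(compressed))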



See also:

Combinations

zopfli, brotli

This article/section is a stub — probably a pile of half-sorted notes, is not well-checked so may have incorrect bits. (Feel free to ignore, fix, or tell me)

These are meant for the web.


zopfli produces a standard DEFLATE stream (often then wrapped in zlib or gzip) that tends to squeeze out a little more compression than zlib/gzip at their max settings, just by exhausting more alternatives - which also makes it much slower.

It is intended for things you compress once, then serve many times, e.g. on the web for woff, PNG and such. Certainly not for on-the-fly compression or one-time transfers.


Brotli is built for fast on-the-fly text compression, and is tuned with dictionaries that do better on some of the more predictable text on the web. It tends to shave ~20% off HTML, JS, and CSS.

That is comparable to zlib/gzip/DEFLATE on its lighter settings, except brotli is meant to decompress faster and have well-bounded resource requirements (not unique features, but a useful combination for some purposes).(verify)


See also:

https://caniuse.com/#feat=brotli

Techniques

Burrows-Wheeler transform (BWT)

See Searching_algorithms#Burrows-Wheeler_Transform


Because it is reversible and puts similar patterns near each other, the BWT makes sense as a step in compression algorithms, and is indeed used that way: bzip2 is built around it.
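
A toy illustration (naive and quadratic - real implementations use suffix arrays): form all rotations, sort them, take the last column. Notice how repeated contexts group the same characters together:

 def bwt(s: bytes) -> bytes:
     """Naive Burrows-Wheeler transform: last column of the sorted rotations."""
     s = s + b"\x00"                      # unique end marker so the transform is invertible
     rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
     return bytes(rot[-1] for rot in rotations)
 
 print(bwt(b"banana"))   # similar characters end up adjacent, which helps later stages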

Prediction by Partial Matching (PPM)

PPM is the general concept; PPMd is a well-known variant/implementation (seen above as an optional method in ZIP and RAR).


Dynamic Markov Compression (DMC)

Context tree weighting (CTW)

Context Mixing (CM)

Choices

High compression, who-cares-about-speed

Consider file-serving purposes where you compress once, then serve millions of times. You pay mostly for transfer.

This is a big argument behind pre-compressed zlib content over HTTP - and specifically zopfli for your most-served stuff.


And, importantly, archives, where you spend CPU exactly once but mostly pay for long-term storage space (disk, or even just physical rooms).

This means high compression ratios save the most cost, and you want to look at bzip2, 7z, LZMA and PPMd methods, maybe xz or ZPAQ, over things like gzip, lz4, zstd.

While these tend to not parallelize as brilliantly as some specialized variants, if you have cores to spare you may get more compression within the same timespan, which can be a no-brainer. See e.g. Compression_notes#Parallelized_variants.2C_and_speed_versus_compression for the kind of tradeoffs -- but test on your own data.

(and highest compression almost always means low speed)

The highest compression settings tend to mean "search more exhaustively within the given data", and that tends to require significantly more time.


More so because they tend to work on a shortish window by default, and can be told to look at more data for redundancy. Doing that typically yields more compression, but it's a diminishing-returns deal, and it also tends to mean increasing RAM requirements for compression and decompression.

So after a while, it's just not worth it for general use.

This is also one reason there is often more difference between methods, than there is difference within a method at different compression levels.


There is still an argument for cases where you will read/send/serve the result many times - it will eventually make up for the initial investment (or when storage is more expensive to your bottom line than CPU power, which should be basically never). This is why things like PNG crushers exist, the reason behind zopfli, and why you may sometimes wish to use xz or lzip for software distribution.

Generic tools may not even let you try this, e.g. bzip2 settings barely matter, and for good reason.
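
xz/LZMA is one that does let you; as a concrete handle on that window/dictionary-size knob, a sketch using Python's lzma module (the sizes and filename are arbitrary examples):

 import lzma
 
 with open("data.bin", "rb") as f:       # placeholder filename
     data = f.read()
 
 for dict_size in (1 << 20, 1 << 26):    # 1 MiB versus 64 MiB dictionary
     filters = [{"id": lzma.FILTER_LZMA2, "preset": 6, "dict_size": dict_size}]
     out = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)
     print(f"dict_size {dict_size >> 20} MiB: {len(out)} bytes")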

High speed, reasonable compression

(see above)

These can make sense in systems:

  • where CPU is generally not the bottleneck, so spending a little CPU on light data compression is almost free
  • and/or which tend to write as much as they read (e.g. storage/database systems)

This e.g. makes sense:

  • for database and storage systems, since these are often relatively dedicated, and tend to have a core or two to spare
  • and/or when it lowers network transmission (bandwidth and interrupt load)


  • LZO (see above)
  • LZF (see above)
  • LZ4, e.g. used in ZFS's transparent data compression
    (compresses similarly to FastLZ and QuickLZ, in less time)
  • snappy (previously zippy) seems similar to LZ4
  • zstd and gipfeli seem to intentionally aim at more compression than LZ4/snappy, at speeds still higher than typical DEFLATE. (verify)


Note that some of these have the footnote "if you've got a core or two to spare" - which on dedicated storage servers is typically true, while on workhorses it might take a few percent off your general performance.


See also:

Lossy methods

Note that lossy (sometimes 'lossful') methods often employ lossless coding of some perceptual approximation.


Transparency is the quality of compressed data being functionally indistinguishable from the original - a lack of (noticeable) artifacts. Since this is a perceptual quality, this isn't always easy to quantify.

Ideally, lossy compression balances transparency against low size/bitrate, and some methods will search for the compression level at which a good balance happens.


Floating point compression

Floating point data, particularly when higher-dimensional, has a tendency to be large.

And also to have smooth, easily modelled patterns.

Very clean floating point data may compress well with completely general-purpose compression anyway, but compresses better with something that knows it's floating point rather than just patterns in bytes.


Note that floating point compression can be essentially lossless, but is probably more often applied when a little loss is worth the space saved, particularly when you know the data was noisy to start with. Conceptually, you can shave bits off the mantissa.
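
A minimal sketch of that bit-shaving idea, assuming NumPy: zero the low mantissa bits of float32 values, so the data stays valid floats but becomes much more repetitive and hence more compressible (the number of dropped bits is a made-up example):

 import zlib
 import numpy as np
 
 def truncate_mantissa(a, drop_bits=13):
     """Zero the lowest mantissa bits of float32 values (lossy, bounded relative error)."""
     bits = np.asarray(a, dtype=np.float32).copy().view(np.uint32)
     mask = (0xFFFFFFFF >> drop_bits) << drop_bits    # keep sign, exponent, top mantissa bits
     bits &= np.uint32(mask)
     return bits.view(np.float32)
 
 noisy = np.random.default_rng(0).normal(size=100_000).astype(np.float32)
 rounded = truncate_mantissa(noisy, 13)               # roughly 3 significant decimal digits left
 
 print(len(zlib.compress(noisy.tobytes())), len(zlib.compress(rounded.tobytes())))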


Audio

Image

Video

Unsorted

Pack200

Pack200 is a compression scheme for Java (specified in JSR 200), which specializes in informed/logical compression of bytecode (and of bytecode within JARs).

Pack200 plus gzip compresses these things better than gzip alone.

It's used with Java Web Start (JWS), among other things.


See also:


See also