Introduction
RasterLite2 supports many alternative image codecs, each one presenting its own specific characteristics. A good understanding of the basic features offered by each codec will surely help you in using RasterLite2 at its best. What follows is simply a very quick presentation mainly focused on practical aspects; if you are interested in more rigorous and scientific information you can easily find many exhaustive explanations by searching the Web.
Codec families
- the simplest codec you can imagine is NONE: all pixels are directly stored exactly as they are without applying any compression.
This approach will obviously require a higher storage amount than any other method, but it surely ensures two positive results:
- fast and very effective processing, due to the complete absence of any extra overhead imposed by compression algorithms.
- absence of any possible artifact; all pixels are always preserved exactly as they are.
- a more sophisticated approach is the one supported by all compressors of the lossless type:
- any lossless compressor is theoretically required to apply a perfectly reversible transformation.
i.e. after performing a full compress/decompress cycle the original input values must be exactly restored, without any information loss or alteration.
- some interesting reduction of the required storage amount is always ensured.
Typically you can expect an ideal compression ratio of about 1:2 (i.e. half of the original uncompressed size); however you have to take this value with a pinch of salt.
The actual compression ratio will always depend both on the internal statistical distribution of pixel values and on the adopted algorithm; be ready to face a wide range of variability.
- different compression algorithms have different performances, both in terms of compression ratio and in terms of processing time.
As a general rule, any compression algorithm ensuring a stronger compression will require a longer time to be computed.
- and finally we have several compressors of the lossy type:
- any lossy compression will always imply some kind of irreversible information loss.
After performing a full compress/decompress cycle you'll never get back the original pixels; you'll get instead a more or less severely degraded image.
- lossy compressors are smartly designed according to the average capabilities of the human eye.
The degraded image returned by a good quality lossy compressor will apparently look exactly the same as the original one to a human observer.
Anyway a more rigorous computational approach will certainly detect that many pixels have irreversibly changed their values.
- all compressors of the lossy type support variable (and freely selectable) compression ratios:
- very strong compression ratios will always introduce a severe image degradation.
- mild compression ratios will introduce only a moderate image degradation.
- all lossy compressors are always expected to ensure very strong compression ratios; surely better than the ones expected from lossless compressors.
- also in the case of lossy compressors, different algorithms have different performances. Some codecs are incredibly fast, others are really slow.
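To make the lossless/lossy distinction more concrete, here is a minimal Python sketch; it is not part of RasterLite2, and it assumes NumPy and the Pillow imaging library are available purely for illustration. It contrasts a lossless DEFLATE round trip with a lossy JPEG round trip on a small synthetic tile:

```python
import io
import zlib
import numpy as np
from PIL import Image  # Pillow, assumed available purely for this illustration

# Small synthetic RGB tile: a smooth gradient plus a little noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 255, 256)
r, g = np.meshgrid(x, x)
gradient = np.dstack((r, g, (r + g) / 2))
tile = np.clip(gradient + rng.normal(0, 3, gradient.shape), 0, 255).astype(np.uint8)
raw = tile.tobytes()

# Lossless (DEFLATE): the round trip restores every pixel exactly.
packed = zlib.compress(raw, 9)
assert zlib.decompress(packed) == raw
print(f"lossless DEFLATE: {len(raw)} -> {len(packed)} bytes, pixels unchanged")

# Lossy (JPEG): the round trip alters pixel values, even if the image
# still looks much the same to a human observer.
buf = io.BytesIO()
Image.fromarray(tile).save(buf, format="JPEG", quality=75)
buf.seek(0)
decoded = np.asarray(Image.open(buf))
changed = np.count_nonzero(decoded != tile)
print(f"lossy JPEG: {len(raw)} -> {buf.getbuffer().nbytes} bytes, "
      f"{changed} of {tile.size} samples altered")
```

The exact byte counts will vary with the data, but the assertion for the lossless path always holds, while the lossy path always reports some altered samples.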
Lossless codecs
the DEFLATE codec
Jack of all trades, master of none, certainly better than a master of one.
This codec is based on the same algorithm used by the very popular zip and gzip compression utilities. RasterLite2's own implementation always applies a Delta Filter so as to ensure an optimal compression ratio:
- Pixel values are never directly encoded as they are, but as differences (aka deltas) between consecutive pixel pairs. The opposite transformation is applied while decompressing.
- This way the internal statistical variance will be noticeably flattened, and this will usually ensure a more effective compression.
(the very same technique is usually referred to in the TIFF and GDAL documentation as predictor=2; a small sketch of the idea follows the PROS/CONS list below).
- PROS
- it's very flexible: it supports all possible combinations of Sample and Pixel type.
- it's very fast: it's actually the fastest of all lossless compressors.
It imposes only a mild compression overhead, and it has an incredibly low impact while decompressing.
- CONS
- in pure compression ratio terms many other lossless compressors could easily outperform DEFLATE.
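The Delta Filter mentioned above is easy to emulate. The following Python sketch (an illustration assuming NumPy; it is not RasterLite2's actual code) applies the predictor=2 idea to a single row of samples and compares the DEFLATE output sizes with and without the filter:

```python
import zlib
import numpy as np

def delta_encode(row: np.ndarray) -> np.ndarray:
    # Keep the first sample, then store each sample as the difference
    # from its left neighbour (unsigned arithmetic simply wraps around).
    out = row.copy()
    out[1:] = row[1:] - row[:-1]
    return out

def delta_decode(row: np.ndarray) -> np.ndarray:
    # The opposite transformation: a running sum restores the originals.
    return np.cumsum(row, dtype=row.dtype)

# A slowly varying, slightly noisy row: typical of real raster data.
rng = np.random.default_rng(0)
row = np.clip(np.linspace(0, 255, 65536) + rng.normal(0, 2, 65536), 0, 255).astype(np.uint8)

plain = len(zlib.compress(row.tobytes(), 9))
filtered = len(zlib.compress(delta_encode(row).tobytes(), 9))
print(f"DEFLATE without delta filter: {plain} bytes")
print(f"DEFLATE with delta filter:    {filtered} bytes")

# The filter itself is perfectly reversible, so losslessness is preserved.
assert np.array_equal(delta_decode(delta_encode(row)), row)
```

On smooth data the deltas cluster around small values, which usually gives DEFLATE better statistics to work with; on noisy or already random-looking data the filter may help little or not at all.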
the DEFLATE_NO codec
Exactly the same as DEFLATE, except that it never applies a Delta Filter.
the PNG codec
This codec is based on the popular PNG image format, which is itself built on DEFLATE supplemented by several filters explicitly intended for images.
- PROS
- it's reasonably flexible: it supports many combinations of Sample and Pixel type (including MONOCHROME and PALETTE).
- CONS
- it's unable to support 32 bit and FLOAT/DOUBLE samples, and it's restricted to 1, 3 or 4 bands.
- REMARKS
- compression ratios are usually very close to the ones achieved by DEFLATE: sometimes DEFLATE performs better than PNG, other times it's the opposite.
Anyway PNG consistently shows worse performance than DEFLATE in pure speed terms.
the LZMA codec
This codec is based on the same algorithm used by the very popular 7-zip compression utility. RasterLite2's own implementation always applies a Delta Filter so as to ensure an optimal compression ratio (exactly the same one adopted by DEFLATE).
- PROS
- it's very flexible: it supports all possible combinations of Sample and Pixel type.
- it always ensures higher compression ratios than DEFLATE.
Anyway in many common cases the difference isn't such a striking one.
- CONS
- it's painfully slow: about 4-5 times slower than DEFLATE while compressing, and about 2 times slower while decompressing.
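To get a rough feel for this ratio/time trade-off you can compare Python's own zlib and lzma modules on the same synthetic band; the sketch below is purely illustrative (it does not use RasterLite2), and the figures it prints will vary widely with the dataset:

```python
import time
import zlib
import lzma
import numpy as np

# A synthetic 16-bit band (a DEM-like smooth surface) of 512x512 samples.
rng = np.random.default_rng(1)
band = (np.cumsum(rng.normal(0, 3, (512, 512)), axis=1) + 1000).astype(np.uint16)
raw = band.tobytes()

for name, compress in (("DEFLATE", lambda d: zlib.compress(d, 9)),
                       ("LZMA",    lambda d: lzma.compress(d, preset=9))):
    start = time.perf_counter()
    packed = compress(raw)
    elapsed = time.perf_counter() - start
    ratio = len(raw) / len(packed)
    print(f"{name:7s}: {len(packed):7d} bytes  (1:{ratio:.1f})  in {elapsed:.2f} s")
```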
the LZMA_NO codec
Exactly the same as LZMA, except that it never applies a Delta Filter.
the FAX4 codec
This codec is based on an ultra-specialized compressor originally designed for FAX machines and only supporting 1-bit MONOCHROME pixels.
- PROS
- it's really fast.
- it always ensures higher compression ratios than PNG (the unique alternative codec accepting 1-bit pixels).
- CONS
- not at all flexible; it's strictly intended for 1-bit images.
the LL_WEBP codec
This codec is based on the lossless flavor of the WebP compressor.
- PROS
- in pure compression ratio terms it seems to be the most powerful lossless compressor available today.
- CONS
- it has a very limited field of application; only 1, 3 or 4 bands UINT8 samples are supported.
- it's painfully slow. I mean, more exactly, deadly slow ... a real slowcoach.
It's terribly slow while compressing; it shows better performance while decompressing, but it still remains one of the worst performers in speed terms.
the LL_JP2 codec
This codec is based on the lossless flavor of the Jpeg2000 compressor as implemented by the OpenJpeg open source library.
Caveat: the OpenJpeg library has a very long development history (about 10 years long), but only in very recent times has it finally become practically usable.
Earlier versions were bug-ridden, terribly slow and not at all usable. Using the very recent versions 2.0 or 2.1 is a strict requirement.
- PROS
- it's reasonably flexible: it supports 1, 3 or 4 bands, and it accepts both UINT8 and UINT16 samples.
- CONS
- not completely flexible; there are many unsupported Sample / Pixel combinations.
- it's painfully slow. Even worse than WebP, if possible.
It's terribly slow while compressing, more or less as WebP is; but unlike WebP it's terribly slow even while decompressing.
- its compression ratios are usually interesting, but it's regularly outperformed by some other compressor.
- last but not least: it's not genuinely lossless.
Caveat: Jpeg2000 lossless isn't exactly lossless!
Lossy codecs
the JPEG codec
This codec is based on the very popular JPEG image format.
- PROS
- in terms of pure speed it's the fastest lossy compressor available today.
It's so fast that it actually outperforms the NONE codec, due to a very favourable trade-off between I/O sizes and computational load.
- at moderate compression ratios (anyway ensuring a noticeable compression) it offers a very good visual quality.
- CONS
- it has a very limited field of application; only 1 or 3 bands UINT8 samples are supported.
- at the highest compression ratios it visibly suffers from conspicuous and annoying artifacts. Both WebP and Jpeg2000 are better in this very specific field.
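A quick way to visualise the quality/size trade-off is to sweep the JPEG quality setting over a synthetic tile. This sketch uses the Pillow library purely as an illustration (it is not the RasterLite2 API, and the quality values are arbitrary examples):

```python
import io
import numpy as np
from PIL import Image  # Pillow, assumed available purely for illustration

# A natural-looking synthetic RGB tile: smooth trends plus mild noise.
rng = np.random.default_rng(7)
base = np.cumsum(rng.normal(0, 1, (256, 256, 3)), axis=0)
base -= base.min()
tile = (base * 255.0 / max(base.max(), 1e-9)).astype(np.uint8)

for quality in (95, 75, 40, 10):
    buf = io.BytesIO()
    Image.fromarray(tile).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    decoded = np.asarray(Image.open(buf), dtype=np.int16)
    mean_err = np.abs(decoded - tile.astype(np.int16)).mean()
    print(f"quality={quality:3d}  size={buf.getbuffer().nbytes:6d} bytes  "
          f"mean per-sample error={mean_err:.2f}")
```

As the quality setting drops, the encoded size shrinks while the average per-sample error grows; at the lowest settings the blocky DCT artifacts also become plainly visible.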
the WEBP codec
This codec is based on the lossy flavor of the WebP compressor.
- PROS
- it supports a better visual quality than JPEG at highest compression ratios.
- at moderate compression ratios it requires a smaller storage amount than JPEG.
- CONS
- it has a limited field of application; only 1, 3 or 4 bands UINT8 samples are supported.
- it's really slow while compressing, and it surely doesn't show brilliant performance while decompressing.
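For a rough size comparison you can encode the same tile with both codecs, again using Pillow (built with WebP support) purely as an illustration; note that nominal quality values are not strictly comparable across codecs, so the numbers only give a general feel:

```python
import io
import numpy as np
from PIL import Image  # Pillow built with WebP support is assumed

# The same kind of synthetic tile as above, encoded with both codecs.
rng = np.random.default_rng(3)
base = np.cumsum(rng.normal(0, 1, (256, 256, 3)), axis=0)
base -= base.min()
tile = (base * 255.0 / max(base.max(), 1e-9)).astype(np.uint8)

for fmt in ("JPEG", "WEBP"):
    buf = io.BytesIO()
    Image.fromarray(tile).save(buf, format=fmt, quality=60)
    print(f"{fmt:4s} at nominal quality 60: {buf.getbuffer().nbytes} bytes")
```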
the JP2 codec
This codec is based on the lossy flavor of the Jpeg2000 compressor.
- PROS
- it supports a better visual quality than JPEG at highest compression ratios.
- at moderate compression ratios it requires a smaller storage amount than JPEG.
- it has a fairly good field of application; 1, 3 or 4 bands for both UINT8 and UINT16 samples are supported.
- CONS
- it's really slow while compressing, and it's terribly slow even while decompressing.
Conclusions
lossless codecs
- DEFLATE is always a valid option: it offers fairly good compression ratios (never excellent anyway) but it's really fast.
It can be applied to many different Pixel / Sample types; certainly an honest, although unsophisticated, multipurpose compressor.
- PNG is more or less the same as DEFLATE (slightly slower): anyway it's the only codec supporting pixels of the PALETTE type.
- FAX4 is an excellent specialized compressor supporting only 1-bit samples.
- LZMA has exactly the same field of applicability as DEFLATE; it always offers higher compression ratios, but at the cost of longer processing times.
- both the DEFLATE_NO and LZMA_NO codecs are mainly intended for research/testing purposes, and are never expected to reach interesting compression ratios (except maybe with a few extravagant datasets presenting some bizarre pixel distribution).
- LL_WEBP usually offers very good compression ratios, but it only supports a very limited range of Pixel / Sample combinations.
It always imposes poor performance; to be applied very judiciously, and only in very specific cases.
- LL_JP2 only has disadvantages: it seems unable to offer very good compression ratios, and it's deadly slow. Last but not least, it's only imperfectly lossless.
lossy codecs
- JPEG is the most obvious best choice in many cases.
It's astonishingly fast (most notably in its turbo-Jpeg implementation) and it supports a very good visual quality, except at the most extreme compression ratios.
- WEBP doesn't seem to be really interesting except at the highest compression ratios. Its painful slowness is a severe handicap.
- more or less the same is true for JPEG2000; its terrible slowness obliterates any possible interesting feature.
Anyway Jpeg2000 could be an interesting option in the case of UINT16 samples.