
Data Compression/Lossy vs. Non-Lossy Compression

Lossy Compression vs Non-Lossy Compression


Lossy image compression and lossy video compression (such as JPEG compression, MPEG compression, and fractal image compression) give much better reduction in size (a much higher compression ratio) than we find in almost any other area of data compression.

Lossy compression is best used to reduce the size of video data, where defects in the picture can be hidden as long as the general structure of the picture remains intact. This type of compression is called lossy because part of the reason it compresses so well is that actual data from the video image is lost and replaced with some approximation.

Fractal compression is based on the concept of self-similarity in fractals. It uses fractal components that are self-similar to the rest of the surrounding area in the picture.

The stored approximation is so inexact that if some of the fractal data is replaced with encrypted text, it is very difficult to see any difference in the picture, which means that important messages can be embedded in video images with little fear of detection. [citation needed]

If we could achieve the same level of compression without loss of data, we wouldn't use lossy compression at all: even an "imperceptible" loss of quality in the image is still a loss.

One of the reasons there is still some interest in new lossless compression techniques is that only very inexact data structures can survive lossy compression. It is often the case that the loss of a single bit renders a whole phrase or line of data inaccurate. This is why we attempt to build more and more stable memory systems. The shift from RDRAM to SDRAM, for instance, was partly because RDRAM needed more interactive maintenance of its data.

If we are so protective of data in memory, then it makes sense that we must also be protective of the compression scheme we use to store and retrieve that data. So the only places lossy compression can be used are places where accuracy at the bit level does not materially affect the quality of the data.

image compression


Most image compression and video compression algorithms have four layers (is there a standard terminology for these things?):

  • A "modeling" or "pre-conditioning"[1] or "filtering"[1] or "de-correlation" layer converts the raw pixels into more abstract information about the pictures – and, at the decoder, a "rendering" part converts that abstract information back into the raw pixels.
  • A "quantizer" layer that throws away some of the details in that abstract information that humans are unlikely to notice – and, at the decoder, a "dequantizer" that restores an approximation of the information.
  • A "entropy coder" layer that compresses and packs each piece of information into the bitstream to be transmitted – and, at the decoder, the "entropy decoder" that accepts the bitstream and unpacks each piece of information.
  • A layer that adds synchronization, interleaving, and error detection to the raw bitstream just before transmitting it.

Since we talk about entropy coding elsewhere in this book, and the Data Coding Theory book discusses synchronization and error detection, this section will focus on the other layers.
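
As a concrete illustration of the "quantizer" layer, here is a minimal sketch of uniform scalar quantization (a toy example of my own; the step size q is an arbitrary assumption, not a value from any standard):

```python
import numpy as np

def quantize(coeffs, q):
    """Uniform scalar quantizer: snap each coefficient to the nearest
    multiple of the step size q, stored as a small integer index."""
    return np.round(coeffs / q).astype(int)

def dequantize(indices, q):
    """Restore an approximation of the original coefficients."""
    return indices * q

coeffs = np.array([103.7, -12.2, 4.9, 0.3, -0.8])  # "abstract information"
indices = quantize(coeffs, q=8.0)    # -> [13 -2  1  0  0]  (small, codes well)
approx = dequantize(indices, q=8.0)  # -> [104. -16.  8.  0.  0.]
print(indices, approx)               # the fine detail is gone for good
```

Larger step sizes discard more detail and compress better; the entropy coder then squeezes the small integer indices.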

Some kinds of "modeling" processes give us information that is useful for things other than data compression, such as "de-noising" and "resolution enhancement".

Some popular "modeling" or "de-correlation" algorithms include:

  • delta encoding (see the sketch after this list)
  • Fourier transform – calculated using the fast Fourier transform (FFT) – in particular, the discrete cosine transform (DCT)
  • wavelet transform – calculated using a fast wavelet transform (FWT) – in particular, some discrete wavelet transform (DWT)
  • motion compensation
  • matching pursuit
  • fractal transform[2][3][4][5]
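
As promised in the first item above, here is a minimal delta-encoding sketch (a toy illustration of my own, not code from any particular codec). Each sample is replaced by its difference from the previous one, which de-correlates smooth data into a run of small values that the entropy-coding layer can pack tightly:

```python
def delta_encode(samples):
    """Replace each sample with its difference from the previous one."""
    prev = 0
    out = []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def delta_decode(deltas):
    """Invert delta encoding by running a cumulative sum."""
    prev = 0
    out = []
    for d in deltas:
        prev += d
        out.append(prev)
    return out

row = [100, 101, 103, 103, 102, 100]
assert delta_decode(delta_encode(row)) == row  # lossless round trip
print(delta_encode(row))                       # [100, 1, 2, 0, -1, -2]
```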


There is a huge amount of information in a raw, uncompressed movie. All movie and image compression algorithms can be categorized as one of:

  • Lossless methods: methods that make no changes whatsoever to the image; the decompressed image is bit-for-bit identical to the original.
  • "Nondegrading methods" or "transparent methods": methods that make some minor changes to the image; the decompressed image is not exactly bit-for-bit identical, but the changes are (hopefully) invisible to the human eye.[6]
  • "Degrading methods" or "low-quality methods": methods that introduce visible changes to the image.

... "idempotent" one-time loss vs. "generational" loss at every iteration ...


To do: move the Data Compression/Multiple transformations#some information theory terminology section to here?


In theory, a "perfectly" compressed file (of any kind) should have no remaining patterns. Many people working on compression algorithms make pictures of the compressed data as if it were itself an image.[7] The human visual system is very sensitive to (some kinds of) patterns in data. If a human can describe a repeating pattern in the compressed data precisely enough, you can use that description to help model the original image more accurately, leading to a smaller compressed image file.

Alas, in situations where there is no pattern at all (true random data), the human eye often sees "clumps" and other "patterns" that are not useful for data compression. (Some artists take randomly-positioned data and carefully nudge the points to make them more evenly spaced to make them look random, but the result is actually less random and can often be compressed at least a little, unlike true random data).

(Figure: the two-dimensional "basis functions" of the JPEG DCT.)

Standard JPEG uses a (lossy) discrete cosine transform (DCT), and standard MP3 uses a (lossy) modified discrete cosine transform (MDCT). The differences between the original input and the decoded output caused by these standard transforms are smaller than the errors caused by quantization, which in turn are generally so small as to be imperceptible.
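
To make the DCT step concrete, here is a minimal sketch of the transform on one 8×8 block using SciPy (assuming SciPy is available; real JPEG also level-shifts the pixels, quantizes with a standard table, and entropy-codes the result, none of which is shown here):

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # one 8x8 pixel block

coeffs = dctn(block, norm='ortho')   # forward 2-D DCT: 64 frequency coefficients
recon = idctn(coeffs, norm='ortho')  # inverse 2-D DCT

# In exact arithmetic the transform is perfectly invertible; in floating
# point the round-trip error is tiny compared with quantization error.
print(np.max(np.abs(block - recon)))  # on the order of 1e-13
```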

integer transforms


Several wavelet transforms use only integer coefficients, such as the ones used in Progressive Graphics File (PGF), lossless JPEG 2000, CineForm, ICER, etc.

A few DCT-like transforms have only integer coefficients, such as the MPEG integer IDCT standard,[8] the 4×4 and 8×8 integer transforms used in H.264, and the 4×4 transform used in JPEG XR.

Most of these transforms were originally developed so they would run quickly on embedded systems – systems where the floating-point transforms used by standard JPEG/MPEG run too slowly.


In 2012, compression researcher Rock Brentwood posted several lossless approximations to the 1D DCT and IDCT, including the following:[9]

The LDCT forward transformation matrix is:

17 17 17 17 17 17 17 17
24 20 12 6 −6 −12 −20 −24
23 7 −7 −23 −23 −7 7 23
20 −6 −24 −12 12 24 6 −20
17 −17 −17 17 17 −17 −17 17
12 −24 6 20 −20 −6 24 −12
7 −23 23 −7 −7 23 −23 7
6 −12 20 −24 24 −20 12 −6

The ILDCT inverse transformation matrix is the transpose of the LDCT matrix:

17 24 23 20 17 12 7 6
17 20 7 −6 −17 −24 −23 −12
17 12 −7 −24 −17 6 23 20
17 6 −23 −12 17 20 −7 −24
17 −6 −23 12 17 −20 −7 24
17 −12 −7 24 −17 −6 23 −20
17 −20 7 6 −17 24 −23 12
17 −24 23 −20 17 −12 7 −6
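
As a quick sanity check of the transpose relationship (a sketch of my own, not from the cited post): every row of the LDCT matrix above has squared norm 2312 and the rows are mutually orthogonal, so M·Mᵀ = 2312·I and the exact inverse is the transpose divided by 2312.

```python
import numpy as np

# The integer LDCT matrix quoted above.
M = np.array([
    [17,  17,  17,  17,  17,  17,  17,  17],
    [24,  20,  12,   6,  -6, -12, -20, -24],
    [23,   7,  -7, -23, -23,  -7,   7,  23],
    [20,  -6, -24, -12,  12,  24,   6, -20],
    [17, -17, -17,  17,  17, -17, -17,  17],
    [12, -24,   6,  20, -20,  -6,  24, -12],
    [ 7, -23,  23,  -7,  -7,  23, -23,   7],
    [ 6, -12,  20, -24,  24, -20,  12,  -6],
])

# Rows are orthogonal with squared norm 2312, so M @ M.T = 2312 * I.
assert np.array_equal(M @ M.T, 2312 * np.eye(8, dtype=int))

x = np.arange(8)           # any 1-D signal of length 8
y = M @ x                  # forward LDCT (pure integer arithmetic)
x_back = (M.T @ y) / 2312  # inverse: transpose, then rescale
assert np.allclose(x, x_back)
```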

The JPEG data compression algorithm and file format are extensively covered in the wikibook JPEG - Idea and Practice.


To do: provide some general notions and any specifics regarding JPEG compression.


JFIF: JPEG File Interchange Format

To do: extend.


JPEG 2000


JPEG 2000 is useful in applications that require very high quality – such as the DICOM medical image format – because the compressor can choose higher quality settings – including a completely lossless mode – than are available in earlier JPEG standards.

JPEG 2000 is an image coding system based on wavelet technology.[10] The JPEG 2000 compression standard uses the biorthogonal CDF 5/3 wavelet (also called the LeGall 5/3 wavelet) for lossless compression and a CDF 9/7 wavelet for lossy compression. However, a variety of other kinds of wavelets have been proposed and used in experimental data compression algorithms.[11]
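
To show why the 5/3 wavelet supports a completely lossless mode, here is a minimal 1-D sketch of the reversible LeGall 5/3 transform in its lifting form (a simplified toy of my own: it assumes an even-length signal and replicates edge samples, where real JPEG 2000 uses symmetric extension and 2-D tiling):

```python
def fwd53(x):
    """One level of the reversible 5/3 lifting transform (1-D).
    Assumes len(x) is even; replicates edge samples at the borders."""
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    # Predict: each detail is the odd sample minus the floor-mean of its
    # two even neighbours.
    d = [odd[i] - ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    # Update: each smooth sample absorbs a quarter of the nearby details.
    s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    return s, d

def inv53(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    n = len(d)
    even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    odd = [d[i] + ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    x = [0] * (2 * n)
    x[0::2], x[1::2] = even, odd
    return x

x = [5, 7, 9, 8, 4, 3, 2, 6]
s, d = fwd53(x)
assert inv53(s, d) == x  # perfect reconstruction: the integer transform is lossless
```

The 9/7 wavelet used for lossy mode has irrational coefficients, so it cannot be inverted exactly in integer arithmetic.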

Open-source implementations of JPEG 2000 are available. [12]

color transformation


Most image compression algorithms use some kind of color space transformation to de-correlate the R, G, B values of each pixel. We discuss color transforms in Data Compression/Multiple transformations#color transforms.
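
As one concrete example (my own minimal sketch of the reversible YCoCg-R transform; that chapter covers the alternatives): the transform de-correlates R, G, B into one luma and two chroma components using only integer additions and shifts, and it is exactly invertible.

```python
def rgb_to_ycocg_r(r, g, b):
    """Forward lossless YCoCg-R transform (integer lifting)."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    """Exact inverse: undo the lifting steps in reverse order."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

assert ycocg_r_to_rgb(*rgb_to_ycocg_r(200, 120, 40)) == (200, 120, 40)
```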

audio compression


As we have seen, compression does not depend only on the algorithm that is applied, but also on the type of data being compressed and on the quality required of the copy generated at decompression time. As covered in the previous section, this is extremely important for images, and it matters for sound as well.

One important fact that distinguishes sound from images is that the general public is generally unable to make a qualified distinction regarding audio quality, both because the sense of hearing is subjective and because most listeners lack education in the intricacies of complex audio production.


To do: cover dynamic range (the 'Loudness Wars') and distinguish sound-wave compression from data compression.


Today, the digital transfer of audio files is increasingly important and seems finally to have gained the support of the music media conglomerates, so quality as perceived by the consumer matters a great deal.

MP3 names not only a type of compression but also a data format. It is used for lossy compression of audio and as a facilitator of digital audio distribution and categorization.

Most people are unable to differentiate between an original (natural audio) recording and a 256 kbps MP3 lossy reproduction.

An audio file that has undergone heavy sound-wave (dynamic-range) compression will generally sound worse to the human ear after MP3 encoding than the original does, especially if encoded at 128 kbps (or less).

AAC is used mainly by Apple, which also offers its own Apple Lossless Audio Codec (ALAC); AAC itself, however, is compatible with a huge range of non-Apple devices; see Wikipedia's page on AAC.


To do: add further information.


FLAC uses lossless compression.


To do: complete.


lossy and residual give lossless


Sometimes people download a (highly compressed) image or video or music file and use it to decide whether they really want to spend a (much longer) time downloading the high-resolution lossless version of the file.

Rather than delete the original (lossy) version of the file and start downloading the lossless version from scratch, several people are experimenting with methods of using the partial information in the lossy version of the file to reduce the time required to download the "residue", also called the "residual" – the "rest" of the file (i.e., the details that the lossy compressor discarded).

[13] [14] [15] [16] [17] [18] [19] [20] [21]

As long as the size of the (compressed) residue is significantly less than the size of the file stored in a stand-alone lossless format, the user has saved time – even though the total lossy + residue size is usually larger than a stand-alone lossless format.
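
Here is a minimal sketch of the idea (a toy model of my own: uniform quantization stands in for a real lossy codec, and zlib stands in for a real residual coder). The residual is the exact difference between the original and the lossy reconstruction, so lossy + residual reproduces the original bit for bit:

```python
import zlib
import numpy as np

rng = np.random.default_rng(2)
original = rng.integers(0, 256, 10_000).astype(np.int16)

# Stand-in lossy codec: quantize to multiples of 16.
lossy = (original // 16) * 16

# Residual: exactly the details the lossy stage discarded.
residual = (original - lossy).astype(np.int16)
residual_packed = zlib.compress(residual.tobytes())

# Anyone holding the lossy version plus the residual recovers the original.
restored = lossy + np.frombuffer(zlib.decompress(residual_packed),
                                 dtype=np.int16)
assert np.array_equal(restored, original)
print(len(residual_packed), "bytes of residual")
```

In this toy the residual is nearly incompressible noise; with a real codec the residual has structure that a dedicated residual coder can exploit.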

GIF and PNG, image file formats designed to be downloaded over relatively slow modems, have some of this characteristic – they are designed to support "partial downloads", showing a rough preview of the image before the download finishes.

Lossless formats that support this lossy-plus-residual approach include JPEG XR, DTS-HD Master Audio, MPEG-4 SLS (lossless audio compression), WavPack Hybrid, OptimFROG DualStream, ...

References

  1. Greg Roelofs. "PNG: The Definitive Guide: Chapter 9. Compression and Filtering".
  2. Wikipedia: fractal transform
  3. "Fractal based Image compression algorithm (and source code)" [1]
  4. Fractals/Iterated function systems
  5. "FAQ:What is the state of fractal compression?"
  6. Sergei Vasilyev. "Contour-Based Image Compression for Fast Real-Time Coding". 1999. quote: "lossy REIC belongs to nondegrading methods ... as once applied, its repeated application to the same image does not lead to any further change to the image."
  7. "Technical Overview of Cartesian Perceptual Compression" (c) 1998-1999 Cartesian Products, Inc.
  8. http://www.reznik.org/software.html#IDCT
  9. Rock Brentwood. "Lossless & Non-Degrading Lossing DCT-Based Coding"
  10. Joint Photographic Experts Group : JPEG 2000 standard
  11. The Wavelet Discussion Forum
  12. The JasPer Project by Michael D. Adams, an implementation of JPEG-2000 Part-1.
  13. "Flexible 'Scalable to Lossless'?"
  14. Detlev Marpe, Gabi Blättermann, Jens Ricke, and Peter Maaß "A two-layered wavelet-based algorithm for efficient lossless and lossy image compression" 2000.
  15. GCK Abhayaratne and DM Monro. "Embedded to lossless coding of motion compensated prediction residuals in lossless video coding"
  16. G.C.K. Abhayaratne and D.M. Monro. "Embedded to Lossless Image Coding (ELIC)". 2000.
  17. CH Ritz and J Parsons. "Lossless Wideband Speech Coding". 2004. "lossless coding of wideband speech by adding a lossless enhancement layer to the lossly baselayer produced by a standardised wideband speech coder"
  18. Nasir D. Memon, Khalid Sayood, Spyros S. Magliveras. "Simple method for enhancing the performance of lossy plus lossless image compression schemes" 1993.
  19. Qi Zhang Yunyang Dai Kuo, C.-C.J. Ming Hsieh "Lossless video compression with residual image prediction and coding (RIPC)". 2009.
  20. Gianfranco Basti and M. Riccardi and Antonio L. Perrone. "Lossy plus lossless residual encoding with dynamic preprocessing for Hubble Space Telescope fits images" 1999.
  21. Majid Rabbani and Paul W. Jones. "Digital image compression techniques". 1991. Chapter 8: Lossy plus lossless residual encoding.