New Super-Compression?

Charles Steinkuehler charles at steinkuehler.net
Wed Nov 27 21:31:04 CST 2002


Scott Long wrote:
> A friend of mine is a consultant for a local company (who would remain 
> nameless, but I've gotta give you guys a link for this) and they've partnered 
> with a company that supposedly has a new compression algorithm that can 
> dramatically reduce the size of files (for example, from 8MB to less than 
> 100k for an image).  You can find out info about it at:
> 
> http://www.entrekc.com/products/compression.html
> 
> While he was telling me this, I tried my damndest not to laugh.  He says that 
> he has actually seen this stuff in action (though only still images, of 
> course...no video yet...hmmm...and what's easier to fake?).  
> 
> So what do you guys think?  Anyone done any research into compression stuff 
> and could quantify my gut feeling that this is B.S.?  Or could I just happen 
> to be wrong on this?  (I wouldn't mind it if I was wrong, of course...but I 
> doubt it.)

The approx. 80:1 compression ratio for a still is not unbelievable.  I have 
a much harder time believing the video compression ratios listed on the 
link (500:1 to 5000:1), mainly because they claim their source material 
was captured from VHS with a Matrox board (ie: *LOTS* of noise, which is 
the death of all compression routines).
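
To make the noise point concrete, here's a quick sketch (generic Python 
and zlib, nothing to do with their algorithm): compress a smooth ramp, 
then the same ramp with a little random noise added, and watch the 
ratio collapse.

# Illustrative only: a generic compressor (zlib), not a video codec.
# Noise destroys the pixel-to-pixel predictability compression relies on.
import random
import zlib

width, height = 640, 480

# A smooth horizontal ramp: neighboring pixels are nearly identical.
clean = bytes(x * 255 // width for _ in range(height) for x in range(width))

# The same ramp with +/-8 levels of random "VHS-style" noise added.
random.seed(0)
noisy = bytes(min(255, max(0, b + random.randint(-8, 8))) for b in clean)

for name, data in (("clean", clean), ("noisy", noisy)):
    packed = zlib.compress(data, 9)
    ratio = float(len(data)) / len(packed)
    print("%s: %d -> %d bytes (%.1f:1)" % (name, len(data), len(packed), ratio))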

Of course, all real-world tests depend heavily on the source material 
(unknown in this instance), and on the quality of the decompressed image 
at the far end (which the page simply refers to as "Lossy").

I have done a *LOT* of work with video compression, including 
implementing custom compression routines in FPGA chips for the Video 
Toaster Flyer product from NewTek, and the QuBit video recorder from QuVIS:
http://www.quvis.com/products/

At QuVIS, we could pretty easily compress clean video sources with 10:1 
to 80:1 compression ratios, and we were primarily focused on maintaining 
extremely high quality for use in a production environment (with 
guaranteed worst-case SNR, and no time-based compression...each frame 
was compressed independently).

Another thing to remember is that higher resolution images (either more 
pixels, more bit-depth per pixel, or both) are easier to compress than 
lower resolution images.  If you think about it, this makes sense...as 
long as the image is not total noise, the more pixels it has, the more 
similar neighboring pixels are to each other, so the better the image 
compresses.
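
A crude way to see this with generic tools (again zlib, purely 
illustrative): render the same smooth synthetic scene at two 
resolutions and compare how well each compresses.

# Same scene, two resolutions: the bigger render has more redundancy
# per pixel, so a generic compressor gets a better ratio out of it.
import math
import zlib

def render(size):
    """Render a smooth synthetic scene as size x size 8-bit pixels."""
    pixels = bytearray()
    for y in range(size):
        for x in range(size):
            v = 128 + 60 * math.sin(6.0 * x / size) * math.cos(6.0 * y / size)
            pixels.append(int(v))
    return bytes(pixels)

for size in (64, 512):
    img = render(size)
    packed = zlib.compress(img, 9)
    ratio = float(len(img)) / len(packed)
    print("%dx%d: %d -> %d bytes (%.1f:1)"
          % (size, size, len(img), len(packed), ratio))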

We were using iterative discrete wavelet transforms, with coefficients 
optimized for integer arithmetic (lossless transform from the image 
domain to multiple frequency domains, which gets the image data into a 
format that is easy to actually compress), followed by an adaptive 
arithmetic entropy coder, which could be run in lossless, fixed quality, 
or fixed data-rate modes.
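
For the curious, here's a sketch of the kind of building block involved: 
one level of an integer "lifting" wavelet step.  This is the standard 
reversible 5/3 filter (the one JPEG 2000 uses for lossless coding), not 
the actual QuVIS coefficients, and the edge handling is simplified, but 
it shows the idea: an exactly invertible integer transform that turns a 
row of pixels into a smooth low-pass band plus mostly-near-zero detail 
coefficients, which is what the entropy coder then chews on.

def fwd_53(signal):
    """One forward level of the 5/3 lifting transform on a 1-D list."""
    s = list(signal)
    n = len(s)
    # Predict: odd samples become detail (high-pass) coefficients.
    for i in range(1, n - 1, 2):
        s[i] -= (s[i - 1] + s[i + 1]) // 2
    if n % 2 == 0:
        s[n - 1] -= s[n - 2]                 # mirror at the edge
    # Update: even samples become the smoothed (low-pass) band.
    s[0] += (2 * s[1] + 2) // 4
    for i in range(2, n, 2):
        right = s[i + 1] if i + 1 < n else s[i - 1]
        s[i] += (s[i - 1] + right + 2) // 4
    return s[0::2], s[1::2]                  # (low-pass, high-pass)

def inv_53(low, high):
    """Exact integer inverse of fwd_53()."""
    n = len(low) + len(high)
    s = [0] * n
    s[0::2], s[1::2] = low, high
    s[0] -= (2 * s[1] + 2) // 4
    for i in range(2, n, 2):
        right = s[i + 1] if i + 1 < n else s[i - 1]
        s[i] -= (s[i - 1] + right + 2) // 4
    if n % 2 == 0:
        s[n - 1] += s[n - 2]
    for i in range(1, n - 1, 2):
        s[i] += (s[i - 1] + s[i + 1]) // 2
    return s

row = [100, 102, 104, 110, 120, 118, 116, 115]
low, high = fwd_53(row)
assert inv_53(low, high) == row              # perfectly reversible
print("low:", low, "high:", high)

Run it on a full image by transforming every row, then every column, 
then recursing on the low-pass band.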

In simpler terms, the compression system was based on image scaling, 
similar to the scan-converters that upsample video resolution to 
high-res for the high-end home-theater projectors.  When compressing, 
we'd shrink the image, use the small version to create an image the same 
size as the original, and store the differences between the extrapolated 
image and the real one.  This process was applied iteratively on 
compression, creating smaller and smaller images (and the deltas between 
the synthetically up-sized version and the real image), and when 
finished, we'd store kind of a "thumbnail" version of the original image, 
and all the deltas required to build the original.
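
A stripped-down sketch of that "shrink, re-enlarge, store the 
differences" structure (done on a 1-D signal for brevity, with naive 
averaging and sample-repetition standing in for the real wavelet 
filters):

def shrink(sig):
    """Halve the signal by averaging neighboring pairs."""
    return [(sig[i] + sig[i + 1]) // 2 for i in range(0, len(sig) - 1, 2)]

def enlarge(small, size):
    """Blow the small signal back up by simple sample repetition."""
    return [small[min(i // 2, len(small) - 1)] for i in range(size)]

def decompose(sig, levels):
    """Return (thumbnail, per-level deltas).  Assumes power-of-two length."""
    deltas = []
    for _ in range(levels):
        small = shrink(sig)
        predicted = enlarge(small, len(sig))
        deltas.append([a - b for a, b in zip(sig, predicted)])
        sig = small
    return sig, deltas                   # tiny thumbnail + all the deltas

def rebuild(thumbnail, deltas):
    """Exactly reverse decompose()."""
    sig = thumbnail
    for delta in reversed(deltas):
        predicted = enlarge(sig, len(delta))
        sig = [p + d for p, d in zip(predicted, delta)]
    return sig

data = [10, 12, 13, 15, 40, 42, 41, 43, 90, 91, 93, 92, 50, 48, 47, 46]
thumb, deltas = decompose(data, 3)
assert rebuild(thumb, deltas) == data    # lossless reconstruction
print("thumbnail:", thumb)
print("deltas:", deltas)

Unlike a proper wavelet, this naive pyramid stores more coefficients 
than it has samples, but the deltas are mostly small numbers that 
entropy-code well; the lossy modes come from quantizing or discarding 
the finer delta levels.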

As mentioned, with quality source material, we could see 10:1 to 80:1 
compression ratios, and we were *NOT* optimizing for the smallest 
"watchable" quality.  We also were not compressing across the time 
dimension (ie: like MJPEG, rather than MPEG).  Apply the exact same 
compression method to the time dimension and the per-frame ratios roughly 
square, giving a theoretical range of 100:1 to 6400:1; apply a 0.5 fudge 
factor for "real-world" effects and you get roughly 50:1 to 3200:1.
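
Spelled out, that back-of-the-envelope arithmetic is just:

# Applying the same per-frame ratio again along the time axis roughly
# squares it; the 0.5 fudge factor then halves the theoretical number.
for spatial in (10, 80):
    theoretical = spatial * spatial      # 10:1 -> 100:1, 80:1 -> 6400:1
    expected = theoretical * 0.5         # "real-world" fudge factor
    print("%d:1 per frame -> %d:1 theoretical, ~%d:1 expected"
          % (spatial, theoretical, expected))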

So...the compression ratios they're talking about are quite possible, 
but the details of their video test sound fishy, mainly because of the 
source material, and the fuzziness of "Lossy".  The numbers under 
Transmission and Storage, however (13:1 Lossless, 167:1 visually 
lossless, and 6000:1 lossy), sound more believable, although they probably 
represent numbers derived from a relatively easy-to-compress image (hey, 
it does look like a piece of marketing fluff).

I wouldn't start investing, but I don't see anything that smacks of the 
impossible.  Just remember to lift up the curtain and check:

1) How much processing they are doing (more CPU cycles or more hardware 
gates = better compression, but at a cost; see the sketch after this list)

2) What material they are compressing.  Higher quality and higher 
resolution source images will compress better, and different compression 
algorithms will have trouble with different classes of images.

3) What artifacting shows up in the output
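
On point 1, the cost side of that trade is easy to demonstrate with any 
off-the-shelf compressor; here's a rough sketch using zlib's effort 
levels (exact numbers will vary with the machine and the data):

# More effort (higher zlib level) buys a somewhat better ratio, but the
# time spent climbs much faster.  Illustrative only; not their codec.
import time
import zlib

# A moderately compressible test payload: repetitive structured text.
payload = b"".join(
    b'{"sensor": %d, "reading": %d, "ok": true}\n' % (i % 50, (i * 37) % 991)
    for i in range(20000)
)

for level in (1, 6, 9):
    start = time.perf_counter()
    packed = zlib.compress(payload, level)
    elapsed = (time.perf_counter() - start) * 1000.0
    ratio = float(len(payload)) / len(packed)
    print("level %d: %d -> %d bytes (%.1f:1) in %.1f ms"
          % (level, len(payload), len(packed), ratio, elapsed))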

-- 
Charles Steinkuehler
charles at steinkuehler.net



