Quote:
Originally Posted by ErichKeane
In a real file, there are enough failsafes, that it may take as much as 30% to make a file unrecoverable. The same cannot be said about a compressed file.
Assuming the compressed files are under 1 kB, it's no big deal to add "failsafes" such as error-correcting codes or even brute-force redundancy after compression. That will of course increase the resulting file size, but if an original megabyte of raw data still ends up as only a few kilobytes of damage-tolerant compressed data, it's a major net win.
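For what it's worth, here is roughly the kind of thing I mean. This is a minimal Python sketch, not anyone's actual scheme; the triple-copy count and the zlib/CRC-32 choices are placeholder assumptions. Compress first, then store checksummed copies so a damaged copy can simply be skipped:

[code]
# Brute-force redundancy *after* compression: compress the payload, then
# store several checksummed copies so a damaged copy can be skipped.
# COPIES=3 and zlib/CRC-32 are illustrative assumptions only.
import struct
import zlib

COPIES = 3  # assumed redundancy factor


def pack_with_redundancy(raw: bytes) -> bytes:
    compressed = zlib.compress(raw, 9)
    # Each record: 4-byte CRC-32 + 4-byte length + compressed payload.
    record = struct.pack(">II", zlib.crc32(compressed), len(compressed)) + compressed
    return record * COPIES  # brute force: just repeat the whole record


def unpack_with_redundancy(blob: bytes) -> bytes:
    record_len = len(blob) // COPIES
    for i in range(COPIES):
        record = blob[i * record_len:(i + 1) * record_len]
        crc, length = struct.unpack(">II", record[:8])
        payload = record[8:8 + length]
        if zlib.crc32(payload) == crc:  # first undamaged copy wins
            return zlib.decompress(payload)
    raise ValueError("all copies damaged")


if __name__ == "__main__":
    data = b"a megabyte of raw data, give or take " * 1000
    blob = bytearray(pack_with_redundancy(data))
    blob[10] ^= 0xFF  # flip a byte inside the first copy's payload
    assert unpack_with_redundancy(bytes(blob)) == data
[/code]

A repetition code is the crudest possible choice; a real design would reach for something like Reed-Solomon to get far more protection per byte of overhead. But when the payload is sub-kilobyte to begin with, even tripling it is cheap.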
Given a sufficiently restricted set of possible input files and a sufficiently large shared database, I can achieve miraculous compression too. For example, I can "encode" any static data currently on the World Wide Web into a short string of characters: just reference it by URL. But arbitrary multimegabyte files compressed to 500-odd bytes? To say I am skeptical would be an understatement.
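Just to put numbers on why, here is a quick back-of-the-envelope pigeonhole count (Python, with the sizes assumed as 1 MB inputs and 500-byte outputs):

[code]
import math

# Pigeonhole counting: a lossless code must map distinct inputs to distinct
# outputs, and there simply aren't enough 500-byte strings to go around.
# Sizes are illustrative assumptions (1 MB input, 500-byte output).
bits_in = 8 * 1_000_000   # distinct 1 MB files: 2**bits_in
bits_out = 8 * 500        # distinct 500-byte files: 2**bits_out

print(f"possible inputs : 2**{bits_in}")
print(f"possible outputs: 2**{bits_out}")
print(f"inputs forced to share an output: about 10**{int((bits_in - bits_out) * math.log10(2))}")
[/code]

So unless the scheme quietly restricts its inputs (or hides the real data somewhere else, URL-style), the arithmetic just doesn't allow it.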