Quote:
|
Originally Posted by ahecht
I'm not sure what this guy is trying to pull (such as whether he is an existing member who set up a dummy account for this or just a random troll), but I can assure people that something is up.
|
I agree that it is a little unbelievable. Alaphabob, I'd trust almost anyone here with something that isn't protected yet. Just ask anyone with 5+ "rep(utation) points" to look over your algorithm. (Those are the green dots on the left of the darker-grey bar above their post.) I'm not saying that less rep'd people aren't trustworthy, but it's something to reassure you.
Quote:
|
Originally Posted by ahecht
Besides, with most compression methods, compressing an already compressed file results in a slightly larger file size.
|
That's because almost all compression schemes work by creating a library somewhere in the file and putting multi-byte strings into it. The rest of the file is then encoded by putting a one-byte key in place of each original string. If a string occurs more than once, you save space. For instance, if a text file contains the string "happy" three times, you put "happy" in your library once and three one-byte markers in the text, for a total of around 8 bytes (there are probably also bits separating different items in the library, etc., which is why I say "around"). The original three happies took up 15 bytes.
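If you want to play with the idea, here's a toy sketch in Python (my own illustration only; real compressors like DEFLATE lay out their tables very differently) of swapping a repeated string for a one-byte marker plus a single library entry:

```python
# Toy illustration of the "library" idea: store the repeated string
# once, then replace each occurrence with a one-byte marker.
# Not a real compressor -- just the space-saving principle.

def toy_compress(data: bytes, word: bytes) -> bytes:
    marker = b"\x01"                    # one-byte key standing in for `word`
    library = word + b"\x00"            # the library entry plus a separator
    body = data.replace(word, marker)   # swap every occurrence for the marker
    return library + body

text = b"happy new year, happy birthday, happy day"
packed = toy_compress(text, b"happy")
print(len(text), len(packed))           # 41 vs. 35: the repeats pay for the library entry
```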
When you recompress the file, you end up compressing a file with no, or very few, redundancies, which are what make the library method work so well.
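As a quick sanity check (assuming you have Python handy), you can see ahecht's point with the built-in zlib module: compressing redundant data once shrinks it a lot, but compressing the already-compressed output again usually makes it slightly bigger:

```python
import zlib

data = b"happy " * 1000                  # highly redundant input (6000 bytes)
once = zlib.compress(data)               # shrinks dramatically
twice = zlib.compress(once)              # input is already near-random, so this grows a bit
print(len(data), len(once), len(twice))  # expect: 6000, something tiny, then slightly more
```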
EDIT:
Why the heck did I choose happy? Why not something cool, like FIRST?
