Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   New compression method (http://www.chiefdelphi.com/forums/showthread.php?t=30187)

Aalfabob 12-09-2004 22:34

Re: New compression method
 
Alright, here is a little paper I wrote on how it works; hopefully it explains it well:

The first step in this process is just to load the file. This can be done all at once up front, which saves a ton of time, or in 128-byte pieces as they are needed.
After the data is loaded, the compressor works with one chunk of up to 128 bytes at a time. Only the last chunk of the file may be shorter, since the file does not always divide evenly into 128-byte chunks. More on why it breaks the file into chunks later on.

The range method listed in the outline works like this:
1) It takes in two bytes; for this example they will be named byte1 and byte2.
2) It then does a simple compare: if byte1 > byte2 then bit1 = 1, else bit1 = 0.
3) After the first compare it finds the current range in which byte2 can lie within the byte values 0 – 255.
4) It then takes the middle of the current range and compares it to byte2 again, logging the result as > or <= (1 or 0).
5) It repeats this until only one integer can fit in the range, which must be byte2.
Now, for this to actually get byte2 back from only the binary number (the > or <= bits), byte1 needs to be with it in the file. So when it is output, it is saved like this: byte1, then byte2’s range in bits.
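In rough code, the steps above would look something like this. This is only my reading of the description; the exact bit convention (1 for >, 0 for <=) and the midpoint rounding the actual program uses are assumptions:

```python
def range_encode(byte1, byte2):
    """Encode byte2 as a list of comparison bits relative to byte1.

    Bit 1: whether byte1 > byte2, which narrows byte2's range to
    [0, byte1 - 1] or [byte1, 255]. After that, binary-search the
    range, logging one bit per midpoint compare (1 for >, 0 for <=),
    until only one value can fit -- which must be byte2.
    """
    if byte1 > byte2:
        bits, lo, hi = [1], 0, byte1 - 1
    else:
        bits, lo, hi = [0], byte1, 255
    while lo < hi:  # repeat until only 1 integer fits in the range
        mid = (lo + hi) // 2
        if byte2 > mid:
            bits.append(1)
            lo = mid + 1
        else:
            bits.append(0)
            hi = mid
    return bits
```

With byte1 = 1 and byte2 = 0 the first compare already pins the range to a single value, so one bit suffices; with byte1 = 0 the range stays 256 values wide and nine bits come out. That is where the spread from fewer than 8 bits up to 9 bits comes from.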

------------------------------------------------------------------------------------------------------------

From all of the combinations that can be made in the range compression method, 15.5% of them come out at 7 or fewer bits, 50.0% at 8 bits, and 37.5% at 9 bits.

To make use of this, bit operations are applied to an entire 128-byte section to push its bytes into the 15.5% range. Sometimes only some of the bytes fit, which can make that section (128 bytes) come out slightly larger, but not by much.

The bit operations I use are as follows:
1) A 2-bit rotation.
2) A 4-bit rotation.
3) An 8-bit rotation.
4) Inverting all bits in a byte.
5) Inverting the odd bits in a byte.
6) Inverting the even bits in a byte.
7) Inverting every other two bits of a byte.
8) Undefined for now

So say you put byte1 and byte2 into the range function to get byte2’s binary range value. You do this for all 128 bytes of the section and add up the average loss / gain in bits. As you know, the plain encoding could gain bits, so I use these operations to help the bytes lose bits instead, since they become totally new combinations afterwards. The compressor runs through all 256 combinations of the bit flips to find the best one. (The chosen operations are stored in a byte before that section; for example, if bit operations 1 and 5 were used, the header would be 10001000, and decompression runs them in that order.)

Now, instead of gaining bits, the section can actually lose some, giving a small amount of compression.
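For what it's worth, here is one possible reading of the operation table and the 256-way search, in code. The post does not spell out the exact transforms or the bit ordering of the header byte, so every definition below is a guess (the 10001000 example suggests operation 1 maps to the leftmost header bit; the mask here uses the opposite ordering purely for brevity):

```python
def rotl8(b, n):
    """Rotate an 8-bit value left by n bits."""
    n &= 7
    return ((b << n) | (b >> (8 - n))) & 0xFF

# Placeholder versions of the eight operations, one per mask bit.
OPS = [
    lambda b: rotl8(b, 2),  # 1) 2-bit rotation
    lambda b: rotl8(b, 4),  # 2) 4-bit rotation
    lambda b: b,            # 3) 8-bit rotation (identity on one byte)
    lambda b: b ^ 0xFF,     # 4) invert all bits
    lambda b: b ^ 0xAA,     # 5) invert the odd bits
    lambda b: b ^ 0x55,     # 6) invert the even bits
    lambda b: b ^ 0x33,     # 7) invert every other two bits
    lambda b: b,            # 8) undefined for now
]

def apply_ops(byte, mask):
    """Apply, in order, every operation whose flag is set in the mask."""
    for i in range(8):
        if mask & (1 << i):
            byte = OPS[i](byte)
    return byte

def best_mask(section, cost):
    """Run through all 256 operation combinations and keep the one whose
    transformed bytes are cheapest under the given cost function
    (e.g. encoded bit length per byte)."""
    return min(range(256),
               key=lambda m: sum(cost(apply_ops(b, m)) for b in section))
```

The search is brute force, but with only 256 masks and 128 bytes per section it stays cheap.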

------------------------------------------------------------------------------------------------------------

So this is how the file setup would work:

Main Run Counter Header = HH;
128 Byte Section Operation Log = L;
Uncompressed byte (byte1) = U;
Compressed byte (byte2) = C;

HH , L , U , C , U , C , U , C, … , L , U , C , …
Each “…” stands for the rest of that 128-byte section.
------------------------------------------------------------------------------------------------------------

That’s about it. You should now be able to see that this process is reversible because it uses predefined values. This includes the byte range method: the decompressor reads in one bit at a time and recalculates the range the same way it was done the first time, just in reverse.
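To illustrate the reversal, here is a decoder for the range method as just described: it reads the stored bits one at a time and recalculates the range exactly as the encoder did. The bit convention (1 for >, 0 for <=) is again my assumption:

```python
def range_decode(byte1, bits):
    """Recover byte2 from byte1 plus its stored comparison bits.

    The first bit says whether byte2 lies below byte1 or not; each
    later bit replays one midpoint compare, narrowing the range the
    same way the encoder did until a single value is left.
    """
    it = iter(bits)
    if next(it):                 # 1 meant byte1 > byte2
        lo, hi = 0, byte1 - 1
    else:
        lo, hi = byte1, 255
    while lo < hi:
        mid = (lo + hi) // 2
        if next(it):             # 1 meant byte2 > mid
            lo = mid + 1
        else:
            hi = mid
    return lo
```

Because the decoder derives the same midpoints from byte1 alone, no extra bookkeeping has to be stored beside the bits.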

Hopefully this helps.

ahecht 13-09-2004 00:55

Re: New compression method
 
Alfabob, I'm sorry I assumed the worst about you, and thank you for admitting your mistake instead of making endless excuses or disappearing. It takes a strong person to admit that they were wrong.

Rickertsen2 13-09-2004 07:20

Re: New compression method
 
Quote:

Originally Posted by ahecht
Alfabob, I'm sorry I assumed the worst about you, and thank you for admitting your mistake instead of making endless excuses or disappearing. It takes a strong person to admit that they were wrong.

I agree.

FizMan 13-09-2004 09:12

Re: New compression method
 
$@#$@#$@#$@#... now I won't be getting any stake in the money... ;_;

Tough luck, Aalfabob... but look on the bright side, it was a good learning experience for you!

Fulcrum2000 13-09-2004 13:27

Re: New compression method
 
Quote:

Originally Posted by ahecht
Alfabob, I'm sorry I assumed the worst about you, and thank you for admitting your mistake instead of making endless excuses or disappearing. It takes a strong person to admit that they were wrong.

Have to agree also :)
Sorry for my strong reaction, but I have seen it happen before. In many cases people have tried to make money out of their great 'super-compression' discoveries, but you obviously believed that what you were doing was possible.

suneel112 13-09-2004 14:25

Re: New compression method
 
I also agree, and (being an aspiring inventor) I know the feeling of apathy and displeasure at my designs very well. I just hope you continue, fix any mistakes, and make a great product (don't sell it to MS, though).

Aalfabob 13-09-2004 14:46

Re: New compression method
 
Np about the assuming stuff. But one good thing that came from this is that I now know C++ pretty well, which should help a ton when it's time to start programming those robots.

ThomasTuttle 25-09-2004 13:31

Re: New compression method
 
Aalfabob, your compression scheme is not so bad. I would guess that it works very well on uncompressed photo files with large gradient areas, audio files, and other things where byte values do not jump around a lot. If I understand right, you are basically encoding the difference between two bytes? Since you subdivide the range of values for byte2, very low or very high bytes probably yield a better result than values near 127/128, since there's a smaller range to encode. I have one suggestion: rather than doing: U C U C U C ... and encoding only every other byte, output just the first byte and then encode each next byte in relation to the previous one. So the decompressor knows the first byte and can find each next byte from the compressed data. Also think about alternate ways of encoding the differences between bytes...
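The chaining idea (store only the first byte literally, then encode every later byte against its predecessor) could be sketched like this, assuming the binary-search reading of the range method; the function names are mine:

```python
def encode_byte(prev, cur):
    """Range-encode cur against prev: one bit for the initial
    compare, then one bit per midpoint compare."""
    if prev > cur:
        bits, lo, hi = [1], 0, prev - 1
    else:
        bits, lo, hi = [0], prev, 255
    while lo < hi:
        mid = (lo + hi) // 2
        if cur > mid:
            bits.append(1)
            lo = mid + 1
        else:
            bits.append(0)
            hi = mid
    return bits

def chain_encode(data):
    """Store data[0] as the only literal byte; every later byte is
    encoded relative to the byte before it, so the U C U C layout's
    repeated uncompressed bytes drop out of the stream."""
    bits = []
    for prev, cur in zip(data, data[1:]):
        bits.extend(encode_byte(prev, cur))
    return data[0], bits
```

The decompressor would start from the literal first byte and recover each next byte from the bits, carrying the decoded value forward as the new reference.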

Now that we've heard the guts of two different versions of the program explained, perhaps it is time to share a copy of it for people to try? I would be glad to help...

Just my $0.02

ThomasTuttle 03-10-2004 14:32

Re: New compression method
 
Why has this thread suddenly become so quiet?

ErichKeane 03-10-2004 16:21

Re: New compression method
 
Because the topic ended...?

