#1
New compression method
I came up with a new way to compress files a couple of days ago; I've written the program and tested it. The only problem is that most people will not believe that what I have done is possible, and I can't actually send them how it works because I haven't patented it yet. But here's my question: if you are able to compress the same file over and over again, would it be possible to reach very small sizes? I've tried a 1 MB file so far and it reached 515 bytes, and it is fully reversible.
#2
Re: New compression method
It's possible that recompressing a file can make it smaller, depending on the compression scheme used, but I think most modern compression schemes compress as much as they can on their first pass (sometimes because their first pass really includes several passes at compressing the data).
Anyway, compressing a file from 1 MB to 515 bytes doesn't really say anything about your compression scheme. If you give me a file of any size, I can very simply write a compression scheme to compress it to 0 bytes. If you can take arbitrary files and consistently compress them to a small size, then you have a compression scheme of merit. Last edited by Max Lobovsky : 07-09-2004 at 21:18.
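As a minimal sketch of that point (Python here, and the "one known file" is just a made-up placeholder), such a degenerate but perfectly lossless compressor could look like this:

# A "compressor" hard-wired to one specific, pre-agreed file: that file
# becomes 0 bytes, and every other input grows by a 1-byte flag so the
# scheme stays reversible.
KNOWN = b"whatever the one pre-agreed file happens to contain"  # placeholder

def compress(data: bytes) -> bytes:
    if data == KNOWN:
        return b""              # the agreed file compresses to nothing
    return b"\x01" + data       # everything else gets slightly bigger

def decompress(blob: bytes) -> bytes:
    if blob == b"":
        return KNOWN            # the empty output maps back to the known file
    return blob[1:]             # strip the flag byte

assert decompress(compress(KNOWN)) == KNOWN and len(compress(KNOWN)) == 0

It is lossless, and it "compresses" that one file to 0 bytes, but it tells you nothing about how it handles any other input.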
#3
Re: New compression method
Quote:
Besides, with most compression methods, compressing an already compressed file results in a slightly larger file size.
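A quick way to see that for yourself (a sketch using Python's standard zlib module; any off-the-shelf compressor behaves much the same):

import os, zlib

data = os.urandom(1024) * 64        # 64 KB built from a repeated random block
once = zlib.compress(data)          # first pass removes the repetition
twice = zlib.compress(once)         # second pass has no redundancy left to find

print(len(data), len(once), len(twice))   # the second pass is typically a bit larger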
#4
Re: New compression method
Quote:
Quote:
When you recompress the file, it ends up compressing a file with no, or very few, redundancies, which are what make the library (dictionary) method work so well.
EDIT: Why the heck did I choose happy? Why not something cool, like FIRST?
Last edited by Ryan M. : 07-09-2004 at 16:33.
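To see how much a library (dictionary) approach depends on redundancy, here is a small sketch with Python's zlib (an LZ/dictionary-style compressor); the exact byte counts will vary:

import os, zlib

redundant = b"ABCD" * 250_000        # 1 MB of a repeating pattern
random_ish = os.urandom(1_000_000)   # 1 MB with essentially no redundancy

print(len(zlib.compress(redundant)))    # collapses to a few KB
print(len(zlib.compress(random_ish)))   # stays at roughly 1 MB, usually a bit over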
#5
Re: New compression method
Alright, first off, I'm not trying to pull anything. Why would I create an account and waste my time messing around with people? A friend gave me this site so I could ask a few people whether they think it would be possible. Second, my compression theorem isn't like the others; it doesn't rely on how many times certain characters show up in the file as a whole. This makes it able to recompress the same file over and over again, almost always gaining some compression. It also means it can work on any type of file: .zip, .exe, .jpg, etc. But it does reach a limit: with the current program I have made, it can compress any file type and size down to 508 bytes, and it usually fluctuates between 508 and 515 bytes. A file being larger than another doesn't mean it can't hit this limit; it just means more attempts must be made to reach it. I have some data charts if anyone wishes to see them.
#6
Re: New compression method
Is it lossy or lossless?
#7
Re: New compression method
Lossless. What would be the point of compressing a data file if it came out corrupted when decompressed?
I am going to see if any large businesses are interested in this, and if not, I will make it open source; this is one of the reasons why I am trusting no one. Even if someone is trustworthy, there is still always that very small chance of it getting out.
#8
Re: New compression method
Well, why did you post anything at all if you are trusting no one? Without your algorithm, it is very difficult to help you.
Compression is like factoring. In factoring, you take a complex equation and can define it simply by its solutions. In compression you do a similar thing. However, you will eventually run into a floor no matter how good a compression system you use. This is because the information is still there, just in a compressed format.
I am guessing your algorithm has some sort of system for recording what it has done so that it can be undone. That record takes up space. The more times you compress, the closer the file gets to being "prime" with respect to the algorithm. Eventually you reach a point where the information needed to expand the file makes it large enough that compressing again will not make the whole thing any smaller. So basically what Ryan Morehart said, but in more generic terms.
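Here is a sketch of that floor in practice (Python with the standard zlib module; the repetitive input text is just an example):

import zlib

data = ("the same line of text, over and over\n" * 20_000).encode()
sizes = [len(data)]

# Keep recompressing until the output stops shrinking -- the "floor".
# (A real multi-pass scheme would also have to record how many passes
# were applied, which itself costs space.)
while True:
    smaller = zlib.compress(data)
    sizes.append(len(smaller))
    if len(smaller) >= len(data):
        break
    data = smaller

print(sizes)   # typically one big drop, then the size refuses to go down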
#9
Re: New compression method
I understand the problem of you not being able to help me because I'm not releasing how it works. Is there any way I could make it open source for a little while until I am able to open it up for commercial use? I want to be able to keep the theorem if I ever decide to make money with it, and I want to make sure no one can steal it. Would making it open source keep it from being stolen? I've looked into patents, but there is no way I can afford the $4000 to get one and the $1500 to keep it updated every couple of years. If anyone has a link or something to help me out here, please post it.
I'll be happy to post all the information about it as soon as it's safe. And I do understand that this seems impossible, but trust me, it's not.
#10
Re: New compression method
Well, here's a site which lists the most commonly used open source licenses. Read through them and see what you like. Make sure you choose one which prevents the commercial reuse of the source code.
Edit: Hm, actually, according to them, "open source" licenses do not prevent commercial use. Whatever...
Last edited by Ryan M. : 07-09-2004 at 16:52.
#11
Re: New compression method
Go to www.maximumcompression.com and run your utility against their test files. You'll be able to compare your results against a fairly large set of benchmarks. Post your results. If you really beat those benchmarks, then you'll need a few volunteers to verify your results. For that you can distribute a binary without source, under a non-disclosure agreement.
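As a rough sketch of the kind of harness that could produce those numbers (Python; the "testfiles" directory is a placeholder, and zlib stands in for whatever utility is actually being tested):

import os, zlib

TEST_DIR = "testfiles"   # placeholder: wherever the benchmark files were downloaded to

for name in sorted(os.listdir(TEST_DIR)):
    path = os.path.join(TEST_DIR, name)
    if not os.path.isfile(path):
        continue
    original = open(path, "rb").read()
    compressed = zlib.compress(original, 9)   # swap in the utility under test here
    ratio = len(compressed) / len(original) if original else 1.0
    print(f"{name}: {len(original)} -> {len(compressed)} bytes ({ratio:.1%})")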
#12
Re: New compression method
Alright, let me rebuild my program (7 days max) because I have a couple of new ideas I would like to try out with it. I will post the scores by then or earlier. Hopefully I will be able to get it done a lot sooner, but it depends on how much work I have.
#13
Re: New compression method
Quote:
#14
Re: New compression method
Quote:
A few definitions first: Let c be the compression function, so c(f) represents the compressed version of f. Let d be the decompression function, so d(e) represents the decompressed version of e. Also, since the compression is lossless, we have d(c(p)) = p for all p.

Lemma 1: In a lossless compression scheme, at most 2^k different files can be represented by k bits.

Proof: Assume for the sake of contradiction that 2^k + 1 distinct files can be represented in k bits. Since there are only 2^k different possible bit strings of length k, by the pigeonhole principle we can conclude that two distinct files (say, f1 and f2) compress to the same output e. Formally, we have:

c(f1) = c(f2) = e, and f1 <> f2 (I use <> for not equal)

Applying the function d to both sides, we get:

d(c(f1)) = d(c(f2)) = d(e)

By our cancellation law, we get f1 = f2, but this contradicts the assumption that f1 <> f2. Thus, the lemma holds.

Applying lemma 1 to the range of sizes given, we trivially conclude that at most

2^(8*508) + 2^(8*509) + 2^(8*510) + 2^(8*511) + 2^(8*512) + 2^(8*513) + 2^(8*514) + 2^(8*515) = 2^(8*508) * (1 + 2^8 + 2^16 + 2^24 + 2^32 + 2^40 + 2^48 + 2^56) < 2^(8*508) * 2^57 = 2^4121

distinct files can be compressed to sizes between 508 and 515 bytes. Now, there are 2^(8*1024*1024) = 2^8388608 files of size 1 MB. With a little division, we see that at most one in every 2^8388608 / 2^4121 = 2^8384487 of them can be compressed to within that range. AT THE MOST. For reference, there are approximately 2^265 atoms in the entire universe.

On the other hand, it is perfectly possible that your compression is amazing for certain types of files. It just isn't mathematically possible to be _that_ amazing for an arbitrary file.

EDIT: Before someone starts talking about how the lemma breaks down because it doesn't take into account the fact that you're running the algorithm multiple times, consider the following: Let c be the one-pass compressor and d be the one-pass decompressor. Then we can write (in pseudocode):

function cmult(f){
    next = c(f)
    while( size(next) < size(f) )
        f = next
        next = c(f)
    endwhile
    return f
}

Thus, cmult will continually compress using whatever one-pass algorithm you choose until the output stops getting smaller. Then just replace c with cmult in the proof of the lemma.

Last edited by rbayer : 07-09-2004 at 23:37. Reason: repeated function application note
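For anyone who wants to check the counting step above, a small sketch in Python using exact integer arithmetic:

# Compare the number of possible 508..515-byte outputs with the number of
# possible 1 MB inputs, exactly as in the argument above.
outputs = sum(2 ** (8 * n) for n in range(508, 516))   # every possible output
inputs = 2 ** (8 * 1024 * 1024)                        # every possible 1 MB file

assert outputs < 2 ** 4121     # the bound used in the post
print(outputs.bit_length())    # 4121 -- all outputs fit below 2**4121
print(inputs.bit_length())     # 8388609 -- there are 2**8388608 distinct inputs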
#15
Re: New compression method
In case any of you are still in doubt about this "new compression scheme", I encourage you to read this discussion of exactly this matter: (http://www.faqs.org/faqs/compression...section-8.html)
A quote from this document: Quote: