#16
07-09-2004, 20:02
FizMan
aboot, eh?
AKA: Chris Sinclair
#0783 (Mobotics)
Team Role: Alumni
Join Date: Feb 2004
Location: Toronto, Canada
Posts: 102
Re: New compression method

Turns out this entire time Aalfabob has been compressing files with the same sequence of information repeated over and over again.

j/k So are you consistently compressing files to around 508-515 bytes despite major differences in their uncompressed size? A 5 meg file would compress to 514 bytes, and a 500 KB file would also compress to, say, 510 bytes? I find that very fascinating...
__________________
Joules per second! Watt? Joules per second! Watt? Jouls per second! Watt?
#17
07-09-2004, 20:28
Aalfabob
Registered User
#0201 (FEDS)
Join Date: Sep 2004
Rookie Year: 2004
Location: Rochester, MI
Posts: 27
Re: New compression method

Quote:
Originally Posted by FizMan
Turns out this entire time Aalfabob has been compressing files with the same sequence of information repeated over and over again.

j/k So are you consistently compressing files to around 508-515 bytes despite major differences in their uncompressed size? A 5 meg file would compress to 514 bytes, and a 500 KB file would also compress to, say, 510 bytes? I find that very fascinating...
Yep. The only thing that changes with the file's size is how many times it needs to be run through the compressor to get the same result as, say, a file half its size. To keep track of this, two bytes are placed at the front of the file as a header recording how many times it has been compressed (up to 65,535 times).

The last time I ran it on a 1 meg file, it took around 818 runs to get down to 515 bytes, but most of those runs were spent between 2,500 bytes and 515 bytes, since it was only gaining 1-10 bytes per try. I made some graphs from the logs the program puts out. If I can get my friend to give me some space on his server again, I can post them.

Right now I'm reprogramming the compression portion of the program, because the last one was pretty slow due to the way I was reading from the file. The new one pretty much just has to run a couple of if-then statements and change a byte. Hopefully it will also beat today's common compressors in speed, but I'll have to see how well I can get it programmed. This next approach should also be able to chop off around 2%-5% of the file every pass.
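To make the recursive-pass bookkeeping concrete, here is a minimal sketch of a wrapper that keeps a two-byte pass counter at the front of the file, as described above. The compress_once and decompress_once functions are hypothetical placeholders, not the actual program:

import struct

def pack(data: bytes, compress_once) -> bytes:
    # Apply a (hypothetical) one-pass compressor until it stops helping, and
    # record how many passes were applied in a 2-byte big-endian header.
    passes = 0
    while passes < 65535:
        out = compress_once(data)
        if len(out) >= len(data):        # no gain this pass, so stop
            break
        data = out
        passes += 1
    return struct.pack(">H", passes) + data

def unpack(blob: bytes, decompress_once) -> bytes:
    # Read the pass count back out and undo that many passes.
    passes = struct.unpack(">H", blob[:2])[0]
    data = blob[2:]
    for _ in range(passes):
        data = decompress_once(data)
    return data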

edit - I'll just put up the graphs here to give you an idea what I'm talking about. Also, the second chart actually loses some compression sometimes, but the next pass usually gains it back.
Attached Thumbnails: Graph1.jpg, Graph2.jpg

Last edited by Aalfabob : 07-09-2004 at 20:41.
#18
07-09-2004, 21:07
Max Lobovsky
Fold em oval!
FRC #1257 (Parallel Universe)
Team Role: College Student
Join Date: Jan 2004
Rookie Year: 2004
Location: Scotch Plains, NJ
Posts: 1,026
Re: New compression method

In case any of you are still in doubt about this "new compression scheme", I encourage you to read this discussion of exactly this matter, (http://www.faqs.org/faqs/compression...section-8.html)

A quote from this document:
Quote:
It is mathematically impossible to create a program compressing without loss *all* files by at least one bit (see below and also item 73 in part 2 of this FAQ). Yet from time to time some people claim to have invented a new algorithm for doing so. Such algorithms are claimed to compress random data and to be applicable recursively, that is, applying the compressor to the compressed output of the previous run, possibly multiple times. Fantastic compression ratios of over 100:1 on random data are claimed to be actually obtained.
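The counting argument behind that first sentence is easy to check for yourself; a throwaway illustration (mine, not the FAQ's):

# There are 2**n bit strings of length n, but only 2**n - 1 bit strings of all
# shorter lengths combined, so no lossless scheme can shrink every n-bit input.
n = 16
inputs  = 2**n
shorter = sum(2**k for k in range(n))    # lengths 0 through n-1
print(inputs, shorter)                   # 65536 vs 65535: one input has nowhere to go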
__________________
Learn, edit, inspire: The FIRSTwiki.
Team 1257


2005 NYC Regional - 2nd seed, Xerox Creativity Award, Autodesk Visualization Award
2005 Chesapeake Regional - Engineering Inspiration Award
2004 Chesapeake Regional - Rookie Inspiration award
2004 NJ Regional - Team Spirit Award
#19
07-09-2004, 21:53
Aalfabob
Registered User
#0201 (FEDS)
Join Date: Sep 2004
Rookie Year: 2004
Location: Rochester, MI
Posts: 27
Re: New compression method

Quote:
Originally Posted by Max Lobovsky
In case any of you are still in doubt about this "new compression scheme", I encourage you to read this discussion of exactly this matter, (http://www.faqs.org/faqs/compression...section-8.html)

A quote from this document:
Wow, I had no clue anyone else was claiming they could do this. I've read the paper, and from what I understand my method does not fit his argument for the most part. It uses no fancy math and no special numbers, which I'm guessing is what most of those people try to work with because they're *special*, with no real proof or reason to think they would actually do anything. I have to admit, though, a lot of the flawed processes he talks about are ones I had thought about at one time and turned down, because it was easy to see they were flawed. I check my ideas from start to finish before I go around claiming I invented something that works.

Another argument he made was that some of these methods used groups of bits to show compression. Most of those methods are badly flawed because there was no proof they could ever be reversed. But I do know for a fact that a computer can tell whether a group of bits starts with 1, 00, or 01, which makes them easy to separate. There is also another method I made that is a way to separate the bytes, but I'll explain that in my post when my program is finished (it was thrown away because I found a better, faster way).
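He is right on the narrow point that 1, 00, and 01 form a prefix-free set, so a stream built only from those three codes can be split back apart unambiguously. A toy illustration of just that splitting step (not his method, and it does not by itself compress anything):

def split_codes(bits: str) -> list[str]:
    # Split a string of 0s and 1s into the prefix-free codes 1, 00, 01.
    out, i = [], 0
    while i < len(bits):
        if bits[i] == "1":
            out.append("1")
            i += 1
        else:                            # a leading 0 means a two-bit code follows
            out.append(bits[i:i + 2])
            i += 2
    return out

print(split_codes("100011"))             # ['1', '00', '01', '1']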

If this is really just about whether I'm lying, just give me the 7 days I posted earlier and the new program will be finished. But just because some guy writes a paper on whether this is possible does not mean he knows 100% what he's talking about. I am positive he has not tested every single way you can compress a file, which makes his argument that there is no possible way invalid. Every once in a while something *impossible* happens; I'm sure you can think of hundreds of examples. Please just give me the time to prove this. I think the short wait of only 7 days isn't going to kill you. And if you still really think it's so impossible that there's no chance at all, you can leave this thread and forget all about it.

edit - By the way, when I finish the program I will post a link to a video of it running, and if that's not enough, because some people are going to say "ah, that can be easily faked", I'll post some other way it can be proven, unless I somehow get the money for a temporary patent, in which case I'll just put the source and the method up for everyone.

Last edited by Aalfabob : 07-09-2004 at 22:02.
#20
07-09-2004, 22:43
Rickertsen2
Umm Errr...
None #1139 (Chamblee Gear Grinders)
Team Role: Alumni
Join Date: Dec 2002
Rookie Year: 2002
Location: ATL
Posts: 1,421
Re: New compression method

Quote:
Originally Posted by Aalfabob
Wow, I had no clue anyone else was claiming they could do this. I've read the paper, and from what I understand my method does not fit his argument for the most part. It uses no fancy math and no special numbers, which I'm guessing is what most of those people try to work with because they're *special*, with no real proof or reason to think they would actually do anything. I have to admit, though, a lot of the flawed processes he talks about are ones I had thought about at one time and turned down, because it was easy to see they were flawed. I check my ideas from start to finish before I go around claiming I invented something that works.

Another argument he made was that some of these methods used groups of bits to show compression. Most of those methods are badly flawed because there was no proof they could ever be reversed. But I do know for a fact that a computer can tell whether a group of bits starts with 1, 00, or 01, which makes them easy to separate. There is also another method I made that is a way to separate the bytes, but I'll explain that in my post when my program is finished (it was thrown away because I found a better, faster way).

If this is really just about whether I'm lying, just give me the 7 days I posted earlier and the new program will be finished. But just because some guy writes a paper on whether this is possible does not mean he knows 100% what he's talking about. I am positive he has not tested every single way you can compress a file, which makes his argument that there is no possible way invalid. Every once in a while something *impossible* happens; I'm sure you can think of hundreds of examples. Please just give me the time to prove this. I think the short wait of only 7 days isn't going to kill you. And if you still really think it's so impossible that there's no chance at all, you can leave this thread and forget all about it.

edit - By the way, when I finish the program I will post a link to a video of it running, and if that's not enough, because some people are going to say "ah, that can be easily faked", I'll post some other way it can be proven, unless I somehow get the money for a temporary patent, in which case I'll just put the source and the method up for everyone.
You have piqued my interest, but I won't believe anything till I see it. Honestly, just think about what you are claiming and give us a reason why we should believe you and not just think you're a nut. We have been given lots of empty promises but no hard evidence. One question... why did you choose these forums to post your *discovery*? Sure, we are all nerds here and many of us are interested in this sort of thing, but I am sure there are forums dedicated to compression. Why Chief Delphi? I truly would like to believe that your claims are real, but until I see proof you are a nut in my book. This reminds me of the human cloning people a while back who suddenly vanished into the ethersphere when asked to prove what they had done. I have seen too many things like this that all turn out to be nothing. If I were claiming what you are claiming, you would think I was nuts too, so you can't really blame us. I'll admit that many things that were once dismissed as utterly impossible are now integral parts of our everyday lives. Maybe this is one of them, but somehow I am doubtful. I sincerely hope you prove me wrong.

I'm not really sure what I'm talking about and I may be completely wrong, but could this possibly fall under a copyright, which is a much easier process? I think there is even some sort of implied copyright on everything that you don't actually have to file for, but it won't hold up in court as well as an official one. I looked into this at one point a long time ago, but I don't really remember much.
__________________
1139 Alumni

Last edited by Rickertsen2 : 07-09-2004 at 23:04.
#21
07-09-2004, 22:56
Aalfabob
Registered User
#0201 (FEDS)
Join Date: Sep 2004
Rookie Year: 2004
Location: Rochester, MI
Posts: 27
Re: New compression method

I understand exactly what you are saying, and I would feel the same way. The reason I came to this site is that I was having trouble finding an actual forum on file compression. So I talked to one of my friends, and he said this would be a good site, so I made a post.

The only way to give everyone hard proof of this would be to give someone the decompressor and have them provide me with the file that needs compressing. This is the only fair way I can think of that proves it works while keeping how it works a secret until I can actually make the idea *safe*. Stay patient and I will have a fast, working program ready for this.
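For what it's worth, the check the other party would run in that kind of challenge is simple. A sketch, with compress and decompress standing in for whatever programs actually get handed over:

def fair_test(compress, decompress, challenge: bytes) -> bool:
    # The challenger supplies the file; the claim only holds if the round trip
    # is exact and the compressed form is genuinely smaller.
    packed = compress(challenge)
    return decompress(packed) == challenge and len(packed) < len(challenge)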
#22
07-09-2004, 23:03
rbayer
Blood, Sweat, and Code
no team (Teamless Orphan)
Join Date: Mar 2002
Rookie Year: 2001
Location: Minnetonka, MN
Posts: 1,087
Re: New compression method

Quote:
Originally Posted by Aalfabob
Alright, first off, I'm not trying to pull anything. Why would I create an account and waste my time messing around with people? A friend gave me this site so I could ask a few people whether they think it would be possible. Second, my compression theorem isn't like the others: it doesn't run on how many times certain characters show up in the file as a whole. This makes it able to recompress the same file over and over again, almost always gaining compression. This also means it can work on any type of file: .zip, .exe, .jpg, etc. But it does reach a limit in the file sizes it can reach; with the current program I have made, it can compress any file type and size down to 508 bytes, and it usually fluctuates between 508 and 515 bytes. Just because a file is larger than another doesn't mean it cannot hit this limit; it just means more attempts must be made to reach it. I have some data charts if anyone wishes to see them.
Ah, now you've made a _provably_ false claim. And here I go with my proof (because I have nothing better to do and am extremely bored right now):

A few definitions first:

Let c be the compression function, so c(f) represents the compressed version of f.
Let d be the decompression function, so d(e) represents the decompressed version of e.
Also, since the compression is lossless, we have d(c(p)) = p for all p.

Lemma 1: In a lossless compression scheme, at most 2^k different files can be represented by k bits.
Proof: Assume for the sake of contradiction that 2^k + 1 distinct files can be represented in k bits. Since there are only 2^k different possible bit strings of length k, by the pigeonhole principle we can conclude that two distinct files (say, f1 and f2) compress to the same output e. Formally, we have:
c(f1) = c(f2) = e, and f1 <> f2 (I use <> for not equal)
Applying the function d to both sides, we get:
d(c(f1)) = d(c(f2)) = d(e). By our cancellation law, we get:
f1 = f2, but this contradicts the assumption that f1 <> f2. Thus, the lemma holds.

Applying Lemma 1 to the range of sizes given, we trivially conclude that at most
2^(8*508) + 2^(8*509) + 2^(8*510) + 2^(8*511) + 2^(8*512) + 2^(8*513) + 2^(8*514) + 2^(8*515)
= 2^(8*508) * (1 + 2^8 + 2^16 + 2^24 + 2^32 + 2^40 + 2^48 + 2^56)
< 2^(8*508) * 2^57
= 2^4121

distinct files can be compressed to sizes between 508 and 515 bytes.

Now, there are 2^(8*1024*1024) files of size 1 MB. With a little division, we see that at most one in every
2^8388608 / 2^4121 = 2^8384487

of them can be compressed to within that range. AT THE MOST. For reference, there are approximately 2^265 atoms in the entire universe.
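If anyone wants to double-check those numbers, Python's exact big integers make it a one-minute exercise (an illustration only, nothing to do with the compressor itself):

# Count every possible file of 508..515 bytes, and every possible 1 MB file.
short_files = sum(2**(8 * n) for n in range(508, 516))
one_mb_files = 2**(8 * 1024 * 1024)

print(short_files.bit_length())                    # 4121, so short_files < 2^4121
print((one_mb_files // short_files).bit_length())  # 8384488, i.e. roughly 2^8384487 to 1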



On the other hand, it is perfectly possible that your compression is amazing for certain types of files. It just isn't mathematically possible for it to be _that_ amazing for an arbitrary file.


EDIT:
Before someone starts talking about how the lemma breaks down because it doesn't take into account the fact that you're running the algorithm multiple times, consider the following:
Let c be the one-pass compressor, and d be the one-pass decompressor. Then we can write (in pseudocode):
def cmult(f):
    # Keep applying the one-pass compressor until the output stops shrinking.
    while True:
        g = c(f)
        if len(g) >= len(f):
            return f
        f = g

Thus, cmult will continually compress using whatever algorithm you choose until the output stops getting smaller. Then just replace c with cmult in the proof of the lemma.
__________________
New C-based RoboEmu2 (code simulator) available at: http://www.robbayer.com/software.php

Last edited by rbayer : 07-09-2004 at 23:37. Reason: repeated function application note
#23
07-09-2004, 23:09
FizMan
aboot, eh?
AKA: Chris Sinclair
#0783 (Mobotics)
Team Role: Alumni
Join Date: Feb 2004
Location: Toronto, Canada
Posts: 102
Re: New compression method

lolk d00d, I'd be more than happy to test a binary executable or something I couldn't haxx0r the source code out of... heck, the limit of my haxxing skills is doing a whois search on someone's IP.

Out of curiosity: you mentioned that the greater the file size, the more iterations it has to go through to reach a file size around 500+ bytes, and that the time it takes grows exponentially. So what's the difference in time between a 10 meg file and, say, a 10 gigabyte file? You said it could do any file type/size... if that's the case, it could totally revolutionize data storage if you can store gigabytes of information as, say, 10 megs instead (or heck, if you've got the raw power, 500+ bytes).

Though to be honest, I'd really like to see this in action myself, because, no offense to you (I'm sure you'd be just as skeptical if someone else said it), it seems quite the feat to compress 1,000,000,000 bytes into 515 bytes.

And do you know why the limit is 500+ bytes?

Can it currently compress directory information/batch files? I suppose that's a trivial matter to take care of, but I don't know what the foundation of your code is. Or would it end up turning a directory of 500 files into 250,000 bytes? (500 files x 500 bytes)

And what happens if you try to compress a file that's already LESS than 500 bytes? roffle, utterly trivial too... but like I said, I'm curious.


EDIT:

Don't listen to these non-believers!!!!!!!!11111 I HAVE FAITH IN YOU!!!!11 (actually... just taking the two minutes to show interest and support in the small possibility that I could end up getting a few bucks out of it when this program makes millions and billions of dollars...)

For the benefit of the doubt... who knows? Maybe there is a way around it all...

>.>
<.<
(remember me when you make your billions)
__________________
Joules per second! Watt? Joules per second! Watt? Jouls per second! Watt?

Last edited by FizMan : 07-09-2004 at 23:14.
#24
08-09-2004, 06:14
Ryan M.
Programming User
FRC #1317 (Digital Fusion)
Team Role: Programmer
Join Date: Jan 2004
Rookie Year: 2004
Location: Ohio
Posts: 1,508
Re: New compression method

Quote:
Originally Posted by Rickertsen2
I'm not really sure what I'm talking about and I may be completely wrong, but could this possibly fall under a copyright, which is a much easier process? I think there is even some sort of implied copyright on everything that you don't actually have to file for, but it won't hold up in court as well as an official one. I looked into this at one point a long time ago, but I don't really remember much.
He could copyright any source code he releases (an open source license is essentially a detailed copyright) or any document detailing the algorithm, but I don't think it would actually protect the algorithm itself. I may be wrong about that.

Putting a copyright notice on it is easy, at least. Just stick on a "Copyright 2004 by Aalfabob (real name, obviously)." And the best part is, you can do that for free. No registration, nothing.
__________________


Last edited by Ryan M. : 08-09-2004 at 06:45.
#25
08-09-2004, 11:45
ErichKeane
Registered User
FRC #3210
Team Role: Mentor
Join Date: Nov 2003
Rookie Year: 2004
Location: Hillsboro, OR
Posts: 113
Re: New compression method

A few things: newer compression methods aren't built around raw compression ability alone. Even if you COULD do what you claim, which I highly doubt, it would not be very worthwhile.

Right now, hard drive space is cheap. Time is not cheap. So if your algorithm takes, say, 2 hours to turn my 1 GB file into a 500-byte file, and WinRAR turns it into 400 MB in 3 minutes, what is the advantage? Sure, I saved about 400 MB, but I lost 2 hours! Also, you can get a hard drive for a buck a gig, so what's the point?

Also, most people who care about large files have broadband. I personally downloaded the entire Mandrake Linux pack a few years ago on a 1 Mbit connection; I just let it run overnight. Consider this: the ISO files totaled about 2 GB and took me overnight to download, so no big deal. Using WinRAR, I could possibly have gotten those 2 GB down to about 1.3 GB, but it would have taken 20 minutes or so.

Now, given that you say the algorithm's runtime grows exponentially, I would have no problem assuming this compression/decompression would take longer than it took to download the whole file on my DSL connection.

NOW, move that file across my 10 Mbit connection at college, and compression becomes useless. It used to be that the order of priorities for compression was: size, speed, packageability. Now the order is: packageability, size, speed.

So unless your "algorithm" can turn the 2 GB of files into a 500-byte file within 20 minutes, it would not be worthwhile.
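To put rough numbers on that trade-off: here's a back-of-the-envelope sketch using the sizes and link speeds mentioned above, with the two-hour figure for the claimed compressor being a pure assumption on my part:

def total_minutes(compress_min: float, size_mb: float, link_mbit_s: float) -> float:
    # Time spent compressing plus time spent pushing the result over the wire.
    transfer_min = size_mb * 8 / link_mbit_s / 60
    return compress_min + transfer_min

# 2 GB of ISOs: sent raw, sent after ~20 min of WinRAR (~1.3 GB), or sent after
# an assumed 2-hour run of the claimed compressor (~500 bytes).
for link in (1, 10):                                 # DSL vs. campus link, Mbit/s
    raw = total_minutes(0, 2048, link)
    rar = total_minutes(20, 1331, link)
    claimed = total_minutes(120, 0.0005, link)
    print(f"{link:2d} Mbit/s: raw {raw:4.0f} min, rar {rar:4.0f} min, claimed {claimed:4.0f} min")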
#26
08-09-2004, 12:34
FizMan
aboot, eh?
AKA: Chris Sinclair
#0783 (Mobotics)
Team Role: Alumni
Join Date: Feb 2004
Location: Toronto, Canada
Posts: 102
Re: New compression method

Sure it would be worthwhile... I mean, imagine being able to store an entire hard drive on, say, your USB pen, or on a floppy. Set your computer to make weekly (or nightly) backups overnight to a 1 KB file. And as processors keep evolving, compression times will keep dropping. You probably wouldn't need to compress your files down to 500 bytes, either... bringing them down to, say, 10 megs would be just as ideal (or 1+ megs for your floppy), or to 700 megs to toss onto a CD.

It would naturally change many industries too, especially if the processing time could be reduced under professional development. Transfer full-length feature films to theatres for digital real-time screening... not to mention all those 56k'ers out there. Mmmm, piracy.
__________________
Joules per second! Watt? Joules per second! Watt? Jouls per second! Watt?
#27
08-09-2004, 12:35
Fulcrum2000
Registered User
no team
Join Date: Sep 2004
Location: Netherlands
Posts: 7
Re: New compression method

This claim is total BS. Iterative compression simply isn't possible, as it will *increase* file size instead of reducing it.
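You can see the effect with any off-the-shelf compressor; zlib stands in here, purely as an illustration. The first pass shrinks repetitive input dramatically, but its output looks close to random, so further passes typically come back slightly larger rather than smaller:

import zlib

text = b"The quick brown fox jumps over the lazy dog. " * 5000
once = zlib.compress(text, 9)
twice = zlib.compress(once, 9)
thrice = zlib.compress(twice, 9)
print(len(text), len(once), len(twice), len(thrice))   # pass 1 shrinks it; passes 2 and 3 do not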

Regards,
Werner Bergmans
Webmaster http://www.maximumcompression.com/
#28
08-09-2004, 13:24
ErichKeane
Registered User
FRC #3210
Team Role: Mentor
Join Date: Nov 2003
Rookie Year: 2004
Location: Hillsboro, OR
Posts: 113
Re: New compression method

I'm calling BS too. As I said before, even if you could compress stuff really well, it wouldn't be worth the time.

Processing power is expensive: it takes a TON of money to get a processor that could run an iterative compression method in any reasonable amount of time.

Storage, on the other hand, is relatively cheap. You said that making nightly backups would be the way to go; what happens if a nightly backup takes, say, 16 hours? That makes it not worth it, especially once you factor in transfer time. It is just as cheap to set up a RAID array and just live with an extra HD around.

Compression was big a few years ago, but it really isn't that important right now. Most corporations won't even trust compression, because compressed files lack some of the failsafes that maintain the integrity of the data. With a compression algorithm, it is possible to ruin the entire file with a relatively small amount of loss (slightly above 1 byte). In the "method" described by the thread starter, I would assume that a loss of 4 bits could wreck the file completely.

In an uncompressed file, there is enough redundancy that it may take as much as 30% loss to make the file unrecoverable. The same cannot be said about a compressed file.
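The fragility point is easy to demonstrate with any mainstream compressed format; zlib stands in here, purely as an illustration (this says nothing about the thread starter's method). Flip a single bit in the middle of the stream and the decoder usually cannot recover anything:

import zlib

original = b"The quick brown fox jumps over the lazy dog. " * 200
damaged = bytearray(zlib.compress(original))
damaged[len(damaged) // 2] ^= 0x01                  # flip one bit in the middle

try:
    recovered = zlib.decompress(bytes(damaged))
    print("decoded anyway; matches original:", recovered == original)
except zlib.error as exc:
    print("decompression failed outright:", exc)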
#29
08-09-2004, 15:04
Ryan M.
Programming User
FRC #1317 (Digital Fusion)
Team Role: Programmer
Join Date: Jan 2004
Rookie Year: 2004
Location: Ohio
Posts: 1,508
Re: New compression method

I have to argue from the other side. Compression is good. For instance, I have a 120 GB HD and a 40 GB HD in my computer right now. (For those too lazy to add, that's 160 GB of total storage.) However, I still don't appreciate the 9 GB that 4 Fedora Core, 3 Mandrake, 3 Red Hat, 3 BSD, and a few different live CD ISOs are taking up.

Also, consider the people on dialup who pay by the minute. If you had the choice between a 100 MB download and a 0.5 KB download that were exactly the same thing, which would you go for? Sure, the decompression takes time, but you're not paying for the time online you would spend downloading the larger file. This would also be great for amateur developers/webmasters who use a cheap or free web server with bandwidth limits.
__________________

#30
08-09-2004, 15:08
Aalfabob
Registered User
#0201 (FEDS)
Join Date: Sep 2004
Rookie Year: 2004
Location: Rochester, MI
Posts: 27
Re: New compression method

OK, about these speed issues. This is the information I have compiled about the compressor so far:
It is able to run the compression routine over 7.5 megs in 1 second (it is actually a lot faster than this, but I'm leaving headroom in case I have any more ideas for the test version). Now, if the file were highly in the compressor's favor, it could reach around a 10.9% gain in that one second on those 7.5 megs. This means it would shorten the file from 7.5 megs to around 6.67 megs in close to 1 second, depending on your CPU and memory clock speed. This test was taken on a 1.47 GHz Athlon XP with 133 MHz memory.

Another thing about the time being exponentially larger: even though it takes more passes for a small gain, the data being compressed is much smaller by then, which gives a pretty constant change vs. time. If you look at the graphs I posted earlier, you can see they include no time, only passes vs. file size.

By the way, the file being in the compressor's 'favor' does not have a very great chance of happening on random files, but it still can occur. On files with characters near each other (text files, logs, etc.) it will be very close to its max gain per pass (10.9%). On average, my last build, which wasn't programmed very well, came to around 1.5% to 4% per pass on a .rar file, but the current build has been heavily modified to get higher compression per pass.

Oh, and here is another graph that shows that sometimes a file can actually get larger for a run, but in the end the method still trends toward getting smaller. (I have tested many files of different sizes and types; this is just one.)

edit - If you think these times are impossible: they come from the file being loaded into memory, and of course it takes larger files in chunks if memory is needed. The speed also comes from the compression process consisting of just a couple of if-thens, the qsort routine, and then the bytes being put back into memory. I expect a professional programmer would be able to cut these times down by a ton using assembly or faster libraries.
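As a sanity check on those figures, here is a toy model of "a fixed fraction removed per pass", using only the numbers quoted above (the 7.5 MB/s throughput and the 1.5%-10.9% per-pass gains); it is not the real program:

def passes_and_work(start_bytes: int, target_bytes: int, gain_per_pass: float):
    # How many passes, and how many total bytes get read, if every pass
    # removes a fixed fraction of the file's current size.
    size, passes, processed = float(start_bytes), 0, 0.0
    while size > target_bytes:
        processed += size
        size *= 1.0 - gain_per_pass
        passes += 1
    return passes, processed

for gain in (0.109, 0.04, 0.015):            # best case and the quoted average range
    p, work = passes_and_work(1_048_576, 515, gain)
    print(f"{gain:.1%} per pass: {p} passes, about {work / 7.5e6:.1f} s at 7.5 MB/s")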
Attached Thumbnails: Graph3.jpg

Last edited by Aalfabob : 08-09-2004 at 15:15.