View Full Version : Mac G4 render farm
I was looking on eBay when I found this (http://cgi.ebay.com/Lot-of-36-Apple-Power-Mac-G4-400Mhz-128MB-20GB-PowerMac_W0QQitemZ110022444218QQihZ001QQcategoryZ51035QQrdZ1QQcmdZViewItem).
I was just wondering if one would be able to make a fully functioning Maya render farm out of these.
If yes, how would this render farm perform? (After all, they are 400 MHz CPUs.)
How would this compare to the new 3.0 GHz Mac Pro (http://www.apple.com/macpro/)?
Thanks
David
(If you were wondering, I am not seriously considering buying this. The shipping to Israel alone would be around $12,000 :eek: )
One quad-Xeon Mac Pro with 2 GB of RAM will easily beat that farm of 36 G4s.
Intel really outdid itself with those Core 2 Duos... who would have thought they would pull it off?
Kyle Fenton
23-08-2006, 13:04
Actually, what will probably hinder them the most is the RAM, not the clock speed. If these machines had 1 GB or more of RAM and GigE cards, it wouldn't be that bad. That is assuming, though, that you have high-end switches and cables.
Cody Carey
23-08-2006, 13:27
Mac rendering farm:
400 MHz = 400 million instructions per second.
400 MHz x 36 = 14,400 MHz total.
14,400 MHz = 14.4 billion instructions per second.
Mac single computer:
3.0 GHz = 3,000 MHz.
3,000 MHz = 3 billion instructions per second.
Answer:
14.4 billion > 3 billion
Just my thoughts...
sanddrag
23-08-2006, 13:36
400 MHz x 36 = 1440 MHz total
I think you left off a zero there; 14,400 is what you mean.
I think you might have made a wrong calculation...
The new Mac Pros have two processors, each dual-core, with each core running at a clock speed of 3 GHz. So the calculation should be 4 x 3 GHz = 12 GHz = 12 billion instructions per second.
A quad 3.0 GHz Mac with 2 GB of memory is around the price of that lot of G4s. If you take only CPU clock speed into consideration, the lot of G4s might perform a little better. But when you estimate performance you also have to consider the memory, the graphics card, the processor specs (cache, front-side bus) and all the other components I do not know much about. Also, I think the G4s' performance might drop a little because of all the network traffic such a render farm would need.
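For what it's worth, here is that clock-only comparison written out as a small Python sketch. It assumes, as the posts above do, that clock speed maps directly to instructions per second; the replies below explain why that assumption is shaky.

# Naive clock-only tally (assumes 1 MHz = 1 million instructions per second,
# ignoring IPC, RAM, bus speed and everything else discussed later in the thread).
g4_clock_mhz = 400
g4_count = 36
farm_total_mhz = g4_clock_mhz * g4_count                 # 14,400 MHz aggregate

mac_pro_clock_mhz = 3000
mac_pro_cores = 4                                        # two dual-core Xeons
mac_pro_total_mhz = mac_pro_clock_mhz * mac_pro_cores    # 12,000 MHz aggregate

print(farm_total_mhz, "vs", mac_pro_total_mhz)           # 14400 vs 12000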
AdamHeard
23-08-2006, 14:21
Also take indirect costs into account.
Thirty-six machines will use a lot more power than one, which costs a lot of money. Also, who has room for 36 G4s?
Mac rendering farm:
400 MHz = 400 million instructions per second.
400 MHz x 36 = 14,400 MHz total.
14,400 MHz = 14.4 billion instructions per second.
Mac single computer:
3.0 GHz = 3,000 MHz.
3,000 MHz = 3 billion instructions per second.
Answer:
14.4 billion > 3 billion
Just my thoughts...
Clock speed does not determine performance. That is why AMD beat Intel until now. The new Xeons are far more advanced and can do far more instructions per clock than those G4s ever could.
The old 3 GHz P4s could do 10,000 MIPS.
Clock speed tells you nothing about a processor's MIPS or its overall performance.
Chris Marra
23-08-2006, 17:42
This would only be advantageous for a Maya or Cinema 4D setup where animation is involved. Each of the 36 machines can independently set up the entire scene, work on a separate frame, and then send it back to the master machine, which compiles them together. So even if it takes a Mac Pro 2 seconds to do 1 frame and one of these 20 seconds, the farm is still faster, because in 20 seconds you actually get 36 frames out of it, not just 10. As long as a Mac Pro is less than 18 times faster per core, these are technically faster.
However, under load a Mac Pro consumes about 200-250 W (http://www.anandtech.com/mac/showdoc.aspx?i=2816&p=19), versus at least 50 W per G4 (http://www.macalester.edu/its/faq/power_usage.html), which adds up to at least about 1,800 W for the farm, and most likely not more than 3,600 W. At roughly ten times the power consumption, you're paying a significant amount just to keep the farm running, and I doubt the G4s would be enough faster than a Mac Pro that buying a few extra Mac Pros wouldn't close the gap.
So now the real question is, how fast would a farm of 36 Mac Pro quads perform? :D
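Rough numbers for the per-frame reasoning and the power figures above, sketched in Python. The 2 s / 20 s frame times are the hypothetical ones from the post, the 250 W and 50 W draws come from the linked estimates, and the electricity rate is purely my own assumption.

# Hypothetical frame times from the post: 2 s/frame on the Mac Pro, 20 s/frame on one G4.
# Over a 20-second window:
mac_pro_frames = 20 / 2              # 10 frames from the single Mac Pro
g4_farm_frames = 36 * (20 / 20)      # 36 frames, one from each G4
print(g4_farm_frames > mac_pro_frames)                      # True: the farm wins at these speeds

# Power draw, using the figures linked above.
mac_pro_watts = 250
g4_farm_watts = 36 * 50              # ~1,800 W for the whole farm

# Running cost per 24-hour day, assuming $0.10 per kWh (my assumption,
# not a figure from the thread -- plug in your local rate).
rate_per_kwh = 0.10
print(round(mac_pro_watts / 1000 * 24 * rate_per_kwh, 2))   # ~0.60 dollars/day
print(round(g4_farm_watts / 1000 * 24 * rate_per_kwh, 2))   # ~4.32 dollars/day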
sanddrag
23-08-2006, 18:59
So now the real question is, how fast would a farm of 36 Mac Pro quads perform? :D
Not nearly fast enough for nearly half a million dollars (if they all had every top-of-the-line option), and not as fast as the $80,000 car you could buy instead (the cost of 36 bottom-of-the-line Mac Pros).
First, the number of CPUs on a render farm impacts performance more than the combined clock speed of the CPUs. Nine 1-GHz machines chained together will render much faster than three 3-GHz systems, so don't rule out a find on the basis of its wimpy-sounding processor speed.
Taken from this article (http://www.extremetech.com/article2/0,1697,1847365,00.asp) .
lukevanoort
24-08-2006, 00:10
I did a little New Egg browsing and noticed that one could build a system with an AMD Athlon 64 3000+, 1 GB of RAM, and a GeForce 7300 LE with 128 MB for ~$340. I wonder if 10 of those would outperform those 36 G4s? (Note: the $340 figure is actually $341.95; it doesn't include an OS or a CD/DVD drive, the assumption being that one could borrow a CD drive from another computer to do the software installs.)
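A quick budget sketch of that idea in Python. The ~$340 per node comes from the post above; the per-node speed factor versus a 400 MHz G4 is invented purely for illustration, not a benchmark.

# Cheap-node budget math. node_price is from the post; speed_vs_g4 is a
# made-up factor standing in for "newer chip, much faster per node".
node_price = 340
nodes = 10
total_cost = nodes * node_price          # ~$3,400 for ten nodes

speed_vs_g4 = 5                          # hypothetical: each node ~5x one 400 MHz G4
g4_equivalents = nodes * speed_vs_g4     # ~50 "G4s" worth of throughput

print(total_cost, g4_equivalents)        # 3400 50 -> more than the 36-machine lot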
This would only be advantageous for a Maya or Cinema 4D setup where animation is involved. Each of the 36 machines can independently set up the entire scene, work on a separate frame, and then send it back to the master machine, which compiles them together. So even if it takes a Mac Pro 2 seconds to do 1 frame and one of these 20 seconds, the farm is still faster, because in 20 seconds you actually get 36 frames out of it, not just 10. As long as a Mac Pro is less than 18 times faster per core, these are technically faster.
I don't know for sure, but I would say a brand-new 3 GHz quad core is probably way more than 10 times faster than an old 400 MHz single core. It only has to be 9 times faster per core (plausible, given the clock speed plus the newer core). You also have to look at RAM bandwidth, the bus, and a lot of other things.
Remember, there is a lot more to a processor than just clock speed nowadays.
addictedMax
24-08-2006, 17:02
I don't know for sure, but I would say a brand-new 3 GHz quad core is probably way more than 10 times faster than an old 400 MHz single core. It only has to be 9 times faster per core (plausible, given the clock speed plus the newer core). You also have to look at RAM bandwidth, the bus, and a lot of other things.
Remember, there is a lot more to a processor than just clock speed nowadays.
And Intels are good at things like rendering.
And Intels are good at things like rendering.
Usually because the software is optimized for them... otherwise AMDs are not bad either.
Cody Carey
24-08-2006, 19:50
Clock speed does not determine performance. That is why AMD beat Intel until now. The new Xeons are far more advanced and can do far more instructions per clock than those G4s ever could.
The old 3 GHz P4s could do 10,000 MIPS.
Clock speed tells you nothing about a processor's MIPS or its overall performance.
No, they don't; the clock speed determines how many instructions per second. Period. Just because the new Xeons are more "advanced" doesn't change the definition of megahertz. The farm of G4s does more instructions per second (not considering other system specs, just the processors) than the Xeon does, even if you take the four logical processors into consideration...
The average studio-quality scene takes an hour per frame to render. If the Xeon takes an hour to render it and the G4s take 3 hours each, the G4s still win, because in 3 hours the Xeon will have rendered 3 frames and the G4s will have rendered 36.
In standard rendering programs (like the default scanline renderer in 3ds Max), three of the four cores in a quad-core processor are wasted, because the renderer only uses one. I have no idea about the standard renderers for Maya, but I wouldn't imagine they are much different.
The Xeon box would have to be more than 36 times faster than one of those G4s to have better rendering capabilities, and that just isn't plausible.
Capt.ArD
24-08-2006, 20:50
Edit: double post.
Capt.ArD
24-08-2006, 21:01
Agreed, things other than clock speed influence the performance of a computer. Bus width allows more simultaneous calculations and therefore increases the net work done. Dual cores act essentially as a tiny render farm on one chip and do the same thing. It's not about the clock, men; there are more important things...
If the Xeon takes an hour to render it, and the G4s take 3 hours each
Just a nitpick: a one-hour Xeon scene would probably take a 400 MHz processor five to seven hours to do, probably more. We strive for accuracy...
(not considering other system specs, just processor)
Don't dismiss that. The discussion is about which computer will render faster, not which processor. RAM and graphics cards play just as important a role in this as the processor.
In standard rendering programs (like the default scanline renderer in 3ds Max), three of the four cores in a quad-core processor are wasted, because the renderer only uses one. I have no idea about the standard renderers for Maya, but I wouldn't imagine they are much different.
Scanline does go faster with dual cores, BTW. And mental ray adds buckets for more cores; watch a dual-core render in MR and you'll see TWO boxes going at it.
Cody Carey
24-08-2006, 21:15
Just a nitpick: a one-hour Xeon scene would probably take a 400 MHz processor five to seven hours to do, probably more. We strive for accuracy...
OK, I'm sorry for my inaccuracies. In 7 hours the Xeon would only have 7 frames done and the render farm would have 36... the Xeon still loses, which is the point I was trying to make.
Don't dismiss that. The discussion is about which computer will render faster, not which processor. RAM and graphics cards play just as important a role in this as the processor.
By your own admission in the first quote in this post, the render farm will still render faster. Don't sidestep.
Scanline does go faster with dual cores, BTW. And mental ray adds buckets for more cores; watch a dual-core render in MR and you'll see TWO boxes going at it.
I didn't mention a word about Mental Ray.
Now, on to what I actually said, and not what was brought to the table by people other than myself: can you show me documentation that says the default scanline renderer provided with Max uses all the processors present in a computer? Because I have timed a render on our quad-core 2.8 at school, and it did no better than my single-core 2.8 at home.
codyc, MHz has very little to do with how much gets done. IPC, or instructions per clock cycle, is what determines how much work a processor can do per MHz. Just look up some benchmarks of a lower-clocked processor like a Dothan (Pentium M) or Conroe (Core 2 Duo) and compare them to a higher-clocked processor like a Prescott (Pentium 4); you will find that a Conroe at 2 GHz can compete with a Presler (Pentium D) at 5+ GHz. A lower-clocked chip with a very high IPC is better than a higher-clocked chip with a low IPC. High MHz means lots of heat and wasted energy; that is why the Pentium 4 turned into a disaster while the Athlon 64 kicked butt.
There are still many other factors such as bandwidth, latency, cache, ARCHITECTURE, etc.
Unless you are comparing the MHz of the exact same chip, you are making a hopeless apples-to-oranges comparison. Even if you are comparing the same chip, performance does not scale linearly with MHz; a 4 GHz chip is not going to be twice as powerful as the same chip clocked at 2 GHz.
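To put that in concrete terms, here is a toy Python sketch of the clock-times-IPC point. The IPC numbers are invented purely to illustrate the shape of the argument; they are not measured values for any real chip.

# Rough throughput ~ clock * IPC (instructions per clock).
# The IPC values below are invented for illustration only.
def rough_mips(clock_mhz, ipc):
    return clock_mhz * ipc

low_clock_high_ipc = rough_mips(2000, 3.0)       # a modern 2 GHz core   -> 6000 "MIPS"
high_clock_low_ipc = rough_mips(3800, 1.5)       # an older 3.8 GHz core -> 5700 "MIPS"

print(low_clock_high_ipc > high_clock_low_ipc)   # True: the lower-clocked chip wins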
Capt.ArD
24-08-2006, 22:51
By your own admission in the first quote in this post, the render farm will still render faster. Don't sidestep.
Who's sidestepping? It's a valid point.
I didn't mention a word about Mental Ray.
I did. It's also a valid point. Sorry if MR kicks scanline out the window.
There are still many other factors such as bandwidth, latency, cache, ARCHITECTURE, etc.
Thank you, this is what I was trying to say. It's not the clock or how many processors you have; it's the overall setup.
Cody Carey
25-08-2006, 00:41
In this particular setup, however, the G4s would far outperform the Xeon.
theycallhimtom
25-08-2006, 00:51
Sure, the render farm would be faster than one single computer. But consider other things: there needs to be a network, so add the cost of that, plus the space for 36 computers, power costs, etc. Then the time to set up and keep the cluster running would be a pain.
Over the summer I had an internship at a local college. My teacher had a 48-computer cluster (each node with dual Pentium IIIs), and it was a pain to keep everything running. After a few years the motherboards stopped working and had to be replaced by hand, etc.
Well, for $4,000 you could build 6 computers based on an AMD X2 or a Core 2 Duo that would easily outperform all of those Macs.
addictedMax
25-08-2006, 14:35
Well, for $4,000 you could build 6 computers based on an AMD X2 or a Core 2 Duo that would easily outperform all of those Macs.
Athlon 64s, even.
Cody Carey
25-08-2006, 14:43
codyc, MHz has very little to do with how much gets done. IPC, or instructions per clock cycle, is what determines how much work a processor can do per MHz. Just look up some benchmarks of a lower-clocked processor like a Dothan (Pentium M) or Conroe (Core 2 Duo) and compare them to a higher-clocked processor like a Prescott (Pentium 4); you will find that a Conroe at 2 GHz can compete with a Presler (Pentium D) at 5+ GHz. A lower-clocked chip with a very high IPC is better than a higher-clocked chip with a low IPC. High MHz means lots of heat and wasted energy; that is why the Pentium 4 turned into a disaster while the Athlon 64 kicked butt.
There are still many other factors such as bandwidth, latency, cache, ARCHITECTURE, etc.
Unless you are comparing the MHz of the exact same chip, you are making a hopeless apples-to-oranges comparison. Even if you are comparing the same chip, performance does not scale linearly with MHz; a 4 GHz chip is not going to be twice as powerful as the same chip clocked at 2 GHz.
This is true, but the clock speed is all we have to go on in this scenario... we know nothing more about the Mac G4s than their clock speed, so we can't say which has more RAM or a better video card. All we can do is assume, and as stated before, we strive for accuracy. Making calculations based on clock speeds when nothing else is known about a computer is in no way a mistake; it's working with what was given to solve the problem that was presented.
As for the heat from Pentium 4s: I don't know if you're much into researching the reasons behind problems, but I am. The extra heat of a Pentium 4 is due to the fact that it requires more core voltage to operate. My 2.8 GHz Pentium 4 requires a core voltage of 1.8 V while my brother's Athlon equivalent only asks for 1.6 V, and yes, mine runs hotter, but it beats his in all of the benchmarks. If more energy goes into the core, more energy has to come out, and a lot of it will be heat, but every bit as much energy will still be going into the practical functions of the processor. High megahertz does not equal lots of heat; high core voltage does. High megahertz equals a faster ability to process data.
I have searched and searched for the benchmarks that show a 5.0 GHz chip losing to a 2.0 GHz one, and I just can't find them... mind pointing out where you found them? I sure hope you didn't take those numbers out of thin air...
And as for the 6 computers that would beat the 36: you may be absolutely correct, but that has nothing to do with the question that was asked, so don't present it as a point in your argument.
As for the render wall costing more in energy and labor than the single computer: the 46 extra cents a day (when you have the machines running) and having to replace hardware every couple of years are worth it when you have a deadline coming up and your single Xeon can't render your project fast enough.
This is starting to feel pretty hostile, so I'm done.
-Cody C
This is true, but the clock speed is all we have to go on in this scenario... we know nothing more about the Mac G4s than their clock speed, so we can't say which has more RAM or a better video card. All we can do is assume, and as stated before, we strive for accuracy. -Cody C
Just wanted to point out that some of the specs are listed in the product description. All of the G4s have 128 MB of RAM and some have dual-head or DVI video cards (I am assuming these are the cards that came with the computers, so they are not too advanced).
And Cody, there is really no reason to leave this discussion. After all, we are just discussing a theoretical question about which system will render faster :rolleyes:
David
Capt.ArD
25-08-2006, 16:00
I think I am done as well, after this last post.
Making calculations based on clock speeds when nothing else is known
Much is known. The specs of a Xeon and of the G4 are only a Google search away.
Xeon:
http://www.2cpu.com/articles/99_1.html
G4:
http://www.dealtime.com/xPF-New_Technology_PowerPC_G4_400_MHz_7MXMG400_200
As for the benchmarks, I did them myself. I was speaking from what I have done and seen; those were my observations, and many of my computer friends have had similar experiences. I apologize if my using this information was offensive to anyone.
Another thing to check on the G4s is whether they support OS X; they may be limited to OS 9.2, which could leave you with a somewhat volatile OS and no way to get compatible software.
Joe Matt
25-08-2006, 17:05
Another thing to check on the G4s is whether they support OS X; they may be limited to OS 9.2, which could leave you with a somewhat volatile OS and no way to get compatible software.
Even if the G4s could theoretically do more work faster, the quad is still the better investment. There is guaranteed future support.
OS X supports the whole PPC family line, so don't worry about that. Now, the G4 towers do support booting into 9.2, unlike the G5s, the aluminium PowerBooks, the iMac G5, the Mac mini, etc.
AV_guy007
25-08-2006, 23:04
Just wanted to point out that some of the specs are listed in the product description. All of the G4s have 128 MB of RAM and some have dual-head or DVI video cards (I am assuming these are the cards that came with the computers, so they are not too advanced).
David
The video cards are probably ATI Rage 128s (16 MB); they came stock in most G4s. The RAM is most likely PC133.
sanddrag
25-08-2006, 23:35
I'm not too familiar with animation and rendering and things like that. What role do the video cards play in a rendering farm? Does a rendering farm need a monitor for each computer?
Cody Carey
25-08-2006, 23:53
No, the video card determines very little; rendering is processor-intensive.