Multi-computer system like the 1100 G5s at home?

Racer D

what?
So, I know totally nothing about this and don't even know what search term to type into Google, so let me try explaining.

How do multi-computer systems work? You know, like the 1100 G5s they connected at that university. And where exactly does that benefit you?

The main question is, how would someone set this up at home? Is it worth it anyway? I've heard of some people connecting multiple PCs to speed up 3D renders.

Could you do that for other things, with 2 Unix systems? (I have another box, running OpenBSD in particular, with some CPU power to spare.) And if it is possible, is it worth it anyway? For apps like Photoshop, Illustrator & everyday apps like iTunes which use CPU all the time.

Thanks for any info you can give me,
and sorry for the totally n00bish approach :(
 
The main question is, how would someone set this up at home?

Doubtful your house is big enough.

Moving to discussions.
 
As I understand it, a multi-CPU array is usually connected by a relatively ordinary LAN, such as Ethernet.

In order to make use of the power of such an array, each machine must be running software (sometimes custom-written for a specific purpose) that allows each machine to perform part of some large task, usually a task which inherently requires a great deal of CPU time. OS X does not include such capability out of the box, and I don't *think* OS X Server does either.

Such an array is useful primarily for tasks requiring a LOT of processing power. If you're familiar with Seti@Home, then you may already have a good idea of how this would work. Seti@Home is a freeware program you can download, which runs on many different platforms as a screen-saver. Its purpose is to analyze data collected by radio telescopes, searching for evidence of signals of artificial origin from space. Each Seti@Home client periodically downloads a block of raw data from a central server through the internet, spends hours or days analyzing it, then sends back the results and downloads a new block of data. Of course, Seti@Home uses the internet, and an arbitrary number of CPUs, so it has a few important differences vs. an in-house array.
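
To make that fetch/crunch/report cycle concrete, here's a toy sketch of the loop such a client runs. The function names are stand-ins I made up; the real client does its downloading and uploading over the internet and spends far longer on the analysis step.

```python
# Toy version of the fetch / crunch / report cycle a Seti@Home-style client runs.
# fetch_work_unit() and report_result() stand in for the real network calls;
# here they just fake it locally so the sketch runs on its own.

import random

def fetch_work_unit():
    # In the real thing this downloads a block of raw telescope data.
    return [random.random() for _ in range(100_000)]

def analyze(samples):
    # Stand-in for the hours of signal analysis the real client does.
    return max(samples)

def report_result(result):
    # In the real thing this uploads the result to the central server.
    print("strongest signal in this block:", result)

for _ in range(3):            # a real client loops forever
    block = fetch_work_unit()
    report_result(analyze(block))
```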

Computer animation houses, like Pixar or ILM, also use multi-CPU arrays for rendering super-high-quality computer animation. Once a CGI scene has been modeled and the animation planned, a CPU array is given the gargantuan task of taking the 3D model/movement data and producing finished, theater-screen-resolution images. A master-controller CPU assigns tasks to the other "slave" CPUs in the array. The work can be divided up in any number of ways, with each CPU rendering a different frame, or even with each CPU rendering (say) one small sub-section of each frame. As each CPU completes its task, the finished bitmaps are sent back to the controller CPU, which integrates the pieces into a finished animation sequence and gives the now-idle CPU a new task.
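
For a feel of how that master/slave split might look, here's a rough sketch. The "slaves" here are just local processes standing in for other machines on the LAN, and render_frame() is a dummy that burns a little CPU instead of actually ray-tracing anything.

```python
# Very rough sketch of how a render controller might farm frames out to workers.
from multiprocessing import Pool

def render_frame(frame_number):
    # Pretend to render: in a real farm this is minutes or hours of work per frame.
    pixels = sum(i * frame_number for i in range(200_000))
    return frame_number, pixels

if __name__ == "__main__":
    frames = range(1, 25)                  # one second of film at 24 fps
    with Pool(processes=4) as workers:     # 4 "slave" CPUs
        finished = dict(workers.imap_unordered(render_frame, frames))
    # The master now has every frame and can splice the sequence together in order.
    print("rendered frames:", sorted(finished))
```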

Multi-CPU arrays can also be configured as "load-balanced" network servers. For instance, a mega-website like Yahoo or Amazon might use an array of CPUs so that the task of serving their thousands of simultaneous users is divided evenly among hundreds of servers, keeping response time good.
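
The load-balancing idea is simpler than it sounds: hand each incoming request to the next machine in the pool. A minimal sketch (the server names are invented, and a real balancer would forward actual HTTP traffic rather than just print):

```python
# Minimal illustration of round-robin load balancing: each incoming request is
# handed to the next server in the pool, so no single machine takes the whole load.
from itertools import cycle

servers = cycle(["web01", "web02", "web03"])   # made-up server names

def handle(request_id):
    chosen = next(servers)
    print(f"request {request_id} -> {chosen}")

for request_id in range(7):
    handle(request_id)
```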

What use might they have at home? Not much, really. If you decide to try your hand at 3D modeling, you can speed up your render times (I know Bryce 3D can make use of multiple CPUs, and I think Maya and Lightwave can too). You should not, however, expect to use a multi-CPU array to up your frame rate in games. The games are not written to make use of such a configuration, and even if they were, I doubt that even gigabit Ethernet would be able to move data around quickly enough to provide good real-time game graphics.
 
Thanks, brianleahy, that did explain some things :) So if I understood this right, it doesn't work system-wide but for each application separately. And you would need to run that application on all the systems you want to use in the array?
 
That's *almost* right.

In order to take advantage of multiple CPUs, the application must have been written with that configuration in mind.

For advanced systems, there is a master application and a separate slave application. One CPU runs the master, the rest run the slave.

Bryce 3D, for instance, includes a 'slave' app you can run on other machines on the LAN. (The full version of Bryce can also be set to do 'slave duty', but of course the license agreement only allows you to install 1 copy per CD-ROM.)
 
it may be the same thing, but it reminds me of a Beowulf cluster. you can check that out by searching Google, but this link will give you a good idea of the setup...

http://www.fysik.dtu.dk/CAMP/cluster-howto.html

i think it is mostly just massive parallelization (if that's a word). kind of like having 1100 processors in one box. or at least that's what it is supposed to emulate, i think.

you can set up a cluster using os x, i would think, because of its unix underpinnings. in any event, you can think of it as being handled like a dual-processor g4/g5. the only difference is the "dual" part.
 
one last tidbit. i have some friends in the comp. sci. department here at school that have a small cluster in their apartment. i think it's like 4 linux boxes. anyway, they are kinda dorky, but i guess it could be cool. maybe.
 
It would not surprise me to hear that such software exists, but even if it succeeds in emulating an n-processor motherboard, bear in mind that it will have the added overhead of network communication, so it can never be *quite* as fast as having the processor chips right there in one box. Also, even a 2-CPU Mac cannot run a single app significantly faster unless it's a "multiprocessor aware" app.

Still, it could be faster if the processor time used is SIGNIFICANTLY greater than the network overhead.
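
A quick back-of-the-envelope calculation shows why: if the crunching dwarfs the time spent shuffling data over the LAN, splitting the job wins. The numbers below are invented purely for illustration.

```python
# Back-of-the-envelope check of when shipping work over the LAN is worth it.
compute_time = 600.0          # seconds of CPU work in the job (made-up number)
network_overhead = 5.0        # seconds spent moving data per machine (made-up number)
machines = 2

one_box = compute_time
two_boxes = compute_time / machines + network_overhead

print(f"one machine:  {one_box:.0f} s")
print(f"two machines: {two_boxes:.0f} s")   # a big win, because 5 s << 600 s
# Flip it around (compute_time = 2 s, overhead = 5 s) and the cluster is slower.
```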
 
brianleahy - you are exactly right. in the case of my dorky friends, they like to do experiments that require massive amounts of cpu time. their main problem is writing code that can run in a parallel environment.
 
FWIW:

Source: www.macosrumors.com

Xgrid: Apple's solution for distributed computing. Around this time last year, Rumors reported that Apple had trademarked "Xgrid" to describe its forthcoming technology to distribute resource loads across multiple computers, often referred to as "grid" computing. Now Thinksecret is reporting that Apple has set up a mailing list for the technology, suggesting that more public activity in this area is shortly to come...
 
The Virginia Tech cluster uses a piece of software called Deja Vu, developed by the cluster's mastermind, to distribute the data loads evenly across all the computers and to account for trouble any one computer might be having. If one of the G5s goes down, its work is spread across the remaining 1099 nodes until it's fixed.
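
Here's a tiny, made-up illustration of that fault-tolerance idea: when a node dies, the work it hadn't finished gets dealt back out to the nodes that are still up. The node names and work units are invented; the real Deja Vu software is obviously far more involved.

```python
# Toy sketch: reassign a dead node's pending work to the surviving nodes.
assignments = {
    "node-a": ["frame-1", "frame-2"],
    "node-b": ["frame-3"],
    "node-c": ["frame-4", "frame-5"],
}

def redistribute(dead_node):
    orphaned = assignments.pop(dead_node)        # work the dead node never finished
    survivors = list(assignments)
    for i, unit in enumerate(orphaned):          # deal it back out round-robin
        assignments[survivors[i % len(survivors)]].append(unit)

redistribute("node-b")
print(assignments)   # node-b's work now belongs to node-a and node-c
```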

Now, if you save up your pennies until you have $5 million, you too can have an 1100-node G5 cluster at home.
 
Well, I wasn't really planning to build an 1100-G5 cluster; as you can see, I have no money for that ;)

More like what brianleahy suggested, 2 comps acting as one multi-processor comp.
Though the 100 Mbps network limit is the problem there, yeah.

btw, cf25, are your friends using their cluster for anything in particular, or just experimenting?
 