Why can't multiple CPUs solve the performance gap?

ccuilla

Software Developer
I am no hardware engineer (and frankly I suspect that some of those who speak with such "authority" on these boards about such issues are not either).

Anyway, my question is this: why can't multiple CPUs be a solution to the performance gap?

In particular, as my use of a multi-tasking OS grows, doesn't a multi-CPU architecture become a greater benefit? As I am able to do more things at once with my computer, it seems that the multi-CPU architecture makes even more sense.

In addition to this, the off-loading of certain graphics operations to the GPU (a la Quartz Extreme) also appears to be a thoughtful architecture decision.

Why MUST it be all about the MHz of the CPU?

This seems to be the "myth" that I think some people talk about.

Someone, please explain.
 
1. COST
2. This design approach already has a name: Super Computer
3. Your electric bill
4. More Fan noise
5. Band-Aid type of solution

Your question is kind of a non-issue as far as Apple is concerned. The last time I checked, their business plan didn't include building supercomputers (although clustering Xserves is not something Apple will frown upon). If you want more power, you already have the option of buying a couple of Xserves, configuring them to display on a single desktop, and being the envy of all your friends.

Your desire to have an Apple desktop PC perform better than the competition is probably the root of your initial question. I agree that the current crop of G4 processors is getting its butt kicked by Intel, but remember that when the G4 first came out the opposite was true. Throwing multiple processors at the problem IS a solution, but the best solution would be to hope that IBM can provide Apple with killer processors as soon as possible <- ten years ago that sentence would have looked really strange!!
 
Multiple CPUs can make things much faster if your problem can easily be broken into pieces that can be solved in parallel. If it can't, then having multiple CPUs won't help.

This extends to the whole system too. If your applications are heavily multi-threaded and SMP-aware/safe, then you will see significant performance gains. On the other hand, if your apps aren't, they really only run on one of the processors at a time anyway, so you don't see any speedup, outside of the fact that with multiple CPUs you can run more programs concurrently without seeing as much of a slowdown.
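A toy illustration of the difference (my own example, with made-up operations): a sum can be split into independent chunks that could each go to their own CPU, while an iterated computation is stuck on one CPU because every step needs the previous result.

```python
def sum_in_chunks(data, n_chunks=2):
    """Parallelizable: the partial sums are independent of each other,
    so in principle each chunk could run on its own CPU,
    with only a cheap combine step at the end."""
    step = (len(data) + n_chunks - 1) // n_chunks
    partials = [sum(data[i:i + step]) for i in range(0, len(data), step)]
    return sum(partials)

def iterate(x, steps):
    """Not parallelizable: step k needs the result of step k-1,
    so a second CPU would have nothing to do but wait."""
    for _ in range(steps):
        x = (x * x + 1) % 9973  # arbitrary dependent update, for illustration
    return x
```

However many chunks you pick, `sum_in_chunks` gives the same answer, which is exactly what makes it safe to farm out; `iterate` has no such decomposition.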
 
In SMP (Symmetric Multiprocessing) the law of diminishing returns takes effect: the second processor gives you an increase of 80%, the third 60%, and so on. The overhead increases with the number of CPUs.
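The 80%/60% figures above can be turned into a crude model (the overhead fraction is invented for illustration, not measured): assume each additional CPU contributes its full share minus a coordination cost that grows with the CPU count.

```python
def effective_speedup(n_cpus, overhead=0.2):
    """Crude diminishing-returns model: CPU number i contributes
    (1 - overhead * i) of a full CPU's worth of work.
    overhead=0.2 is an illustrative guess, not a measured figure."""
    return sum(max(1.0 - overhead * i, 0.0) for i in range(n_cpus))

# 1 CPU -> 1.0x; 2 CPUs -> 1.8x (the second adds 80%);
# 3 CPUs -> 2.4x (the third adds only 60%)
```

With a 20% per-CPU overhead the model reproduces the 80%/60% increments, and it also shows why piling on CPUs eventually stops paying: by the sixth CPU the marginal gain has gone to zero.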
 
How much overhead you see depends strongly on the type of task.

And from the point of view of energy, it may be better to work in parallel. Because the maximum individual speed is lower, you can work at a lower voltage for a given process technology, so each CPU cycle needs less energy. This is why fast processors work with large data sizes (32-64 bits), why your memory is highly parallelised, and why graphics processors have even larger data buses (above 128 bits sometimes).

And this is even why some watches now use 8-bit CPUs instead of 4-bit CPUs: clocking the 4-bit CPUs faster would use more energy than the slow 8-bit parts do for the same computing power.
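A rough sketch of that argument with invented numbers: dynamic CMOS power goes roughly as C·V²·f, and lowering the clock lets you lower the voltage, so two slow cores can deliver the same aggregate cycles as one fast core for less power.

```python
def dynamic_power(cap, volts, freq):
    """Classic CMOS approximation: P ~ C * V^2 * f (units are arbitrary here)."""
    return cap * volts**2 * freq

# Hypothetical figures, chosen only to illustrate the V^2 effect:
# one core at 1.0 GHz needing 1.3 V, vs two cores at 0.5 GHz
# that get away with 1.0 V each (same total cycles per second).
fast = dynamic_power(1.0, 1.3, 1.0e9)
slow_pair = 2 * dynamic_power(1.0, 1.0, 0.5e9)
# slow_pair comes out ~41% lower than fast for the same aggregate clock rate
```

The squared voltage term is what makes the parallel option win; at equal voltage the two designs would burn identical power.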
 
Originally posted by zootbobbalu
The last time I checked, their business plan didn't include building super computers.

I thought that the definition of a 'supercomputer' was the ability to execute one gigaflop, i.e. one billion floating-point operations per second. Apple claims that the current dual 1.25GHz model is capable of 18 gigaflops. (The original 500MHz model could reach almost 4 gigaflops.)
Doesn't this make the G4 a supercomputer, or are both Apple and I missing something?
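For what it's worth, Apple's number is roughly the theoretical AltiVec peak. As I understand the architecture (treat this as an assumption, not gospel), the G4's vector unit can retire a 4-wide single-precision fused multiply-add each cycle, which counts as 8 flops/cycle:

```python
cpus = 2
clock_hz = 1.25e9       # dual 1.25 GHz G4
flops_per_cycle = 8     # 4-wide AltiVec fused multiply-add = 8 flops (assumed)

peak_gflops = cpus * clock_hz * flops_per_cycle / 1e9
# peak_gflops works out to 20.0 -- Apple's quoted 18 GFLOPS
# sits just under this theoretical ceiling
```

Of course a peak like this assumes every cycle issues a vector multiply-add with data already in registers; real code rarely gets close.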
 
dlloyd,

That's true to an extent. But you have to remember that was the old definition of a supercomputer (from the '80s and '90s); the new one would more likely be measured in teraflops (a trillion floating-point operations per second) rather than gigaflops...

So it's settled, Apple has to release a computer that'll do at least 1 teraflop. ;) WTH, why not just go all out and aim for petaflops (a thousand trillion floating-point operations per second).

I wonder how Photoshop would run on that thing?!?
 
Originally posted by dlloyd
I thought that the definition of a 'supercomputer' was the ability to execute one gigaflop, i.e. one billion floating-point operations per second. Apple claims that the current dual 1.25GHz model is capable of 18 gigaflops. (The original 500MHz model could reach almost 4 gigaflops.)
Doesn't this make the G4 a supercomputer, or are both Apple and I missing something?

Nice thought, but poor logic. By that standard we'll be calling a toaster a supercomputer in five years.

Oh wait I'm just thinking about my G4 Cube :)
 
Ah ha, so typical 'uninformative marketing'. I wonder how many gigaflops the new 3.06GHz Intels can do? Apple doesn't tell us that, does it?
 
Yes, it's agonizingly difficult to find decent cross platform performance numbers that actually mean anything. Everyone has an agenda.

Anyway, some stuff is better on multiple processors, some stuff isn't. Notable, AI logic and virtual machine logic like java doesn't benefit from parallel logic. It's pretty linear in its current form. But from my experience on a dual 450, the difference is in the feel. Because there's almost always a processor with nothing to do but listen to the user, a dual machine feels snappier. It scales load well. It doesn't feel bogged down. It won't necessarily feel like a speed demon either.

With more than so many million calculations per second available to you, I find that the productivity limiting factor on modern computers is almost never the CPU.
 
Originally posted by mdnky
So it's settled, Apple has to release a computer that'll do at least 1 teraflop. ;) WTH, why not just go all out and aim for petraflops (a thousand trillion floating point operations per second).

I wonder how Photoshop would run on that thing?!?

Ha!! My computer runs yottaflops, baby :cool:
 
Part of the problem is the memory bandwidth. Information can be moved around internally at a maximum speed that is dependent on the bus design and clock rate of the bus.
 
Two primary reasons:

1. overhead
2. software

Overhead - There is some overhead associated with handling multiple cpus. One of the biggest issues is cache coherency. Each cpu has it's own L1 cache (and maybe L2), when a cpu updates a memory address, it is possible for that memory location to be cached in another cpu. So any multicpu machine has to be able to deal with this (usually through cache snooping). The more cpu's you have, the more of an issue this becomes. This is one of the reasons that adding cpu's never gives you an exact scaling in performance. Adding a cpu to a single cpu system never gives you exactly 2x performance increase. And the more cpus you add, the more you lose in overhead. This is one of the reasons that the x86 world is practically stuck at 4, any more would warrant a significant increase in costs both in terms of the cpu's themselves and system design.

Software - Having multiple cpu's don't do jack if the software you're running can't utilize it. This is why with MacOS <X, having multiple cpu's didn't buy you much, only in specific cases (typically photoslop). With OSX and multiple processes/threads, this is improved. However, not all apps are multithreaded even with *nix at it's core. Multithreading is still more prevelant in the Windoze world as NT has supported it for a while and it's much more standardized (I know there's pthreads, but even that's not a complete standard as OSX has mach threads as well). Not to mention the simple fact that some things either don't lend themselves to parallel processing or would be very difficult to code.

So as the OSX software market starts to mature, we'll be seeing more and more software that can benefit from multiple cpus. We already see some of this since all the g4 desktops are dual cpu.

Keep in mind some other issues. The cpu is usually the most expensive part of the motherboard. So not only is it more expensive to simply plop an extra cpu in, but it makes the motherboard design significantly more complex. And once again, the more cpu's, the more complex. Also cpu's generate a ton of heat, and adding more would cause a serious rethink about case design in order to get the proper amount of cooling.

One thing to keep an eye on are cpu's that incorporate SMT (simultaneous multithreading). This usually means that it will have an extra set of registers and misc logic so it looks to the OS like it's actually two chips, even though physically it's only one. newer P4 and Xeon chips already have this (Alphas too, but who cares ;) and the newly announced Power5 also will have this. Which brings us to Power4 and the mulitple cpu cores, but this post has gone on long enough.
 
Back
Top