All of that sounds REALLY GOOD!!! I wonder how much of it is true...
I really hope that Apple will be including the HyperTransport technology from AMD. I read a review about nVidia's new nForce chipset that uses HyperTransport, and the performance gains are very impressive!!!
If Apple is able to ship the 1.6GHz chip in February or so, my guess is that Intel will be at 2.5GHz or so by then. So I guess that Apple is joining the megahertz race as well (and it almost has to). This just means that we will have egg-frying CPUs like our PC counterparts.
I read a report a few days ago saying that FireWire is getting a jump up to 3.2Gbps, and equipment may be shipping with the new standard by the end of the year. If this is true, then chances are the FireWire in the new Macs will be that speed too...
i think you're getting hung up on the MHz. does it say that AltiVec is on it? not that i know of. my guess is that this chip will get stuffed in iMacs and lowend desktops, and the highend desktops will still have G4s with lower MHz. or, if they are dumping AltiVec and just going for MHz, then MHz matters and AltiVec won't be able to help out the big apps. am i wrong? will they still stick AltiVec on these chips?
Whoa, 16GB of RAM?! This kinda sucks for me; i just upgraded to an iBook, which is still in the G3 processor range. us iBook owners will be dying when this G5 comes out! It sounds extraordinary, but is it true?
This is 100% speculation, but I'm betting the farm on it.
They will definitely put AltiVec on these chips, that's a given. AltiVec is an awesome invention and Apple loves it [who doesn't?]. The lowend computers will get low-MHz G4s and the new Power Macs will have the G5. Just like when the G4 was introduced...history repeats itself.
AltiVec acts like a multiplier for the chip. A 500MHz G4 without AltiVec is equal to a 500MHz G3 [run Altivec Fractal Carbon if you don't believe me]. So higher MHz means a bigger number to multiply by.
With the new Itanium starting out at 800MHz, the Mac future looks bright. We are certainly due for "the next big thing" from Apple. The G4 has been out quite some time now and it's getting old. I think we are looking at maybe one more MHz boost like the one we just saw at MWNY before the G5 comes out.
16GB of RAM is a godsend. We've been stuck at a "theoretical" limit of 2GB for like 2 or 3 years now [since the Sawtooth, right?].
The ultimate in vapourware: Motorola has recently filed patents for a new technology which basically amounts to optical circuitry on the cheap; they've apparently already spec'ed out theoretical 70GHz chips. This kind of jump in performance is the kind of really next-level technology that could spur some amazing things. What people need to do now is lay TONS and TONS of optical cable - just plaster the continent with it. Eventually the hardware will catch up, and I think we'll see some unfounded performance that will need to bring on a whole slew of new standards - hopefully including new hi-hi-fi video and multimedia standards. My question is: how will storage keep up?
A friend gave me a link to this off of CGChannel; Motorola also has some info somewhere on their page.
Just a note that the limit is set at 1.5GB, for the classic MacOS anyway -- the System can't allocate anything more than that... and I don't think you can even GET higher than 1.5GB, unless you have a Sawtooth, which has 4 RAM slots compared to the 3 in current G4s. Also, the classic MacOS can't allocate more than 999MB of memory to one application.
AirPort is IEEE 802.11 wireless networking over 2.4GHz radio.
Bluetooth is radio too, for peripherals (in the 2.4GHz band as well, I believe). Shorter range, and it allows anyone with a good radio dish to sniff your keystrokes. Forget email encryption, the gov can just take in everything you type!
and I quote
I think we'll see some unfounded performance
Indeed, I think this G5 piece is full of unfounded performance. And as for AltiVec being an invention or a multiplier... I hate to be a spoilsport to my own Mac friends, but AltiVec is a good implementation of SIMD, Single Instruction Multiple Data, the same thing the original supercomputers were good at. Calling AltiVec an invention is like cutting the wings off an airplane, driving it on the ground, and calling that an invention. Admittedly a rocket car, as is AltiVec, but even then...
Unless you know the math you'll be doing, which is great for codecs, you can't speed things up using AltiVec. Logic decisions can't be improved by SIMD, so word processing, the AI in games, and a lot of the logic fundamental to an OS can't be improved by SIMD. Graphics can be. And the Quartz-to-pixel math is only made reasonable by SIMD.
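To make the SIMD point concrete, here's a toy Python sketch (the function name and the 4-wide width are my own illustration, not real AltiVec code, though AltiVec's 128-bit registers do hold four 32-bit floats): a predictable element-wise operation maps cleanly onto vector instructions, while branch-heavy logic has no such mapping.

```python
# Toy model of 4-wide SIMD. Each pass through the outer loop stands
# in for ONE vector instruction performing four adds simultaneously.
def simd_add(a, b, width=4):
    out = []
    instructions = 0
    for i in range(0, len(a), width):
        # one "vector op" covers `width` elements at once
        out.extend(x + y for x, y in zip(a[i:i+width], b[i:i+width]))
        instructions += 1
    return out, instructions

result, ops = simd_add([1, 2, 3, 4, 5, 6, 7, 8],
                       [10, 20, 30, 40, 50, 60, 70, 80])
# eight element-wise adds done in two "vector instructions"
```

A word processor's data-dependent branches can't be batched up this way, which is exactly the poster's point.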
And isn't the whole point of the G5 multiple cores? That's why they mentioned n-bar multiprocessing... If by 1.6GHz they mean 2*800MHz, then I believe it. The G4 is not that old, and the story is fun, but - whatever...
And on FireWire: it should be set to up its speed soon, but I haven't heard anything in a while. It's currently at 400Mbps, which is roughly 50MB/s, and the speed jump originally proposed was double that. I'd like to see it come out just to trounce that weak USB 2.0 crap. What the world needed was cheaper USB, not faster, those bastards. Since USB 2.0 is supposed to be forward and backward compatible with USB 1.1 (or whatever), I'd rather save my cash. Truth is, I just can't type more than 1.2 million characters per second, even in Dvorak.
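Quick sanity check on those numbers (plain arithmetic, nothing vendor-specific -- and it ignores protocol overhead, so real-world throughput lands a bit lower):

```python
def mbps_to_megabytes_per_sec(mbps):
    # 8 bits per byte
    return mbps / 8

current_firewire = mbps_to_megabytes_per_sec(400)   # 50.0 MB/s
proposed_double  = mbps_to_megabytes_per_sec(800)   # 100.0 MB/s
```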
And I thought the MMU on G4s was now 36-bit, allowing the chip at least to deal with 16GB of RAM right now. I could be off on that fact. But in truth, bitness is primarily a memory-addressing issue, and a 64-bit chip should be able to address not 16 gigabytes but 16 exabytes of RAM. I suppose good VM is important by then, eh?
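The addressing arithmetic here is easy to check (the 36-bit figure for the G4's address bus is the poster's recollection; the math below just shows what each width buys you):

```python
def addressable_bytes(address_bits):
    # a chip with N address bits can address 2**N bytes
    return 2 ** address_bits

GB = 2 ** 30  # gigabyte
EB = 2 ** 60  # exabyte

bits36_in_GB = addressable_bytes(36) // GB   # 64 GB -- plenty for 16GB of RAM
bits64_in_EB = addressable_bytes(64) // EB   # 16 exabytes, not petabytes
```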
Good rumors require good humor. I'm not betting the farm on any of this. But I do think the G5 will retain Altivec.
Everyone gets all excited and jumpy about this many or that many gigahertz, but the truth is, modern processors don't even NEED a clock. In fact, the clock can slow a processor down by two or more times. Why not axe the clock in the newest G-Series Motorola processor, and gain speed?
I read an article the other day off my Slashdot dockling, all about the evolution of clockless processors and why the processor of the future won't have a clock.
The clock in the processor makes sure everything stays "in time", making sure every part of the processor gets its chance to do its bit. The problem is, we don't need it anymore. In the very first computers, back in the forties, it was necessary to build in a clock to keep everything in sync.
In modern processors, this isn't the case. All the parts of the processor can work well independently. Having a clock just slows things down: the clock has to wait for all processes, including the slowest ones. It makes the processor as slow as its slowest elements. Without the clock, a modern processor like the G5 could possibly be twice as fast. Without having to wait for its slowest parts, its speed would be an average of all its elements, not the speed of its slowest.
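The claim above can be sketched with a toy timing model (the per-stage delays are made up for illustration): a clocked design pays the worst-case stage delay on every cycle, while an idealized clockless one pays each stage's actual delay.

```python
# Hypothetical delays, in nanoseconds, for a 4-stage pipeline.
stage_delays = [1.2, 0.6, 0.9, 0.7]

# Clocked: the cycle time must cover the slowest stage, so every
# stage effectively costs max(stage_delays).
clocked_total = max(stage_delays) * len(stage_delays)

# Clockless (idealized): each stage hands off as soon as it finishes.
clockless_total = sum(stage_delays)

speedup = clocked_total / clockless_total
```

Whether you actually get the 2x figure depends entirely on how unbalanced the stages are; with these made-up numbers the speedup is only about 1.4x.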
Of course, Apple is already having a hard time marketing G4, what with the "megahertz myth" campaign and all. It might prove exceedingly difficult to market a "Zero Megahertz" chip.
Perhaps this is just my day to be negative, but I've done my homework in chip design, and quite frankly I think we have already separated the fast parts and slow parts of a chip and run them each at appropriate speeds, so we are very likely already at roughly 75% of the performance of a clockless chip of similar design. 75%, or hell, even a doubling of speed is nothing; we've been doing that year over year for a while. And the added complexity of having a "not yet" bit for everything... yikes!
The real difference in clockless chips should be power consumption; I don't think they'd really improve speed much over current chips. The PPC already addresses a lot of the power issue with fine-grained, instant-on sleeping sections of the chip.
I agree that clockless chips would be able to run a Windows laptop a LOT longer, but I think that's another nice myth. And really, what chip currently in development isn't at least twice as fast as what's out there? If you look at design cycles, you have to start thinking about a 10-fold performance jump if you're gonna start from scratch. That's what makes the industry hard.
Yeah, I need some time with my woman or something. I'm just edgy. Forgive me.
As far as the government listening in,
They can do that already by looking at the electrical impulses sent from the cable to the motherboard lol, no Bluetooth needed hehehe. besides, the government is already checking up on us
It's alright to leave the clock in and wait for a couple of years for speed, but why not axe the clock now, and immediately gain it?
Also... A main advantage of clockless processors is this: lowered temperature and power consumption. Any PowerBook or iBook user knows about heat, and the clock is the main source of it.
Everyone wants devices to have a longer battery life. Recently, Philips began making pagers based on clockless technology, and they gained twice the battery life. Why not bring this technology to real computers?