What does Hypertransport get me?

mindbend

I asked the same question about DDR RAM and got garbage answers. Can't wait to see the answers for this.

I don't want a technical explanation of HT, I can get that info elsewhere. I want to know specifically what HT will get me in performance for the apps and hardware I use or will use.

For example, how does HT affect the following:

Game frame rate
Photoshop rendering
Lightwave Rendering
Video rendering
OS X GUI
I/O to and from hard drives
Ethernet transfer speeds
Common application performance
Launch times for apps
Multi-tasking
RAM usage

And to what degree does HT affect any of the above? 10%, 50%, 100%?

I know it's mostly speculation, since HT doesn't really exist yet AFAWK, but I don't understand what its real world benefits are yet.
 
boy, someone's hard to please...
;)

i was kinda wondering the same thing actually. but since I won't have a dual chip desktop anytime soon, I'm mainly interested in "Piles" for Panther.

Bring on WWDC for crying out loud!
(Funny thing is, on the 27th I go to hawaii, but I'm almost more excited about WWDC a couple days earlier. Man, I'm a nerd.)
:D
 
Game frame rate
Possibly faster; it depends mostly on your video card.
Photoshop rendering
Lightwave Rendering
Video rendering
Faster, because the CPU can move data back and forth to memory faster and isn't waiting on data all the time.
OS X GUI
Faster, although it will depend on the video card as well.
I/O to and from hard drives
Ethernet transfer speeds
Won't change, because the hard drive will still be slower than the rest of the system. The system will be able to deliver data to the IDE controller faster, but the drive itself will still run at the same speed it would without HT.

Ethernet speed is already mostly dependent on the speed of your network, so that won't change. Where HT does apply is GigE, which, if I remember correctly, can starve the current bus right now.
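To put rough numbers on that "GigE can starve the bus" claim, here's a back-of-the-envelope sketch. The figures are standard textbook numbers (classic 32-bit/33 MHz PCI as the stand-in for an older shared bus), not measurements from any of these machines:

```python
# Gigabit ethernet moves at most 1 Gbit/s = 125 MB/s of payload,
# while classic 32-bit/33 MHz PCI tops out around 132 MB/s shared
# across every device on the bus, so one saturated GigE link leaves
# almost nothing for anyone else.

gige_mb_s = 1_000_000_000 / 8 / 1_000_000   # 1 Gbit/s -> MB/s
pci_mb_s = 4 * 33_000_000 / 1_000_000       # 4 bytes wide * 33 MHz

print(f"GigE:       {gige_mb_s:.0f} MB/s")
print(f"Legacy PCI: {pci_mb_s:.0f} MB/s")
print(f"Headroom left for everything else: {pci_mb_s - gige_mb_s:.0f} MB/s")
```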
Common application performance
Launch times for apps
Multi-tasking
RAM usage
Not sure what you mean by RAM usage, but application performance and launch times should improve, since the CPU isn't waiting for data as often as it is now.

Multi-tasking isn't really a valid category here, since what it really means is context switching, which happens during the execution of any normal program. Either way it should be faster.
And to what degree does HT affect any of the above? 10%, 50%, 100%?
No one, outside of people with access to the development systems, really knows. It's complex and not something that can simply be calculated without some real testing on the box.
 
Things that will improve lots: (30%-50%)
Ethernet transfer speeds
Photoshop rendering
Game frame rate

Ethernet is all about computing checksums and moving data to and from memory, provided you're talking about normal-size frames on gigabit ethernet. If you only mean 100 Mbit ethernet, we flood that already.
Photoshop is pretty memory intensive, and code that isn't written to (or can't) take advantage of memory locality will benefit from fatter memory access. I can see the 100-pixel Gaussian blur picking up quite a bit.
Games, in general, are large and heavy and will likely improve given fatter hardware.
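To see why a huge-radius blur is bandwidth-bound, here's a rough traffic estimate. The image size and byte counts are my own illustrative assumptions, not anything from the thread:

```python
# A separable Gaussian blur reads and writes the whole image once per
# pass; with a 100-pixel radius the working set blows far past any
# cache, so the run is bounded by memory bandwidth.

width, height = 4000, 3000     # hypothetical 12-megapixel image
bytes_per_pixel = 4            # RGBA
passes = 2                     # separable blur: horizontal + vertical

image_mb = width * height * bytes_per_pixel / 1_000_000
traffic_mb = image_mb * 2 * passes   # read + write, per pass

print(f"Image size:     {image_mb:.0f} MB")
print(f"Memory traffic: {traffic_mb:.0f} MB minimum")
```

Every extra bit of memory bandwidth cuts directly into that traffic bill, which is why this kind of filter benefits more than cache-friendly code.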

Things that will improve a little:
Common application performance
Lightwave Rendering
Video rendering
OS X GUI
Multi-tasking

In general, memory is a lightweight bottleneck, and the majority of performance hits are on the OS, the app, or the processor itself. Adding memory bandwidth or peripheral bandwidth will help, but overall not a lot. A lot of the performance of these applications is helped by a fat and fast cache.

Things this won't improve:
Launch times for apps
I/O to and from hard drives
RAM usage

Dear God is the I/O to hard drives awful in OS X. Seriously. If you have 2 hard drives to play this game with, try it: Have a huge file, like a CD image, on each drive. Now duplicate that file in place on one drive and time how long it takes. Now duplicate that image in place on the other drive and time that. Now do both at the same time. Since the bottleneck in both cases should be the hardware, and since the processor will show itself to be mostly idle, it would make sense that the processes would not interfere with each other ... wrong.

I'd be curious what people's times are. I haven't timed it exactly myself, but I've hit a large number of cases where parallel tasks should have been faster and weren't. Occasionally they're even slower than doing one, waiting for it to finish, then doing the other.
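The experiment above can be sketched in Python. The files here are small scratch temp files so the sketch runs anywhere; to reproduce the real test, point the jobs at big files (a CD image, say) on two different physical drives:

```python
import os
import shutil
import tempfile
import threading
import time

def duplicate(src, dst):
    """Duplicate a file in place, like Finder's Duplicate."""
    shutil.copyfile(src, dst)

def timed(jobs):
    """Run each (src, dst) copy in its own thread; return elapsed seconds."""
    threads = [threading.Thread(target=duplicate, args=j) for j in jobs]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

# Stand-ins for a big image on each drive, e.g. /Volumes/DriveA/image.iso
work = tempfile.mkdtemp()
jobs = []
for name in ("a", "b"):
    src = os.path.join(work, f"{name}.img")
    with open(src, "wb") as f:
        f.write(os.urandom(1_000_000))  # tiny stand-in for a CD image
    jobs.append((src, os.path.join(work, f"{name}-copy.img")))

print("copy A alone:", timed([jobs[0]]))
print("copy B alone:", timed([jobs[1]]))
print("both at once:", timed(jobs))
```

If the two drives really are independent and the CPU is mostly idle, "both at once" should take about as long as the slower single copy; the complaint is that on OS X it often doesn't.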

The drive I/O handler needs an overhaul, and so does the AFP handler. Hopefully 10.3 will address these as it isn't a hardware issue, it's an OS issue. Maybe this wasn't the best place to rant, but that's my mood.
 
Originally posted by theed
...
Dear God is the I/O to hard drives awful in OS X. Seriously. If you have 2 hard drives to play this game with, try it: Have a huge file, like a CD image, on each drive. Now duplicate that file in place on one drive and time how long it takes. Now duplicate that image in place on the other drive and time that. Now do both at the same time. Since the bottleneck in both cases should be the hardware, and since the processor will show itself to be mostly idle, it would make sense that the processes would not interfere with each other ... wrong.
...

Just out of curiosity, in a 2-HD config on a PowerMac (which is what I assume you're referring to here), do both HDs have their own IDE channel, or are they sharing one? If they're sharing one, then there's a huge part of your bottleneck right there. IDE is a dumb bus: masters and slaves don't share very well and do their best to kill each other's performance when both are active. Also, in your test scenario, have you tried it with, say, two FireWire drives, or an internal drive and a FireWire drive (or SCSI drives, for that matter)?
 
Your mission, should you choose to take it, is simple:

- Define Hypertransport! -

Don't just assume everyone knows what it is. Say, this is such and such, what does it do? I haven't heard of this technology before; what is it?

[side note]Nice post #, Theed (666 as of this posting).[/side note]
 
Mission accepted: http://www.hypertransport.org/faqs.html#b
7. What is HyperTransport technology?
HyperTransport Technology is a universal chip-to-chip interconnect that replaces and improves upon existing multilevel buses used in systems such as personal computers, servers and embedded systems, while maintaining software compatibility with PCI I/O technologies. HyperTransport technology delivers a maximum 12.8 GB/second aggregate bandwidth using easy-to-manufacture dual, low-latency, unidirectional point-to-point links. Enhanced 1.2V low-power LVDS signaling and dual-data-rate transfers deliver increased data throughput while minimizing signal crosstalk and EMI. HyperTransport interconnect technology employs a packet-based data protocol to eliminate many sideband signals (control and command signals) and supports asymmetric, variable-width data paths.



8. What are the key characteristics of HyperTransport technology?
Key characteristics include: royalty-free IP (to member companies), high bandwidth (up to 12.8 Gigabytes/second), PCI software compatibility, 1.2V low-power LVDS signaling, dual-data rate data transfers, low-cost to manufacture, and wide-spread industry support.
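For anyone wondering where that 12.8 GB/s headline number comes from, here's the arithmetic, reconstructed from the spec's standard top-end parameters (check the HyperTransport spec itself for the exact figures):

```python
# HT's fastest defined configuration at the time: a 32-bit link
# clocked at 800 MHz, transferring on both clock edges (DDR), with
# a separate unidirectional link in each direction.

link_width_bytes = 4       # 32-bit link
clock_hz = 800_000_000     # 800 MHz link clock
ddr = 2                    # data on both clock edges
directions = 2             # dual unidirectional links, one each way

per_direction = link_width_bytes * clock_hz * ddr / 1e9
aggregate = per_direction * directions
print(f"{per_direction:.1f} GB/s each way, {aggregate:.1f} GB/s aggregate")
```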
 
Heh, time to get that post count off the satanic reference. My lingering question, after that wonderful definition of HyperTransport, is whether its power consumption will be notebook-worthy. A fat bus with a high-speed clock smells like a heater to me.

As for my I/O-bashing scenarios: IDE on my machines is either 100 or 133, so copying two streams at 20 MB per second ((20 read + 20 write) * 2) still shouldn't flood the bus. But I've seen no improvement when the drives involved are IDE/FireWire, IDE/remote, IDE/IDE, etc. If I'm copying from a FireWire drive to a local IDE drive and from another local IDE drive to a network drive, do you think those should slow one another down?
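Spelling out that "(20 read + 20 write) * 2" arithmetic, using the observed per-copy rate from above against the ATA/100 ceiling:

```python
# Worst case: both in-place copies share one ATA/100 channel, and
# every copy both reads and writes its data across that channel.

copy_rate_mb = 20                    # observed throughput per copy, MB/s
streams = 2                          # two copies running at once
total = copy_rate_mb * 2 * streams   # (read + write) per copy, both copies
ata100_ceiling = 100                 # ATA/100 burst rate, MB/s

print(f"Worst-case traffic: {total} MB/s vs ATA/100 ceiling {ata100_ceiling} MB/s")
```

Even this pessimistic case comes in under the bus ceiling, which supports the argument that the slowdown is in the OS's I/O handling rather than the hardware.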

Either way, even if it is IDE killing me, HyperTransport doesn't fix this. Serial ATA might help, but I'm fairly certain the OS needs to be rethought on its multithreaded I/O. BSD has some sweet throughput technologies, as well as a much cleaner SMP architecture, that I'd like to see find their way into Mac OS X. That, and software RAID 5, but I'm going way off topic here.

Another potential advantage of HyperTransport is reduced computer cost, since it's a fairly standardized bus. Less proprietary stuff, cheaper manufacturing. Yay.
 
And for the rest of us: Macs had 'slow' busses in the recent past, while PCs had 'fast' ones. HyperTransport will skip 'fast' ones and go to 'superfast' ones. :p

(Not that PCs won't get HT, too, with AMD being the leader on the technology...)
 
Originally posted by fryke
And for the rest of us: Macs had 'slow' busses in the recent past, while PCs had 'fast' ones. HyperTransport will skip 'fast' ones and go to 'superfast' ones. :p

(Not that PCs won't get HT, too, with AMD being the leader on the technology...)

PCs already have HyperTransport via the nForce chipset from nVidia (and maybe SiS too, can't remember now).

Also, HT could help I/O bandwidth considerably, especially compared to the current Apple mobos, and even more so in a multi-tasking environment (think files transferred over ethernet to your HD while some other app is hitting another drive). Where this will really pay dividends is on machines like the Xserve, where you have multiple I/O channels chugging away. This all assumes Apple uses HT in the "usual" way (northbridge-to-southbridge interconnect, or memory controller to I/O controller, to be more precise).
 
Originally posted by theed
Dear God is the I/O to hard drives awful in OS X. Seriously. If you have 2 hard drives to play this game with, try it: Have a huge file, like a CD image, on each drive. Now duplicate that file in place on one drive and time how long it takes. Now duplicate that image in place on the other drive and time that. Now do both at the same time. Since the bottleneck in both cases should be the hardware, and since the processor will show itself to be mostly idle, it would make sense that the processes would not interfere with each other ... wrong.

Well, it really depends; I think it has more to do with the configuration of the computer. Last time I checked, Apple offered UltraSCSI 160 cards and ripping-fast 10,000 RPM hard drives to go with them. The question is, do you WANT to pay for that speed? I purchased my Blue & White G3 with a stock 5400 RPM 12 GB UltraATA (IDE) hard drive and an UltraSCSI card. I later added an internal 7200 RPM 18 GB UltraSCSI HD, and the SCSI drive is much, much faster than the stock UltraATA drive, but it was more expensive. Generally, you get what you pay for.

IIRC, the main reasons for bottlenecks in the current PowerMacs are that the dual G4 processors are starved for data, the internal bus is too slow, and the hard drives have to be matched to the system bus speed to be cost-effective. In addition, the SDRAM isn't utilized properly (not going to go into that; read Arstechnica.com). There's no use in having a ripping-fast hard drive when the system bus speed is half the drive's speed; that extra potential drive speed goes unused. A friend of mine has a 12X SCSI CD burner with his beige G3; he can only burn at 8X due to the limitations of the bus speed and hard drive speed.
 