Intel tech question

GroundZeroX

Searching for logic
I have done some reading about the new Hyper-Threading technology by Intel, and I was wondering: since it runs two instruction threads down the same pipeline, does that increase the chance of a branch prediction error? And if it does, does that mean both threads going down the pipeline would have to be cleared? If that is true, then it sounds like this technology is more trouble than it's worth.
 
I honestly don't know much about the physics and specifics of computer chips, or about what you just said, but I'm sure that if you thought of that problem, the highly paid engineers at Intel would have thought of it too and devised a solution.
 
Not that easy. They are sending two threads through a pipeline at once. The branch prediction has a certain percentage of errors, and when those errors happen, the entire pipeline has to be flushed. If there are two threads, it's twice as likely to come across an error. There is already a HUGE penalty when one thread has to be emptied and started all over again; it would be even harder to recover two, since they are sharing a pipeline.
 
I can't argue with the details or your logic... I know nothing about it. I can argue, however, that we don't know everything about this chip, and I'm willing to bet that once we do, we'll see the Intel engineers have devised an ingenious fix for this problem. Who knows how, or by what means, but if this is such a big performance degrader, I'm willing to bet the problem has somehow been fixed or worked around...
 
Eh, time will tell; that's why I asked the question in here. A lot of people talk like they know what they're talking about. Intel has always had problems with pipelines needing to be flushed; this just looks to add to their problems.
 
From what I understand of hyperthreading, this is exactly the type of scenario that hyperthreading is supposed to take advantage of. When the CPU detects a branch misprediction on one thread, it can keep the pipeline full by scheduling instructions from the other thread. The CPU discards the results from the invalidated path; I don't believe it's like the old days when the entire pipeline had to be "flushed", since the CPU is designed to be able to negate the completion of "non-executed" code.
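To illustrate what I mean (a toy model of my own, not Intel's actual scheduling policy — the thread-selection rule and cycle counts here are made up):

```python
# Toy sketch of an SMT front end choosing which thread feeds the
# pipeline each cycle. While thread A is recovering from a
# misprediction, thread B's instructions keep the issue slots busy,
# so fewer cycles are wasted as bubbles.

def issue_schedule(ready, cycles):
    """ready maps thread name -> set of cycles in which that thread
    has instructions ready to issue. Returns which thread (or
    "bubble") occupies the issue slot each cycle."""
    schedule = []
    for c in range(cycles):
        runnable = sorted(t for t, ok in ready.items() if c in ok)
        schedule.append(runnable[0] if runnable else "bubble")
    return schedule

# Thread A stalls on cycles 2-4 (say, refetching after a
# misprediction); thread B is always ready, so no cycle goes empty.
ready = {"A": {0, 1, 5}, "B": {0, 1, 2, 3, 4, 5}}
print(issue_schedule(ready, 6))  # ['A', 'A', 'B', 'B', 'B', 'A']
```

Without the second thread, cycles 2-4 would just be wasted bubbles.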

On an interesting side note, the Java oriented processor by Sun (the name escapes me now, not that it's likely to go anywhere, but that's another story) speculatively executes threads, not just branches!
 
If that's the case, though, about the pipeline not needing to flush out when there is a misprediction, then why haven't they used that before?
 
Originally posted by mightyjlr
I can't argue with the details or your logic... I know nothing about it. I can argue, however, that we don't know everything about this chip, and I'm willing to bet that once we do, we'll see the Intel engineers have devised an ingenious fix for this problem.

Wow, and some people say Apple has a blind and loyal following! :D :p ;) (Don't take it personally mightyjlr! :))
 
From what I recall from my computer architecture class last semester, the penalty on the pipeline can be rather large if a branch is mispredicted, but not terribly so, especially if you are using Tomasulo's algorithm to deal with out-of-order execution. You only wind up losing the number of cycles that the misprediction took to discover; you don't have to flush the pipeline before you can start executing again, because the reorder buffers take care of whether or not the results of a particular instruction are valid...
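Here's a toy sketch of that idea in Python (my own vastly simplified model, not any real chip's design — instruction names are made up): with a reorder buffer, only the speculative work *after* the mispredicted branch gets squashed, while older instructions still retire.

```python
# Simplified reorder buffer (ROB): entries are held in program order.
# On a mispredicted branch, entries after the branch are squashed;
# everything up to and including the branch is kept and retires
# normally, so the cost is only the wrong-path work, not a full restart.

rob = ["i1", "i2", "branch", "i3_wrong_path", "i4_wrong_path"]
branch_pos = rob.index("branch")

kept = rob[:branch_pos + 1]       # older work still retires in order
squashed = rob[branch_pos + 1:]   # speculative wrong-path work discarded

print(kept)      # ['i1', 'i2', 'branch']
print(squashed)  # ['i3_wrong_path', 'i4_wrong_path']
```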

Also, the larger your branch prediction buffer, the better the chances are that your branch will be predicted correctly, especially if you can use a correlating predictor, which takes into account not only the branch instruction itself but also the context of the branch. Even something as small as a 4-bit correlating predictor is tons better than a straight 2-bit predictor.
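If it helps, here's a little Python sketch of the straight 2-bit scheme (my own toy, not Intel's actual predictor; table size and indexing are arbitrary):

```python
# 2-bit saturating-counter branch predictor, indexed by PC.
# Counter states 0-1 predict "not taken", 2-3 predict "taken";
# each outcome nudges the counter one step, saturating at 0 and 3,
# so a single odd outcome can't flip a strongly-held prediction.

class TwoBitPredictor:
    def __init__(self, size=16):
        self.table = [1] * size  # start at "weakly not taken"

    def predict(self, pc):
        return self.table[pc % len(self.table)] >= 2

    def update(self, pc, taken):
        i = pc % len(self.table)
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

# A loop branch taken 9 times, then not taken once at loop exit:
pred = TwoBitPredictor()
misses = 0
for taken in [True] * 9 + [False]:
    if pred.predict(0) != taken:
        misses += 1
    pred.update(0, taken)

print(misses)  # 2 -- only the warm-up miss and the loop exit
```

A correlating predictor goes further by indexing the table with recent branch history bits as well as the PC, which is what lets it catch patterns the per-branch counter can't.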

Just my $0.02....


--quangdog
 