Originally posted by binaryDigit
Sorta. Like I said in my previous post, many applications today don't need 64bit calculations; 64bit integers are overkill for many/most apps. Think of how many numbers you deal with in day to day life (even computing life), and now think how many times you deal with values above 4 billion. Unless you're a large company, it's very rare. So for your typical user, 64bit integers don't really buy much.
The example you give is more of an argument for multiple execution stages. A better analogy might be this: imagine a door that is somewhat narrow. Your average person can get through it easily enough, but "larger" people have to turn sideways, taking twice as long to get through. Now you upgrade to the 64bit door, which is twice as wide. Your "average" person doesn't really benefit, but now your "larger" person can go straight through.

But as you'll notice, the benefit you get is totally dependent on the number of "larger" people you have (if you have none, you see no benefit; if they're ALL larger, you'll see roughly twice the performance). You'll also notice that as the door gets larger, the number of people likely to benefit decreases (assuming a bell-shaped distribution of people sizes). So going from the 8bit door to 16bit to the 32bit door paid big dividends. But going to the 64bit door isn't as likely to, because generally you won't have that many large people in the general population (there will always be special cases of course).