PPC was designed from the start so that you can run 32 bit programs on 64 bit chips, so I think it should work out ok.
Viro said:A fundamental change is the size of pointers. The word size is no longer 4 bytes (32 bits) but 8 bytes (64 bits). This alone has the potential to break tonnes of C apps that are out there.
To my knowledge, you can't mix 32 bit and 64 bit libraries/kernel extensions. I'm not familiar enough with PPC64 to be authoritative on this, but with the AMD64 architecture, you could run existing 32 bit applications on a 64 bit OS, but you needed the supporting 32 bit libraries to be present. This left Linux systems in a real mess, having to maintain two separate sets of libraries.
Krevinek said:Nope... as I said, you do 32-bit arithmetic on a 64-bit PPC, and all it does is do the math in 64-bit, and shave the top 32-bits off. r0 (general register 0) used to be 32 bits, now is 64 bits, for all operations, but the instruction tells it how many bits are supposed to be in the result. So a 32-bit pointer in PPC scales to 64-bit for the CPU. The OS needs to adapt slightly, but each app is isolated from each other for the most part, so the OS just needs to expand each app's memory partition from 2GB to something larger on a 64-bit processor.
#include <stdio.h>
#include <stdlib.h>

//define a structure for a simple inventory system
typedef struct _PRODUCT {
int _id;
char name[80];
int _price;
} PRODUCT;
...
...
...
//somewhere else in your code you allocate space for an array of 100 _PRODUCTS
PRODUCT *myArray = (PRODUCT*)malloc(100 * sizeof(PRODUCT));
//Do your thing with the array
...
...
...
//Store your array in a binary file
FILE *fp = fopen("data.dat","wb");
for(int i = 0; i < 100; i++)
fwrite(&myArray[i], sizeof(PRODUCT), 1, fp);
fclose(fp);
//program continues
...
...
Except in C, since there aren't any generics, it is common to cast things to void (argh!!!) and manipulate them. This is where the problems come in.
Captain Code said:He means that the other 32 bits are unused and are set to zero. There's 64 bits in the register and you can't change that, so it basically ignores the upper 32 bits and uses the lower 32 bits for the pointer.
Same thing if you use a "short" on a 32 bit CPU. The upper 16 bits are useless in that case.
Krevinek said:(void) typecasts are invalid constructs in C, as void is the 'lack of a type'. (void *) is still valid, but is still a pointer, just like an int pointer or a long pointer, or every other type.
Plus, anyone caught writing pointers to a file should be fired on the spot in my book. This is where problems really occur, when data is written (you said so yourself). Pointers are temporary variables at best, and do not need to exist beyond the scope of the program's memory space. Hence, they pose no problem.
Additional: Plus, it isn't hard to 'shrink' data when serializing it to a file. If you design your file format well, this won't bite you, because you write each record using fixed-length values rather than relying on sizeof(int). I wouldn't want to keep around a programmer relying on that practice either, as you never know what platform some code might wind up running on.
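A minimal sketch of what "fixed-length values" could look like in practice. The helper names and the little-endian byte order are my own choices, not from the thread; the point is that the on-disk record is always 88 bytes no matter what the compiler does to the in-memory struct:

```c
#include <stdint.h>
#include <stdio.h>

/* Write a 32-bit value as exactly 4 bytes, little-endian,
   regardless of sizeof(int) on the host. */
static void write_u32le(FILE *fp, uint32_t v) {
    unsigned char b[4] = {
        (unsigned char)( v        & 0xFF),
        (unsigned char)((v >>  8) & 0xFF),
        (unsigned char)((v >> 16) & 0xFF),
        (unsigned char)((v >> 24) & 0xFF),
    };
    fwrite(b, 1, 4, fp);
}

/* Record layout on disk: 4-byte id, 80-byte name, 4-byte price --
   always 88 bytes, with no struct padding or int-size surprises. */
static void write_product(FILE *fp, uint32_t id,
                          const char name[80], uint32_t price) {
    write_u32le(fp, id);
    fwrite(name, 1, 80, fp);
    write_u32le(fp, price);
}
```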
Viro said:I meant void pointers. That's the only time anyone would typecast anything to void and I thought the meaning would have been obvious to anyone who knew basic C.
A pointer on the G5 is going to be 32 bits *if* the G5 is running a 32 bit OS. Don't believe me? Do sizeof and tell me what you get. Lurk has explained it very well and I'll leave that to him.
sizeof() is computed at compile time. An int on an app you compiled 2 years ago is still gonna be 32-bits when it runs on Tiger. It changes when you recompile using a compiler set for 64-bit ints. People aren't aware you have to turn on 64-bit ints in GCC to get a sizeof(int) to return 8.
That's a non sequitur. No one writes pointers to files unless they have no clue what they're doing. The problem is with binary files, and especially with structures: the move to 64 bits will cause the structures to become larger. This is what causes problems, and I thought the code I posted made that clear. It demonstrates writing the actual data to a file, not pointer values.
Then why bring up void pointers as a problem? Clearly they aren't one. I was addressing a comment you made about pointers being a problem. They aren't. The data size is.
This isn't going to work. If you have a structure like the one in the listing I posted, moving to 64-bit ints will cause it to grow from 88 bytes to 96 bytes. Reading/writing 88 bytes when the size of the structure is 96 bytes is going to bite you in the behind. Hard.
There are easy ways around this, and hard ways around this. The easy way would be to force the compiler to use 32-bit ints when compiling G5-only code. The hard way would be to change it to 'long' (search and replace, anyone?), or to write up a nice set of typedefs/macros that let you declare exactly the size you want... Heck, every platform has 'em (int32, int64, int8, int16, etc). One of the first things I did when starting cross-platform development was to make one of these... I still keep it around. Either way, the hard way is what should have been done from the start. I thought people learned from the 16->32 bit mess.
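A sketch of the hand-rolled fixed-width header described above. (C99's `<stdint.h>` now provides standard names like `int32_t` for exactly this; the typedefs and the build-time size check below are my own illustration, and the `int`-is-32-bits assumption is flagged where it's made.)

```c
/* Hand-rolled fixed-width types of the kind described above,
   for compilers predating C99's <stdint.h>. */
typedef signed char    int8;
typedef unsigned char  uint8;
typedef short          int16;
typedef unsigned short uint16;
typedef int            int32;   /* ASSUMPTION: int is 32 bits here */
typedef unsigned int   uint32;

/* Compile-time guard: if int32 is not 4 bytes, the array below
   gets a negative size and the build fails immediately. */
typedef char int32_is_4_bytes[(sizeof(int32) == 4) ? 1 : -1];
```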
This is why the code at the very least needs to be changed. The ints in the structure will have to be converted either to shorts or longs, whatever suits it.
Yup, honestly not a big deal yet though unless you are going to start writing G5-only code soon.
Writing portable binary files for different platforms is very difficult, and a lot of considerations need to be taken into account. If you can, read the chapter on "Offline Data Storage and Retrieval" from the book "C Unleashed", which is a very good source of info on this topic. It goes into all the pitfalls of writing portable code, and the solution ain't as easy as you're making it out to be.
Yup, well aware of the pitfalls, but I don't personally find it 'very difficult'. Planning for endian-ness, especially in network apps is key. Planning for fixed record sizes is key, and int should NEVER be used to describe data that will hit the network or disk. Plus, a fair number of the solutions require no effort on the part of the programmer beyond making sure a couple compile switches are set right. You aren't going to be using 64-bit ints? No problem, don't enable em. You made it sound as if us programmers are being forced to move to 64-bit clean code, which is not true. The "performance increase" from moving to 64-bit isn't gonna do jack for most applications either.
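On the endianness point: the classic approach is to convert values to a fixed byte order before they hit the wire or disk. A minimal sketch using the BSD `htonl`/`ntohl` calls (the helper names `pack_u32`/`unpack_u32` are mine, chosen for illustration):

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* htonl / ntohl */

/* Pack a 32-bit value into a buffer in network (big-endian) order,
   so the bytes are the same no matter which CPU wrote them. */
static void pack_u32(unsigned char *buf, uint32_t v) {
    uint32_t be = htonl(v);
    memcpy(buf, &be, 4);
}

/* Unpack a big-endian 32-bit value back into host order. */
static uint32_t unpack_u32(const unsigned char *buf) {
    uint32_t be;
    memcpy(&be, buf, 4);
    return ntohl(be);
}
```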
Although, to try to move back onto the topic to some extent... your apps from 10.3 will be about as stable on 10.4 as 10.2 apps are on 10.3... in other words... nobody needs to worry about Tiger breaking anything that wasn't broken to begin with (using undisclosed APIs).
Krevinek said:Yes, thank you. The PPC spec states how instructions work. Every instruction (well, nearly every instruction) specifies how many bits you want the result to be in. So when you load a number into a register, you have to specify how many bits are going to be loaded. (8, 16, 32, and now 64 are available, or have been, since the FPU has been 64-bit for quite a while.)
Krevinek said:After that, once in the register it is treated as if it were a native-size number. So if I load an 8 bit number on a G5, the processor treats it like a 64 bit number while in the register. Math instructions even require that you tell them how many bits you want in the result. All the original math calculations are done at the native size, then 'chopped' to the desired size, with the remaining bits properly set from the result data.
Krevinek said:So when you do pointers (which are just unsigned numbers!), the CPU instructions still require that operations on them declare the desired bit size for the result. So, in fact when you run a 32-bit app on a G5, it is native from the standpoint that there is no 'emulation' involved. It simply tells the G5 it wants the results as 32-bit results on the pointers, although all the actual math is done at the native 64-bits behind the scenes. No bits need to be set, no funny stuff, it just works.
Krevinek said:Another example is that the PPC FPU is double-precision, so even if you do floating point math on two single-precision numbers, it does the math as double precision internally, and chops the result to a single-precision number if that is what the instruction said to give the result as. The whole design of the PPC is that it could be expanded to larger register sizes without impacting how previous scales worked, or incurring massive performance penalties.
Krevinek said:I have this odd urge to write a 16-bit PPC app in assembly now...