Bignums, take 3

Mikuro

Crotchety UI Nitpicker
Okay, I already posted this TWICE, but it keeps getting deleted by DB errors. :mad: First, I'll give a rundown of my original post, and then I have two more questions.

I couldn't figure out how to initialize a vU1024 (1024-bit unsigned integer from the Accelerate framework). I couldn't use any simple = operator to assign it a value.

Someone (I think it was Viro; thanks!) said that I need to assign values to each of the 32 integer elements individually, like so:
Code:
vU1024 booga;
booga.s.MSW = 0; //MSW = Most Significant Word
booga.s.d2 = 0;
//...step through d3~d30
booga.s.d31 = 0;
booga.s.LSW = 0; //Least Significant Word
//then use the arithmetic functions described in vBigNum.h
And that seems to work. Great! But boy, is it a hassle. Now for question #1:
I sometimes use this form to assign values to structs, like NSRects:

NSRect r = (NSRect){ {0,0} , {0,0} };

Since the 's' property of vU1024 is a struct, I feel like I ought to be able to do something similar, but I'm not quite sure how. This...

booga.s = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};

...results in a "parse error before '{' token", I guess because I have no typecast. But how should I typecast it? I can't typecast it as vU1024, because I'm not assigning the entire object, just its 's' sub-struct.

Now, for question #2:
The vU1024 type comprises more than just this struct. It also has an 8-element array of vUInt32's, and a struct called 'vs' which has, again, 8 vUInt32's. So...what do these do? Should I be assigning them values as well?
 
vU1024 isn't a struct, it's a union. That's a big difference there. With a struct, each element is allocated its own space. Thus, if you have a struct that is made up of 4 char fields, the size of the struct is going to be 4 bytes (assuming the compiler doesn't pad bytes as an optimization). A struct that is made up of 1 int and 2 chars is going to be 6 bytes in size. You see the pattern. To find the size of a struct, just add up the sizes of the individual fields.

Unions are very very different. The size of a union is equal to the size of the largest member. Therefore, if you had a union of 4 chars, the size is actually going to be 1 byte, instead of 4 bytes if it were a struct. In a union of 1 int and 2 chars, the size of the union is going to be 4 bytes, because the largest field (the int) is 4 bytes.

With vU1024, the size of the union is 1024 bits, because the largest field is s, which is 1024 bits. Unions are funky like this, because all the fields are aliased onto one another. Thus, if you assign to s.MSW, you're touching the same bytes as the start of vs.v1 or v[0]. It works the same way, since they all alias the same memory.

As for the compile error you're getting, I've got no suggestions :(. It looks like it should work, but I don't remember if unions are treated any differently to structs.
 
Ooooooohhhhh! Now that makes a lot of sense. So I can manipulate the same data as several different data types. I just have to pick the representation I want to use and run with it. Cool. Thanks a lot for the explanation!

Well, I can't get the struct-assignment thing working, so instead, I just made a few functions. In case anyone else is interested:

For *.h:
Code:
vU1024 make_vU1024(int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int,int);
vU1024 int_to_vU1024(int);
vU1024 random_vU1024(void);
For *.m:
Code:
vU1024 make_vU1024(int MSW,int d2,int d3,int d4,int d5,int d6,int d7,int d8,int d9,int
				 d10,int d11,int d12,int d13,int d14,int d15,int d16,int d17,int d18,int d19,
				 int d20,int d21,int d22,int d23,int d24,int d25,int d26,int d27,int d28,int d29,
				 int d30,int d31,int LSW)
{
	vU1024 theResult;
	theResult.s.MSW = MSW;
	theResult.s.d2 = d2;
	theResult.s.d3 = d3;
	theResult.s.d4 = d4;
	theResult.s.d5 = d5;
	theResult.s.d6 = d6;
	theResult.s.d7 = d7;
	theResult.s.d8 = d8;
	theResult.s.d9 = d9;
	theResult.s.d10 = d10;
	theResult.s.d11 = d11;
	theResult.s.d12 = d12;
	theResult.s.d13 = d13;
	theResult.s.d14 = d14;
	theResult.s.d15 = d15;
	theResult.s.d16 = d16;
	theResult.s.d17 = d17;
	theResult.s.d18 = d18;
	theResult.s.d19 = d19;
	theResult.s.d20 = d20;
	theResult.s.d21 = d21;
	theResult.s.d22 = d22;
	theResult.s.d23 = d23;
	theResult.s.d24 = d24;
	theResult.s.d25 = d25;
	theResult.s.d26 = d26;
	theResult.s.d27 = d27;
	theResult.s.d28 = d28;
	theResult.s.d29 = d29;
	theResult.s.d30 = d30;
	theResult.s.d31 = d31;
	theResult.s.LSW = LSW;
	return theResult;
}

vU1024 int_to_vU1024(int LSW) {
	return make_vU1024(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,LSW);
}

vU1024 random_vU1024(void) {
	static int seeded = 0;
	if (!seeded) { // seed once; reseeding on every call can repeat values within the same second
		srandom(time(NULL));
		seeded = 1;
	}
	return make_vU1024(random(), random(), random(), random(), random(), random(), random(), random(),
					   random(), random(), random(), random(), random(), random(), random(), random(),
					   random(), random(), random(), random(), random(), random(), random(), random(),
					   random(), random(), random(), random(), random(), random(), random(), random());
}

So now I can just say int_to_vU1024(0) to initialize a vU1024 to 0. Easy enough for me.

Oh, one thing I should mention: I think the random_vU1024 function I made won't be as random as it should be, because the random() function only returns values from 0~2^31 (i.e., non-negative signed ints). So every 32nd bit from the right will always be 0. But it's good enough for me for now.
 
I just realised one thing as well. You shouldn't try to assign data to the struct at initialization. Doing booga.s = {0,0,0,0,.......etc} is gonna be fine on the PowerPC, but once Apple switches to Intel, you're gonna be in trouble. PowerPC and Intel machines have different byte orders. On the Intel machines, you'll have to swap everything around. Another reason why I dislike the move to a different processor: it makes developers' lives harder.

The procedure method is much better.
 
.......I had not considered that. >_< I can think of at least a few of my programs that will be broken by that alone, and will require some pain-in-the-butt workarounds to maintain compatibility with both platforms. It's a year before Intel Macs even hit the market, and they're already giving me headaches... (And while I'm venting...don't the people at Intel know how to count?!? You know, the big numbers go first. That's the way we've been doing it for over a millennium. But then again, I guess the PC world has never liked actual standards. ;))

Does this mean that any program using Apple's current array of vUInt32's to populate the vU1024 union will not survive the switch to Intel?
 
That is why you should use the fields in struct s. That way, even if the layout switches round, your code will still be fine once you recompile it, since you weren't accessing the memory location directly; rather, you were depending on the fields themselves.

I don't know the deal with arrays. Have a look at vBigNum.h. You'll see that with the Intel version, MSW is where the LSW is for the PPC version. My guess is that all the elements in the array will be swapped round. Thus if you used to access v[0] on PPC, the same operation will require you to access v[7] instead if you compiled for Intel.

Wonder how those guys managed to port Mathematica so easily :).
 
Viro said:
Thus if you used to access v[0] on PPC, the same operation will require you to access v[7] instead if you compiled for Intel.

Nope. It will still be v[0]. Think of it this way: if we look at the addresses in the computer as a series of bytes and not integers, then both big- and little-endian machines see the exact same thing. The difference comes when we look at an integer (which is 4 bytes in this example) and ask how we fit it into those four bytes. Basically, do we write it from left to right or right to left?

Now there is some room for confusion with 64-bit values. Are they written as two 32-bit words or a single 64-bit word? Think about the possible combinations and you'll see why things get tricky.

Sorry if that is not coherent enough, coffee hasn't kicked in....
 
Cool. Thanks for clearing that up. So the bytes are only reversed within individual data types like ints, floats, etc. But if they are in arrays, the element order is still the same?

How about structs? I'm guessing this will affect structs, which is why in the Apple headers for vBigNum.h, the definitions of the structs are reversed when they are defined for x86. Can anyone confirm this?
 