Panther not really 64 Bit?

vikingshelmut

100% Bull Plop
After watching the WWDC keynote live, reviewing every Panther-related page on apple.com, and reading numerous other reports from WWDC, I have seen absolutely NO mention of whether or not Panther is actually compiled to be 64-bit native, or whether it will run in 32-bit mode like every other app demoed. Does anybody know more about this?

I know it would make sense for it to be 64-bit, but no mention of it? It would be a great marketing point, so it seems odd to have just left it out.

Also, since I will be able to purchase a copy later this year and install it on my 15" Flat Panel, would it possibly still run on my machine if it was 64bit? Will Apple be actually selling 2 different versions of Panther?

If anybody can provide any insight into this question, please contribute. The G5 looks amazing, but running a 32-bit OS on 64-bit hardware seems like kind of a waste.

(BTW, I understand that running a 64bit version of Panther may not actually provide any benefit over the 32bit version on a G5 since overhead is not that great compared to Photoshop, etc...)
 
That is actually a good question.

No, Panther is not going to be 64-bit native.

That having been said, parts of Panther are going to be compiled for optimum performance on the G5. The closest example historically would be the versions of the Mac OS that could run on both 680x0 and PowerPC systems. It should run great on a G4. Also, considering that a developer release was handed out to people at WWDC and most of them do not have access to a G5 system, I think that should tell you that it is not a G5 only operating system.

Some of the best parts of Panther look like they are going to need better graphics support, so those of us with ATI Rage 128s in our systems may be at the cutoff of systems it'll run on.

If Apple's move to 64-bit is anything like Silicon Graphics' move, we have very little to worry about. I started out with 32-bit systems and they worked great. When I got my first 64-bit system, the only OS I had to run on it was a 32-bit version of IRIX. It ran great, never had any problems. When I finally bought a 64-bit version of IRIX to run on that system, I didn't notice any great improvement in speed or anything (I mainly wanted the better version of QuickTime capturing the new system came with). And all my old 32-bit apps still ran great.

I think IBM and Apple have made this move as easy on their clients as they could. You really have very little to worry about from any of the announcements that were made. At this point having Mac OS X be 32-bit is a better course of action than having a separate 64-bit version like what Windows has done (which is why there are no other 64-bit desktop computers). It would be very hard to try and support (and develop for) both a 32-bit Mac OS X and a 64-bit Mac OS X. The first version of Mac OS X to go totally 64-bit native would be Mac OS X Server... and that is something that is not that far away, I would guess.

Remember that the first PowerPC systems shipped with a version of System 7.1 and yet the first PowerPC only version was Mac OS 8.5. Apple doesn't leave older systems behind very fast (and they know that neither do their users).
 
Copied from Apple's G5 White Paper PDF:

Native Compatibility with 32-Bit Application Code
On other platforms, switching to a 64-bit computer requires migrating to a 64-bit operating system (and purchasing 64-bit applications) or running a 32-bit operating system in a slow emulation mode. With the PowerPC G5, the transition to a 64-bit system is seamless: Current 32-bit code, such as existing Mac OS X and Classic applications, runs natively at processor speed, with no interruptions to your workflow and no additional investment in software.
This easy compatibility is possible because the PowerPC architecture, unlike competing instruction sets, was designed from the beginning to run both 32-bit and 64-bit application code. And because the PowerPC G5 uses the same Velocity Engine instruction set introduced in the PowerPC G4, applications that have been optimized for Velocity Engine will immediately run faster on the new processor. What's more, as applications are optimized and as Mac OS X is further enhanced for the PowerPC G5, performance gains will be even greater.
 
Originally posted by RacerX
Remember that the first PowerPC systems shipped with a version of System 7.1 and yet the first PowerPC only version was Mac OS 8.5. Apple doesn't leave older systems behind very fast (and they know that neither do their users).

Just because 8.5 required a PPC to run, doesn't in any way mean that the OS used no 680x0 code, as your message implies. It's entirely possible (and actually true in this case) that they could have used a few PPC-only instructions, which would have made it at least partially incompatible with 680x0 - as well as introduced an "artificial" limitation (what do you think will prevent OS 9 from booting on a G5? Some minor, artificial limitations)

No, Panther is not going to be 64-bit native.

Do you have a source on this, or are you just creating rumors based on speculation and "deduction"?
 
Originally posted by RacerX
No, Panther is not going to be 64-bit native.

I would think that the fact they are shipping boxes with 8 gigs of memory would say that it most certainly must be. Those little 32-bit pointers can only see 4 gigs max, and I cannot imagine them moving to a segmented memory model to get around that fact.
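lurk's arithmetic is easy to check; a quick sketch of the ceiling a flat 32-bit pointer imposes (pure back-of-the-envelope, nothing Apple-specific):

```python
# How much memory can a flat pointer address? One address per byte.
GIB = 2 ** 30

max_bytes_32 = 2 ** 32      # every value a 32-bit pointer can hold
max_bytes_64 = 2 ** 64

print(max_bytes_32 // GIB)  # 4 -> the 4 GiB ceiling lurk describes
print(max_bytes_64 // GIB)  # 17179869184 GiB (16 EiB), room to spare
```

So an 8 GB machine simply has more bytes than a single 32-bit pointer can name, which is the heart of lurk's objection.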
 
Fuzz -
I read that, and it doesn't clearly say that OS X is 32-bit. All it says is "existing Mac OS X APPLICATIONS", not Mac OS X itself.

Lurk -
This is exactly the type of info I am looking for. I honestly don't know much about OS design, but your statement makes sense to me. Thanks for the info.

Anybody else have any experience with 64-bit platforms that could help to further dissect the question?
 
More rumor and unsubstantiated opinions....

I've read somewhere, and I can't think where (maybe Ars Technica?), that the current OSX is not yet fully optimized for PPC and could run faster if not for compatibility with 68k programs.

I hate to inject this thread with more FUD but if other people have seen this it might give weight to the argument that OSX will have a long wait before it's fully 64bit.
 
Whoa, 68k emulation still in OSX, I really doubt it. OS9 perhaps, but not OSX.

Just because the hardware can support 8GB RAM doesn't mean that the OS can. After all, if Apple intends to release a full 64-bit OS in, say, a one-year time frame, it would be easier for them to ship hardware today that can take FULL advantage of it tomorrow. It's a lot easier to explain to people why they can only access 2GB today but will have access to the full 8GB later, than it would be to explain to someone who plunked down $5K for a TOL G5 a year previous why they can only access 2GB even though the OS can support 1024PB (petabytes).

Plus Apple could have tweaked Panther to access more memory without going fully 64bit. The OS can have full access to the extended address registers and allow programs that have been compiled specifically to take advantage of 64bit pointers. Now whether or not this is worthwhile is highly debatable, but certainly doable.

Oh, and about segments. Actually, segmented memory rocks. What sucks is when your segments are limited to 64k in size; now that sucks. With a 64-bit proccie, you could have 4 billion segments of 4GB each, or, even more important, segments of 1 byte/word. It's a lot easier to do memory protection using segments (make a single word read-only, detect an out-of-range access to a segment), plus you can do all that fun stuff like overlaying segments, with each segment having different properties. That kind of stuff is harder and less efficient when you're dealing with a large address space where you're often limited to applying protection in page-sized chunks (2k/4k/8k).
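binaryDigit's segment tricks can be sketched as a toy model (all names invented, nothing PowerPC- or x86-specific): each segment carries its own base, limit, and permissions, so out-of-range and read-only violations fault at translation time.

```python
# Toy model of segment-based protection: every segment has a base, a
# limit, and its own permissions, so even a tiny region can be read-only.
class Segment:
    def __init__(self, base, limit, writable):
        self.base, self.limit, self.writable = base, limit, writable

def translate(seg, offset, write=False):
    """Turn a segment-relative offset into a 'physical' address,
    enforcing bounds and permissions the way the MMU would in hardware."""
    if offset >= seg.limit:
        raise MemoryError("out-of-range segment access")
    if write and not seg.writable:
        raise PermissionError("write to read-only segment")
    return seg.base + offset

code = Segment(base=0x1000, limit=0x4000, writable=False)  # read-only
data = Segment(base=0x8000, limit=0x2000, writable=True)

print(hex(translate(data, 0x10, write=True)))  # 0x8010
try:
    translate(code, 0x4, write=True)           # self-modifying code? no.
except PermissionError as err:
    print(err)                                 # write to read-only segment
```

Page-based protection can only make whole 2k/4k/8k pages read-only; in this model the granularity is whatever `limit` you pick, which is binaryDigit's point.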
 
Hey, let's give RacerX a little credit here! He probably has more experience with computers than all of us in this thread combined. Plus the fact that he is a Rhapsody user gives him insight into the roots of OS X very few people on these boards have.


Originally posted by Ripcord
Just because 8.5 required a PPC to run, doesn't in any way mean that the OS used no 680x0 code, as your message implies. It's entirely possible (and actually true in this case) that they could have used a few PPC-only instructions, which would have made it at least partially incompatible with 680x0 - as well as introduced an "artificial" limitation (what do you think will prevent OS 9 from booting on a G5? Some minor, artificial limitations)

There are a LOT of hardware changes made to the new G5 machines that prevent them from booting OS 9. It's not like you could pop the latest OS 9.2.2 CD in these machines and expect them to boot, and it's certainly not like Apple would deliberately restrict OS 9 booting because it wanted to create "artificial limitations", as you put it. Check the Apple Store online right now and you'll see that the 1.25 GHz G4 machines all boot OS 9.

When you write any OS, it is specifically written to work with certain hardware. OS 9 can't handle 8 GB of memory! Its core memory management system was written when 8 GB hard drives weren't even available. Considering Apple disbanded the OS 9 team months ago, there will not be any OS 9 for new Mac hardware; not because Apple deliberately wants to make you work with OS X, but because it would be very costly and time-intensive to rework OS 9 for the new G5 machines. In fact it would probably need massive architectural changes that would make a lot of programs no longer function.

Do you have a source on this, or are you just creating rumors based on speculation and "deduction"?

RacerX is not one to spread gossip and unsubstantiated rumors. He deserves more trust than that.

Yes, Panther will be 32 bit - it would be a big deal if it were 64 bit because it would only work on the G5s. Imagine what a suicidal move that would be. Also, if it were 64 bit don't you think Apple would advertise that?
 
I reckon it'll be shipped in both 32- and 64-bit versions. They made a point in the keynote of showing how easy it is to convert apps to 64-bit.

Maybe they will do the same with the OS?
 
by Ripcord:
Just because 8.5 required a PPC to run, doesn't in any way mean that the OS used no 680x0 code, as your message implies.

I didn't imply anything of the sort. 8.5 was the first operating system by Apple that had enough native elements to make it unbootable by a 680x0 system. I have a number of 680x0 systems. I have just about every operating system from Apple (I still need 7.6, but that is way off topic), and I've seen the difference in how PPC systems run under 7.1, 7.5, 8.0/8.1 and 8.5, and every time I've installed 8.5 on a system running some earlier version, the performance has increased.

It's entirely possible (and actually true in this case) that they could have used a few PPC-only instructions, which would have made it at least partially incompatible with 680x0

Define few? The difference in how 8.5 runs compared to 8.1 should be enough evidence that it was at the very least an important few.

- as well as introduced an "artificial" limitation (what do you think will prevent OS 9 from booting on a G5? Some minor, artificial limitations)

So what, in your experience, was the reason that no Mac OS has ever been able to boot Apple's WGS 500/700? These are PPC604-based computers. Made by Apple. Yet you can't run any Apple operating systems on them (not any Mac OS, not any version of Rhapsody, not any version of A/UX, not any version of Mac OS X/Mac OS X Server).

Maybe I'm misreading you here. By "artificial" you could mean any element of the computer, like the motherboard, that is preventing 9 from booting these systems. But wait, does Mac OS 9.0 actually boot the systems Apple sells today that can boot Mac OS 9.2.2? Why can't those systems just run 9.0? What artificial elements are stopping people from doing that? Beyond progress in hardware development, of course.

Or do you think there is a plot to keep people from booting the G5 with System 6.0.8?

Do you have a source on this, or are you just creating rumors based on speculation and "deduction"?

RE: No, Panther is not going to be 64-bit native.

I've posted enough on these boards to not have to worry about this. But yes, I've talked with some people at WWDC, but you shouldn't need that to see that what I'm saying is correct.

Besides, I always thought I was a source. :D

by lurk :
I would think that the fact they are shipping boxes with 8 gigs of memory would say that it most certainly must be. Those little 32 bit pointers can only see 4 Gigs max and I cannot imaging them moving to a segmented memory model to get around that fact.

RE: No, Panther is not going to be 64-bit native.

By that logic I guess we can run a 64-bit OS on 32-bit hardware? And since the G5s are most likely going to ship with 10.2.x, we must already be using a 64-bit OS.

I'm sure that Apple is going to have the installation check the hardware and install the parts needed to take advantage of the new hardware. Which is why I said that "parts of Panther are going to be compiled for optimum performance on the G5." It helps when people read all of what I've said and not just parts they want to argue with.
 
Originally posted by binaryDigit
Just because the hardware can support 8GB RAM doesn't mean that the OS can. After all, if Apple intends to release a full 64bit OS in say a 1 year time frame, it would be easier for them to ship hardware today that can take FULL advantage of it tomorrow. It's a lot easier to explain to people why they can only access 2GB today but have access to the full 8GB later, then it would be to explain to someone who plunked down $5K for a TOL G5 a year previous why they can only access 2GB even though the OS can support 1024PB (Peta Bytes).
I must be reading this wrong because it just doesn't parse. Right now Apple is charging $3750 to put 8GB in a new machine; when you get it home and it only says 2GB available, do you think customers will be happy hearing that the rest will show up next year? Now if they were limiting them to shipping with 2GB, with 6 open DIMMs for later, I might agree...

Plus Apple could have tweaked Panther to access more memory without going fully 64bit. The OS can have full access to the extended address registers and allow programs that have been compiled specifically to take advantage of 64bit pointers. Now whether or not this is worthwhile is highly debatable, but certainly doable.
It is not just a question of setting up the extended registers for 64-bit addresses. If that is all you do, then you can fake a 64-bit address space for a program, but it will still be limited to accessing only the first 32 bits of memory. Basically you stick a bunch of zeros on the front of the pointer. Now if you want to access anything beyond the base 32 bits of memory, you need to expand all of the internal data structures to be 64-bit, for instance the mappings from virtual to physical memory addresses. Once that is all said and done you can have a user process which accesses more than a 32-bit address space. But you know what you just did: basically you now have a 64-bit kernel.
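A minimal sketch of the "stick a bunch of zeros on the front" step lurk describes (pure illustration, not Apple's code):

```python
# Zero-extending a 32-bit address into a 64-bit address space: the value
# is unchanged, so flat-model 32-bit code keeps working unmodified, but
# it can never name anything above the bottom 4 GiB.
def zero_extend(addr32):
    assert 0 <= addr32 < 2 ** 32, "not a 32-bit address"
    return addr32                  # upper 32 bits are implicitly zero

p64 = zero_extend(0xDEADBEEF)
print(hex(p64))                    # 0xdeadbeef: same location as before
print(p64 < 2 ** 32)               # True: the rest of the space is unreachable
```

Which is exactly why zero-extension alone doesn't get you past 4 GiB; the kernel's own virtual-to-physical data structures have to widen too.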

This is really a much easier conversion than going to 32 bits was, because we are starting from a flat (non-segmented) address space and we can basically make any 32-bit program 64-bit by just tacking zeros on the front. But that really touches on your next point, so...


Oh, and about segments. Actually segmented memory rocks. What sucks is when your segments are limited to 64k in size, now that sucks. With a 64bit proccie, you could have 4billion segments of 4GB each or even more important 1 byte/word. It's a lot easier to do memory protection using segments (make a single word read only, detect an out of range access to a segment), plus you can do all that fun stuff like overlaying segments with each segment having different properties. That kind of stuff is harder and less efficient when you're dealing with a large address space where you're often limited to applying protection in page sized chunks (2k/4k/8k).

Ah, but at what cost? (I don't think that PPC even supports segments, so this is prolly all academic, but hey, that can be fun.) The problem with segments and the admittedly neat things you can do with them is that they significantly complicate both programs and the OS.

  • Pointer equivalence is no longer a simple comparison. At best it is two comparisons, and at worst it is a system call to see if two segments map to the same location.
  • The compiler must generate different code for inter- vs. intra-segment operations. This forces us to always keep the memory model in mind.
  • Virtual Memory is complicated as you cannot know the special attributes of segments a page may intersect without walking the segment descriptors. (If you only allow for address translation this is not a problem but if you want all the cool segment tricks it is.)
  • Non-local segment accesses are slower.
  • Yadda yadda yadda...

Don't get me wrong, I programmed using segments in DOS and OS/2, and they were a good solution to the architectural issues surrounding programming on small machines. But I certainly don't miss those days ;)

-Eric
 
Originally posted by lurk

But you know what you just did, basicially you now have a 64 bit kernel.

I agree, and again, perhaps as a stopgap, this is what Apple will ship. After all, just like you said, how else is Apple going to allow you to access anything above 2GB? Well, either you can't, or Apple did some tweaks to allow you to do it (assuming that Panther is not fully 64-bit, which is what this is all predicated on).


Ah but at what cost? (I don't think that PPC even supports segments so this is prolly all academic but hey that can be fun.) The problem with segments and the admitidly neat things you can do with them is that it significantly complicates both programs and the OS.

  • Pointer equivalence is no longer a simple comparison. At best it is two comparisons, and at worst it is a system call to see if two segments map to the same location.
  • The compiler must generate different code for inter- vs. intra-segment operations. This forces us to always keep the memory model in mind.
  • Virtual Memory is complicated as you cannot know the special attributes of segments a page may intersect without walking the segment descriptors. (If you only allow for address translation this is not a problem but if you want all the cool segment tricks it is.)
  • Non-local segment accesses are slower.
  • Yadda yadda yadda...

Don't get me wrong, I programmed using segments in DOS and OS/2, and they were a good solution to the architectural issues surrounding programming on small machines. But I certainly don't miss those days ;)

-Eric

Ah yes, DOS and 16-bit OS/2, those were the days. Anyway, I think your experiences are clouding your thoughts (easy to understand; there are parts of my brain that are forever unusable due to near/far abuse). I'm talking about single segment sets per process (i.e. one CS, one DS, one SS). The applications programmer doesn't worry about inter-segment math, because to them they only see one big 2GB memory space; this takes care of your first two points.

As for more overhead, the proccie is supposed to take care of that for you, right? You don't have to worry about checking for write permissions to a code segment, because any attempt to write into it will generate a fault. And as for the VM optimization, this is a point I brought up on /. recently: today's CPUs are so fast, yet all that power is put into making desktops turn into rotating cubes, versus utilizing that power to help create a better environment. If there is a 10% performance hit, aren't we all better off having a more robust environment versus that extra 10%?

IIRC, this is what 32-bit OS/2 did, and its kernel memory management was awesome. You could do some really cool stuff (sparse memory allocations being one of my favorites).
 
First RacerX said...
No, Panther is not going to be 64-bit native.

That having been said, parts of Panther are going to be compiled for optimum performance on the G5. The closest example historically would be the versions of the Mac OS that could run on both 680x0 and PowerPC systems. It should run great on a G4. Also, considering that a developer release was handed out to people at WWDC and most of them do not have access to a G5 system, I think that should tell you that it is not a G5 only operating system.


Then I said...
I would think that the fact they are shipping boxes with 8 gigs of memory would say that it most certainly must be. Those little 32 bit pointers can only see 4 Gigs max...

To which RacerX replied...

By that logic I guess we can run a 64-bit OS on 32-bit hardware? And the G5s are most likely going to ship with 10.2.x, we must already be using a 64-bit OS.

I'm sure that Apple is going to have the installation check the hardware and install the parts needed to take advantage of the new hardware. Which is why I said that "parts of Panther are going to be compiled for optimum performance on the G5." It helps when people read all of what I've said and not just parts they want to argue with.

Now my coffee just must not be kicking in this morning, because I am having all sorts of problems deciphering the logic in this thread. How could you infer that I said the 64-bit kernel (which is what this is all about) would be the one running on older 32-bit hardware? Of course such a statement is asinine.

A new 64-bit vs. 32-bit kernel is not an "optimization"; it is a significant difference, and it is the only one which defines an OS to be "64-bit native". Now, the other machine I have access to which claims to be 64-bit native is my UltraSPARC, and it even comes with a 32-bit kernel if you would like to use it. But guess what: if you are running the 32-bit kernel on that 64-bit hardware, you will only be able to see the memory which will fit into the 32-bit address space. The rest is wasted. As a side note, I ran on that 32-bit kernel for a long time because the 64-bit one had some bugs in it that interfered with the programs I was running; if a 64-bit kernel were just an optimization, I figure it would not have taken Sun so long to get it right.

If this is an argument over whether "cat" and "ls" will be compiled as 64-bit programs, the answer is that they most likely never will be. Running in a 64-bit address space is more expensive in terms of resources and can be slower overall, so only things which would actually benefit from being 64-bit would be compiled as such. If the program is happy with 32-bit addresses, it will be just fine.

As I typed that, I see that there is still one area of confusion which could lead to more argument. The distinction between 32- and 64-bit programs is based on the address space they request from the kernel, not the size of the registers they use for computations. So your 32-bit program can add 64-bit integers to its heart's content. Such a program would not run on a G4 because it was "optimized" for the G5. This case of optimization is a direct parallel to that of trying to run AltiVec code on a G3.

Tricky huh..
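lurk's register-width vs. address-width distinction can be made concrete with a toy example: 64-bit integer arithmetic never needed 64-bit addresses. Here a 64-bit add is built from 32-bit halves plus a carry, roughly what a compiler has to emit for 32-bit-only hardware (a G5 just does this in one instruction):

```python
MASK32 = 0xFFFFFFFF

def add64_with_32bit_ops(a, b):
    """Add two 64-bit values using only 32-bit chunks plus a carry,
    modulo 2**64, the way a compiler lowers it for 32-bit hardware."""
    lo = (a & MASK32) + (b & MASK32)
    carry = lo >> 32
    hi = ((a >> 32) + (b >> 32) + carry) & MASK32
    return (hi << 32) | (lo & MASK32)

print(hex(add64_with_32bit_ops(0xFFFFFFFF, 1)))  # 0x100000000: needs bit 32
```

The program's pointers stay 32-bit throughout; only the arithmetic is wide. That is why "does 64-bit math" and "is a 64-bit program" are separate questions.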
 
Originally posted by binaryDigit
Ah yes, DOS and 16bit OS/2, those were the days. Anyway, I think you're experiences are clouding your thoughts (easy to understand, there are parts of my brain that are forever unusable due to near/far abuse). I'm talking about single segment sets per process (i.e. one CS, one DS, one SS). The applilcations programmer doesn't worry about inter segment math, because to them they only see one big 2GB memory space, this takes care of your first two points.
Almost, but then you still cannot compare an automatic variable allocated on the stack with one malloced from the heap: they are in different segments. Although I understand your motivation of not having your stack bump into your code or heap; I have cursed that on many an occasion. ;-)

As for more overhead, the proccie is supposed to take care of that for you right? You don't have to worry about checking for write permissions to a code segment, because any attempt to write into it will generate a fault. And as a VM optomization, this is a point I brought up on /. recently about todays cpu's being so fast, but yet all that power is put into making desktops turn into rotating cubes, vs utilizing that power to help create a better environment. If there is a 10% performance hit, aren't we all better off by having a more robust environment vs that extra 10%?

You should look at the system architecture of the old Symbolics Lisp Machines. Every 32-bit word had an 8-bit type tag, and garbage collection was built into the hardware. It was a beautiful thing to behold, and super robust because of the integrated typing, but alas they have passed from this earth. I fully agree with your sentiment, but I think you need to read Richard Gabriel's "Worse is Better" essay. (Then again, if I fully agreed with him I would not own a Mac.) :)
 
I think at least the kernel will be 64 bit. There will be different versions based on what hardware it's installing onto.

32 bit kernel for <G5 and 64 bit kernel for G5. Anything else wouldn't make sense for the exact reason that you'd be wasting half of your memory if you had 8GB installed.

I'm 99% sure Steve was implying that in his keynote. He said something like, the G5's will ship with a 32 bit version of Jaguar tweaked to run on the G5, but once Panther comes out, it will run fully native on the G5.

So, this says that there will be a 32 and a 64 bit version of the kernel and other parts of the OS. Perhaps not all of it will be 64 bit at first like OS 8/9 was not 100% PPC code at first.
 
After reading several contradicting sources, it seems like Apple is approaching this in several steps.

Mac OS X 10.2.7 is the 32bit Jaguar with some tweaks, so it can run on G5 hardware AT ALL. This build won't address 8GB of RAM.

Apple will update Jaguar to 10.2.8 (and maybe even 10.2.9) to get rid of some bugs that arise in that implementation. But Jaguar won't be 64bit, period.

Panther will have more 64bit code available for the G5s, but will still not be a 100% full 64bit native for them. Of course 32bit PowerMacs will still be compatible with Panther, but Panther will install different versions on different processor platforms.

However, that doesn't really matter for most people, as the G5 has no problem handling 32bit code.

I'm pretty sure that Panther will be the first OS that'll support all the RAM the G5 can handle. So yes, right now Apple will sell you a computer whose OS can't use all of its RAM, if you want that.
 
Maybe this helps.

From the G5 Technical Overview (apple.com)

Mac OS X combines the power and stability of UNIX with Apple’s legendary ease of use.
The Power Mac G5 ships with the latest version of Mac OS X v10.2 “Jaguar.” Unlike other 64-bit platforms, no special 64-bit version is required; the same operating system runs on all your Mac computers.
Because it’s fully compatible with 32-bit PowerPC application code, the Power Mac G5 eases the migration to 64-bit computing and protects your investment in software.
Mac OS X doesn’t revert to a slow 32-bit emulation mode, as is typical on other 64-bit platforms. Existing Mac OS X application code, as well as Classic applications, runs at full processor speed with no upgrades required. This seamless transition is possible because the PowerPC architecture, unlike competing instruction sets, was designed from the beginning to run both 32-bit and 64-bit application code.
In addition, Mac OS X v10.2.7 (G5) has been enhanced to leverage the capabilities of the 64-bit Power Mac G5.
• Built from the ground up for symmetric multiprocessing and multithreading, Mac OS X enables peak performance on dual PowerPC G5 systems.
• Mac OS X takes full advantage of the 8GB memory capacity of the Power Mac G5: It can now allocate up to 4GB of memory per process to easily fit memory-intensive applications into RAM.
• The math and vector libraries have been tuned to take full advantage of the PowerPC G5 processor’s 64-bit integer and floating-point math capabilities and its optimized Velocity Engine.
While existing 32-bit applications benefit from the faster processor and high-bandwidth architecture of the Power Mac G5, performance gains will be more dramatic with PowerPC G5–optimized applications. Look for upcoming announcements from developers of popular professional applications.
Built on open standards.


The trick is in this:
Mac OS X takes full advantage of the 8GB memory capacity of the Power Mac G5: It can now allocate up to 4GB of memory per process to easily fit memory-intensive applications into RAM.
If I interpret this correctly, this is something that could have been done even without a 64-bit processor, since we are addressing a maximum of 4GB RAM per process, which can be done on 32-bit processors AFAIK. The dual system is capable of exploiting 8 GB of RAM only because it has two processors, which can address max 4 GB each. Why? Well, obviously because Jaguar 10.2.7 is not a 64-bit OS. This, and not the lack of bigger RAM modules, poses the 8 GB limit.

If Panther is 64 bits, which it should be, then there will be no limit of 4 GB per process.

EDIT: This implies that 10.2.7 cannot address 8 GB of RAM on the single processor machines. If it can, then I am wrong. :)
 
Originally posted by Cat
If Panther is 64 bits, which it should be, then there will be no limit of 4 GB per process.

EDIT: This implies that 10.2.7 cannot address 8 GB of RAM on the single processor machines. If it can, then I am wrong. :)

No fair doing actual research to back up wild conjecture. :p

There is a bit of a slant on some of the statements but it does let us know a bit more about what they are doing.
  • What they are doing is giving each process a 4GB virtual address space, which is what you can see with 32 bits. And you are right, you could do that with a 32-bit OS; they are doing it with Linux and XP to run on 32-bit machines with >4GB of RAM. But it is a bit hackish. As I remember, Intel processors actually have 36-bit addressing you can access via the segments mentioned previously in the thread.
  • Before the huge 64bit address spaces are available to user programs we will need to have a 64 bit OS.
  • The limit is per process and not per processor, so each program gets up to 4GB and you could use all 8GB on a single-processor machine. But you could not load a 5 GB database into the memory of a given program.
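The per-process point above in one back-of-the-envelope check (the working-set numbers are invented for illustration):

```python
# The 4 GB limit is per *process*, not per processor: several 32-bit
# processes can together use all 8 GB, but no single one can map 5 GB.
GB = 2 ** 30
physical_ram = 8 * GB
per_process_limit = 4 * GB

processes = [3 * GB, 3 * GB, 2 * GB]      # hypothetical working sets
assert all(p <= per_process_limit for p in processes)

print(sum(processes) == physical_ram)     # True: all 8 GB in use at once
print(5 * GB <= per_process_limit)        # False: one process can't map 5 GB
```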

So now, when someone is kind enough to send me one of these in the mail, I promise that I will not feel cheated because I cannot immediately access all the memory :)
 
Ok, this is just wild conjecture, but what is stopping them from doing what HP does with HP-UX 11 (not to be confused with 11i which is only 64bit)?

Basically you boot the installer, and if it's a 32-bit machine it installs the 32-bit version of the kernel/libraries; if it's a 64-bit machine it installs the 64-bit version.

This could also be done as a simple two-CD distro. If you have a G5 you use one CD; if you have a pre-G5 you use the other.

Most of the OS libraries are already 64-bit clean, they just need to be compiled and tested on a 64-bit machine. The main changes will be in the kernel, and I would be willing to bet that the kernel guys have it under control.
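The "probe the machine, then install the matching flavor" idea above can be sketched like this (the flavor and family names are made up for illustration, not actual HP-UX or Apple package names):

```python
# Toy installer logic: pick a kernel flavor based on the detected CPU
# family, defaulting to the 32-bit build for anything unrecognized.
def pick_kernel(cpu_family):
    flavors = {
        "G3": "kernel-32bit",
        "G4": "kernel-32bit",
        "G5": "kernel-64bit",
    }
    return flavors.get(cpu_family, "kernel-32bit")  # safe default

print(pick_kernel("G5"))  # kernel-64bit
print(pick_kernel("G4"))  # kernel-32bit
```

One installer image, two kernel payloads; the user never has to know which one they got, which is the appeal of the HP-UX approach.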
 