Reasons for Apple NOT to go with x86

MacLegacy

Unemployed Student!
There's a very interesting article from a reader over at looprumors.com

""The LoopRumors reader that thinks Apple should (or could) switch to x86-based processors (Intel Pentiums etc.) is forgetting a very important point:
Apple might be able to change Mac Os-X to run on x86 processors, but non of the existing program's will. You would be unable to run classic mac-apps because there code can only be natively executed by PowerPc processors. This will be the same for Os-X native apps as well. If apple would like to run anything on a x86 processor they would have to emulate all the PowerPC code.

Anyone who has seen a PC run a Mac emulator knows that it would be terribly slow, much slower than any current Mac. The only other option is to rewrite all the applications, but I'm sure Mac users don't want to buy new versions of all their software again, since they already had to do that to get them running natively in OS X in the first place.

Another reason Apple shouldn't use x86 processors is that it would turn the Mac into a regular PC. That might sound like a good idea performance-wise at the moment, given the lackluster performance of the current G4 chips. BUT if Apple starts using the IBM PowerPC 970, and it runs at 2.5 GHz like IBM says it will, then (particularly in a dual-CPU system) the PowerMac will run circles around any PC workstation, and Apple will once again have the best-performing workstation on the market, as it did when the G4 first came out.

So forget x86 processors; they will be obsolete in the near future anyway, since they're based on CISC technology from the early seventies and there will be a limit to how many GHz can be squeezed out of them. In the near future Intel and AMD will switch to RISC-based processors too, so Apple should stick with the PowerPC as well."

Doesn't this pretty much make it clear that Apple will NOT go x86? I don't think a lot of people would like having to buy their software again, like he says. Unless a simple patch could make OS 9 and OS X apps run on an x86 processor, I think it would be a big mistake.

They should definitely go with the PowerPC 970 !

Discuss it here
 
Unless Apple does something there is going to be too much of a processor gap for even the most diehard Apple fans to hand-wave away. Motorola has to be abandoned, and the sooner the better. The IBM PowerPC 970 might be a good fit but the costs of the processor still haven't been determined as far as I know. If it follows the POWER series processor pricing rather than consumer parts then the price of a Macintosh would skyrocket even further. Also, in March 2003 2.5 GHz sounds reasonable, but how reasonable will 2.5 GHz be in mid 2004, the earliest we could possibly expect to be using the processors?

It's a tough position to be in. It would be best for the platform, though not necessarily for current users, to hop on the Intel bandwagon. Intel invests so much money in the x86 architecture that I don't see how anybody will be able to beat it in the long run.
 
Originally posted by substrate
The IBM PowerPC 970 might be a good fit but the costs of the processor still haven't been determined as far as I know. If it follows the POWER series processor pricing rather than consumer parts then the price of a Macintosh would skyrocket even further.

I think that some people miss the difference between processor price and system price over at IBM.

Let's look at some current models:
  • IBM sells a workstation using the PowerPC 604e processor (at either 250 or 375 MHz) starting at $8,805.00.
  • IBM sells a workstation using the POWER3-II processor (at 450 MHz) at just under $14,000.00.
  • Apple (back in 1997) sold PowerMacs using the PowerPC 604e processor (dual at 180/200 MHz, or one clocked at 350 MHz) for just under $6,000.00 (now selling on eBay for under $300.00)

The PowerPC 970 is supposed to replace the POWER3-II at the top of their workstation line, moving the POWER3-II down to the mid-range and lower-end models and removing the PowerPC 604e-based system from the list. The pricing is supposed to stay pretty much the same at each level after the 970 enters the lineup.

I would have to say that I don't believe the prices of IBM's systems reflect the actual cost of the respective processors. I don't know any Mac user who would be willing to pay $8,000+ for a PowerPC 604e-based Macintosh today. These systems have much more than raw processor speed to offer, which is why they sell at prices that seem unreal to Mac users, given what they are using for processors.
 
As long as Apple ported their entire API set over to x86, all Cocoa apps would only need to be recompiled to run on an x86 processor. Carbon apps would not run, because that would mean Apple would have to completely rewrite Classic as well, which we all know isn't going to happen.

Still, do I think they'd actually port OSX to x86? NO.
 
Originally posted by Captain Code
As long as Apple ported their entire API set over to x86, all Cocoa apps would only need to be recompiled to run on an x86 processor.

Although that is true in theory, it doesn't work that nicely in practice.

For a number of years I have been running Rhapsody on a couple of Intel-based systems. The number of apps for this version of Rhapsody was much smaller than for the PowerPC version. Because most of these apps were never going to see a commercial release (after Apple pulled the Rhapsody client release), developers included the source code for many of them. On more than a few occasions I tried compiling apps for Rhapsody PPC on my Rhapsody Intel systems. Every time I got errors and the final output wouldn't run. By contrast, source for either Intel apps or Intel/PPC apps would compile and run correctly.

It seems that there are a number of variables that fall into play beyond just recompiling for the same APIs on a different processor.
 
So forget x86 processors; they will be obsolete in the near future anyway, since they're based on CISC technology from the early seventies and there will be a limit to how many GHz can be squeezed out of them

When will people learn? They were saying the EXACT same thing about x86 back when we were at 100 MHz and the DEC Alphas were "screaming" along at 266 MHz. The P4 is designed to scale its clock; Intel has abandoned efficiency for high scalability. When will it end? Who knows. Plus, any P-whatever chip is basically a RISC chip that interprets x86 instructions anyway.

Another reason Apple shouldn't use x86 processors is that it would turn the Mac into a regular PC

Another pet peeve of mine. Let's be clear here: are we talking about a Macintosh with an x86 CPU at its heart, or are we talking about "beige" boxes? Two entirely different concepts that people keep intermingling. Apple absolutely will not come out with YAPC (yet another PC). That would be death, pure and simple. Switching the CPU over to x86 while still keeping the machine semi-proprietary, on the other hand, has more possibilities and is at least feasible.

If it follows the POWER series processor pricing rather than consumer parts then the price of a Macintosh would skyrocket even further.

Why would it, since it is a PowerPC and not a POWER? IBM already has the POWER4 and is coming out with the POWER5 (which is supposed to cost less and run cooler than the POWER4). It's almost impossible to tell how much a POWER CPU costs, since they pretty much don't sell them separately (to OEMs, that is; not counting CPU upgrades, etc.).

The PowerPC 970 is supposed to replace the POWER3-II at the top of their workstation line

Actually, the POWER4 currently sits at the top of the workstation food chain, though "only" at 1 GHz.

On more than a few occasions I tried compiling apps for Rhapsody PPC on my Rhapsody Intel systems. Every time I got errors and the final output wouldn't run. By contrast, source for either Intel apps or Intel/PPC apps would compile and run correctly.

I think that this is more a sign of a development system that hadn't reached full maturity yet. There is no reason why the two systems shouldn't be able to compile the same source files unless the errors are related to platform differences that the compiler couldn't adjust for (I don't know what these would be, though, other than endian issues and maybe struct/class packing). Even then, the errors should be few and relatively easy to deal with (key word here being relatively).
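To make those two suspects concrete, here's a minimal C sketch (purely illustrative, not taken from any Rhapsody or OpenStep code) of a program that compiles without complaint on both a big-endian PowerPC box and a little-endian x86 box, yet the two builds produce data files each other can't read:

#include <stdio.h>
#include <stdint.h>

/* A record dumped verbatim to disk. Compiler padding between fields and
 * the CPU's byte order both get baked into the file, so a file written by
 * a big-endian (PPC) build won't read back correctly on a little-endian
 * (x86) build, even though both builds compile cleanly. */
struct record {
    uint16_t type;      /* may be followed by 2 bytes of padding          */
    uint32_t length;    /* 0x0A0B0C0D is stored 0A 0B 0C 0D on PPC,
                           but 0D 0C 0B 0A on x86                         */
};

int main(void)
{
    struct record r = { 0x0102, 0x0A0B0C0DUL };

    FILE *f = fopen("record.bin", "wb");
    if (f == NULL)
        return 1;

    fwrite(&r, sizeof r, 1, f);   /* raw memory dump: layout is platform-specific */
    fclose(f);

    printf("wrote %lu bytes (the size itself depends on padding rules)\n",
           (unsigned long)sizeof r);
    return 0;
}

The usual cure is to serialize field by field through byte-order helpers like htonl()/ntohl(), which is exactly the kind of per-platform housekeeping that "just recompile it" glosses over.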
 
Plus, any P-whatever chip is basically a RISC chip that interprets x86 instructions anyway.
As I understand it, Intel developed the x86 architecture way back when. People started making "clone" CPUs, hence the change from the 286, 386, 486 names to Pentium, because a number can't be trademarked. Anyway, I've always been under the impression that AMD chips, while being CISC, aren't actually x86 based; they basically 'emulate' an x86 processor right in the CPU, which is supposed to be faster than a native x86 chip, and which is why AMD chips are faster than Intel chips at the same, and to some extent even at lower, clock speeds.
 
Originally posted by Pengu
As I understand it, Intel developed the x86 architecture way back when. People started making "clone" CPUs, hence the change from the 286, 386, 486 names to Pentium, because a number can't be trademarked. Anyway, I've always been under the impression that AMD chips, while being CISC, aren't actually x86 based; they basically 'emulate' an x86 processor right in the CPU, which is supposed to be faster than a native x86 chip, and which is why AMD chips are faster than Intel chips at the same, and to some extent even at lower, clock speeds.

You got your facts a little crossed there. Historically there have been x86 clones pretty much from the beginning. One of the stipulations for IBM using the x86 in their original PC was that there be other OEMs for the CPU, just in case. So Intel struck deals with AMD (and some others that escape me now) to also create x86 "clones". Now fast forward to the 486/Pentium time frame. Relations between AMD and Intel have soured greatly. Lots of lawsuits are flying around about what was covered under what licensing agreements. A judge rules that Intel can't trademark 486 (or any other number combination), so Intel comes up with Pentium. Any licensing deals with Intel are now over, so AMD is forced to come up with a "clean room" design for their x86 clones (at least for the parts that are Pentium-specific). This leads to various 586 clones. AMD purchases technology from NexGen, which had a 586 clone that underperformed but was innovative in that it used a RISC core that translated x86 instructions on the fly. This leads to the K6 family of CPUs.

Both the Athlon and the P4 (actually starting from about the PII/PPro time frame) are basically RISC at heart. AMD has generally been able to hold an instructions-per-clock advantage through "better" design, but this became moot with the P4, since Intel intentionally designed the P4 to be less efficient at a given clock rate in order to ramp the clock rate up at a faster pace (so far this scheme has generally worked, as they are shipping 3+ GHz processors while AMD is still around 2.2 GHz, I believe). Though AMD's chips are nearly as fast even though they're clocked slower.
 
I guess these refer to my posts:

by binaryDigit:
Actually, the POWER4 currently sits at the top of the workstation food chain, though "only" at 1 GHz.

Actually, the POWER4 sits at the top of the food chain only if you are working with 64-bit apps and a 64-bit OS. The POWER3-II and the PPC604e are both 32-bit processors, and the workstations that use them are running 32-bit apps and a 32-bit operating system.

The very fact that the POWER4 is not backwards compatible with 32-bit apps is the reason that many companies won't work with it no matter how fast it is. The PPC970 is both a 64-bit processor and is backwards compatible with 32-bit software. IBM needed to do this to ease people with a large investment in software from the older 32-bit line into a 64-bit future.
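As a side note on why a "large investment in software" makes the 32-bit to 64-bit move painful, here is a hypothetical C fragment (nothing specific to the POWER4 or PPC970 tool chains, just the classic pitfall): it behaves as intended on a 32-bit system but can silently truncate pointers once rebuilt for a 64-bit one.

#include <stdio.h>

int main(void)
{
    int value = 42;
    int *p = &value;

    /* A common 32-bit-era habit: stash a pointer in a plain unsigned int,
     * assuming pointers are 4 bytes. On a 64-bit build pointers are 8 bytes,
     * so the cast below can throw away the upper half of the address --
     * such code has to be revisited, not just recompiled. */
    unsigned int stored = (unsigned int)(unsigned long)p;
    int *back = (int *)(unsigned long)stored;

    printf("sizeof(void *) = %u\n", (unsigned)sizeof(void *));
    printf("round-tripped pointer %s the original\n",
           (back == p) ? "matches" : "does NOT match");
    return 0;
}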

I think that this is more a sign of a development system that hadn't reached full maturity yet. There is no reason why the two systems shouldn't be able to compile the same source files unless the errors are related to platform differences that the compiler couldn't adjust for (I don't know what these would be, though, other than endian issues and maybe struct/class packing). Even then, the errors should be few and relatively easy to deal with (key word here being relatively).

Let's take this a step further. Rhapsody represents the fifth full version of what started out as NEXTSTEP, and I had similar problems while using OPENSTEP 4.2. These systems had periods where up to four different types of processors were being used with the same OS (Motorola's 68030/68040, Intel's x86, Sun's SPARC and HP's PA-RISC). For most apps that contained more than a few lines of code (that is, actual productivity apps), having the source code didn't mean you had a working version for your platform of choice. Right up to when Apple bought NeXT (and beyond), apps written for both NeXT hardware and Intel-based systems completely outnumbered apps for the other systems.

In a system where all one needed to do was compile those apps on a given type of hardware to make them available, this should not have been a problem (especially for the larger development houses). Given that, plus first-hand experience (plus being friends with some of the oldest NeXT developers), I would have to conclude that a sizable effort on the part of developers must still be needed to make an application suitable for more than one hardware platform.

NeXT's software as a development platform was by all accounts mature before Apple acquired it, and it has only become better (and more mature) since. I doubt that this has done much to ease the effort of moving things between different processor/hardware types, even though OpenStep remains the most successful cross-platform development environment put forward to date (giving developers the ability to write their apps for Sun's Solaris on SPARC, OPENSTEP for Intel and NeXT hardware, and Windows NT).
 
RacerX

IRT the porting issues: look at the various Unix flavours and how their applications are distributed. Usually you download the source, which has a 'configure' script. This script then scans your system/environment and makes some choices about how to set things up, and you then build your app. This works remarkably well given the huge variation that exists between the various *nix flavours (not to mention the various systems they run on). I can't believe that OpenStep could not have reached that level of compatibility given only two platforms and fairly rigid control over the environment.
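For what it's worth, here is a rough C-side sketch of what that configure mechanism feeds the compiler. The two macros below stand in for what a real ./configure run would write into config.h on one particular host (the values are an assumed example, not any specific package's output):

/* Stand-ins for what ./configure would normally write into config.h on
 * this host; a different platform would get different definitions. */
#define WORDS_BIGENDIAN 1      /* would be defined on PowerPC, absent on x86 */
#define HAVE_UNISTD_H   1      /* configure found <unistd.h> on this system  */

#include <stdio.h>

#ifdef HAVE_UNISTD_H
#include <unistd.h>            /* only pulled in where configure saw it */
#endif

/* Assemble a little-endian 32-bit value byte by byte, so the result is the
 * same no matter which byte order the host CPU uses. */
static unsigned long read_le32(const unsigned char *buf)
{
    return  (unsigned long)buf[0]
          | ((unsigned long)buf[1] << 8)
          | ((unsigned long)buf[2] << 16)
          | ((unsigned long)buf[3] << 24);
}

int main(void)
{
    unsigned char sample[4] = { 0x0D, 0x0C, 0x0B, 0x0A };

#ifdef WORDS_BIGENDIAN
    printf("configure says this is a big-endian host\n");
#else
    printf("configure says this is a little-endian host\n");
#endif

    printf("portably decoded value: 0x%08lX\n", read_le32(sample));
    return 0;
}

The point is that the per-platform decisions end up in one generated header instead of being sprinkled through the application source, which is a big part of why the configure approach scales across so many *nix flavours.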

I only had a chance to tinker with NT 3.51 on Alpha/PowerPC, but I was able to build admittedly small GUI apps with no problems on the three systems. Non-GUI apps were a breeze (to compile at least; runtime was another story).

Perhaps the NeXT folks had bitten off more than they could chew, hence the issues? Or maybe even platform bigotry? Never having played with OPENSTEP/NEXTSTEP on x86, I don't know.
 
As a person with quite a few flavors of Unix sitting around me as I write this (IRIX on three systems, Solaris on two, Rhapsody on two, Mac OS X on two and A/UX on one), I can tell you from first-hand experience that "one source compiles for all" absolutely isn't true. With any complex app that has any form of GUI, you'll usually find source tailored to specific platforms. Follow any app that is aimed at a large number of Unix environments and you'll find unique source for each, with unique problems associated with the implementation of that source.

As for platform bias at NeXT, there was none. As for platform bias with developers, they are always going to address the area with the most users first and other areas second (or not at all).

As I said, OpenStep as an application development environment was second to none. Sun was working on moving their platform over to OpenStep completely when Apple bought NeXT. The same issue shows up in many different areas. You brought up Windows NT; I've seen it first hand with Sun Solaris for Intel.

The only true way to make things 100% portable is for everyone to be running the same environment, that is, to choose an environment that can run atop some emulation layer on every platform. Of course this gives rise to the problems that Microsoft and its users are having: large homogeneous environments are always less secure.

Safety in numbers, that is what I always say! A large number of different operating systems is the best way to keep things like viruses at bay (they require large numbers of users to all be using the exact same environment to reproduce and move to other systems).
 
This is definitely growing into opinion stuff, so I'm giving the thread an upgrade. :)
 