That sounds cool, in a geeky sort of way.
I have probably read 30 or 40 articles discussing that over the last few years.
It seems like something that can be worthwhile, in some situations.
But, how does one determine what is the best use for that?
IIRC, the various articles all came to a similar conclusion for OS X: no real benefit, and usually a performance hit, compared to letting the OS X system do the processor scheduling automatically. Some apps are coded to use only one processor (or core), and OS X respects that programming, AFAIK.
If you are using other operating systems, then I suppose you would be subject to whatever limitations (and features, eh?) that operating system has.
What does 1993 have to do with this? I'm sure there were some multi-processor systems at that time, and the operating system (whatever was in use) needed to support them.
Apple's PowerMac 9500 had a dual processor system in 1995, FWIW.
"how does one determine what is the best use for that?"
Realistically - by testing it.
For Linux and Windows systems running computationally intensive tasks (Folding@Home, CFD, FEA), setting the processor affinity can result in anywhere from a 5-14% performance INCREASE, because you minimize/mitigate the thread locking/thread contention issues. I forget where exactly I read this, but it said that moving data is an expensive and useless task (in the sense that while you're just moving data around, you're not doing any useful work ON the data). So if there's a lot of process migration, you have to drain the buffers and cache and everything for one core and move the whole thing over to another core. And in transit, you can't do anything with it (computationally). But by locking the process in place and forcing it to "stay", you minimize the number of move operations, so that the vast bulk of the ops are what you want them to be - computational in nature.
For Solaris, because it's a commercial UNIX originally designed more for mainframe-class systems than anything else, the idea of being able to bind a process to a GROUP of processors makes sense. Suppose you're running a bunch of clean-up operations, or you really only want a specific task to stay on one of the systems of the mainframe. Think a little like how the BlueGene series consists of compute nodes: each node has two sockets, I think each socket has 4 cores now, and a drawer has I forget how many compute nodes. Being able to limit a process or a group of jobs to a very small, local subset (say one of the compute nodes, identified by processor ID) will prevent process migration "outside" of its local, physical box.
It's a tad like micromanaging to an extent, but a la MPI/hand-coded assembly, you can benefit DRAMATICALLY from it, IF it is worth your time/worthwhile for you to go to that level. (It's probably more important if you're running the same jobs (like calculating interest for bank accounts) over and over and over again; if you can squeeze 40% performance out of it just by doing that, that WOULD be worth your while.)
Since I can't set that in OS X - I can't test it in OS X.
The point about it being around in Solaris since 1993 is that setting the processor affinity is nothing new. And yet, almost 20 years later, it doesn't look like OS X has incorporated it yet. (I know that some of my technical computing programs now, whenever they spawn slave MPI processes, AUTOMATICALLY bind each one to a processor.) It's also saying that it really SHOULD be included in OS X, since it's somewhat "elementary" in some regards (in my opinion).
Now, admittedly, most Macs AREN'T used for CFD/FEA (a point that I've highlighted to my brother-in-law when he asked me WHY I'm not using a Mac). But I did just recently find out that MATLAB WILL run on a Mac, so in theory I COULD write an FEA/CFD program using the MATLAB solvers, which would effectively mean running FEA/CFD on a Mac (finally). But it's still very highly custom code. That, and I'm not a programmer.
But I could imagine scenarios where, if you're processing pictures (for example) in Photoshop and the operation isn't multiprocessor capable, restricting it to one processor could help prevent thread/process migration. (And if it IS multiprocessor capable, it depends on how much it can actually use the extra processors.)