Has anybody ever tried running Solaris (natively) on a 13" MBP (mid 2010)?

That sounds cool, in a geeky sort of way.
I have probably read 30 or 40 articles discussing that over the last few years.
It seems like something that can be worthwhile, in some situations.
But, how does one determine what is the best use for that?
IIRC, the various articles all came to a similar conclusion with OS X: no real benefit, and usually a performance hit, compared to letting the OS X system do that processor scheduling automatically. Some apps are coded to use only one processor or core (or some specific number), and OS X respects that programming, AFAIK.
If you are using other operating systems, then I suppose you would be subject to whatever limitations (and features, eh?) of that operating system.
What does 1993 have to do with this? I'm sure there were some multi-processor systems at that time, and the operating system (whatever is in use) needed to support those systems.
Apple's PowerMac 9500 had a dual processor system in 1995, FWIW.
 
"how does one determine what is the best use for that?"
Realistically - by testing it.

For Linux and Windows systems running computationally intensive tasks (Folding@Home, CFD, FEA), setting the processor affinity can result in anywhere from a 5-14% performance INCREASE, because you minimize/mitigate a lot of the thread-locking/thread-contention issues. I forget exactly where I read this, but the gist was that moving data is an expensive and useless task (in the sense that while you're just moving data around, you're not doing any useful work ON the data). So if there's a lot of process migration, you have to drain the buffers and cache and everything for one core and move the whole thing over to another, and in transit you can't do anything with it (computationally). But by locking it in place and forcing it to "stay", you minimize the number of move operations, so that the vast bulk of the ops are what you want them to be - computational in nature.
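
For what it's worth, on Linux the mechanism behind this is just the process's CPU affinity mask. A minimal sketch of a program pinning itself with sched_setaffinity() - the choice of CPU 0 is arbitrary and the actual compute loop is left out - would look something like this:

Code:
#define _GNU_SOURCE          /* needed for sched_setaffinity() */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);          /* start with an empty CPU mask */
    CPU_SET(0, &set);        /* allow only CPU 0 (arbitrary choice) */

    /* pid 0 = the calling process; after this the scheduler
       won't migrate the process to any other core */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* ... the computationally intensive work goes here ... */
    return 0;
}

(The taskset command does the same thing from the shell, so you don't even have to touch the program.)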

For Solaris, because it's a commercial UNIX that was originally designed more for mainframe-class, many-processor systems than anything else, the idea of being able to bind a process to a GROUP of processors makes sense. Suppose you're running a bunch of clean-up operations, or you really only want a specific task to stay on one part of the machine. Think a little like how the BlueGene series is built: it consists of compute nodes, each node has two sockets, I think each socket has 4 cores now, and a drawer holds some number of compute nodes (I forget how many). Being able to limit a process or a group of jobs to a very small, local subset - say one of the compute nodes, identified by its processor IDs - prevents process migration "outside" of its local, physical box.
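
On Solaris, the way I understand it, that kind of group binding is done with processor sets. A rough sketch using the libc calls (the CPU IDs 0-3 are just placeholders, and creating/assigning sets needs the right privileges):

Code:
#include <sys/types.h>
#include <sys/procset.h>      /* P_PID, P_MYID */
#include <sys/processor.h>    /* processorid_t */
#include <sys/pset.h>         /* pset_create(), pset_assign(), pset_bind() */
#include <stdio.h>

int main(void)
{
    psetid_t      pset;
    processorid_t cpu;

    if (pset_create(&pset) != 0) {            /* new, empty processor set */
        perror("pset_create");
        return 1;
    }

    for (cpu = 0; cpu <= 3; cpu++)            /* placeholder CPU IDs */
        (void) pset_assign(pset, cpu, NULL);  /* move CPUs 0-3 into the set */

    /* bind this process to the set - it can no longer migrate outside it */
    if (pset_bind(pset, P_PID, P_MYID, NULL) != 0) {
        perror("pset_bind");
        return 1;
    }

    /* ... run the job here; it stays inside that group of processors ... */
    return 0;
}

(The psrset and pbind commands expose the same binding from the shell, and processor_bind() is the single-CPU equivalent.)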

It's a tad like micromanaging to an extent, but a la MPI/hand-coded assembly, you can benefit DRAMATICALLY from it, IF it is worth your time/worthwhile for you to go to that level. (Probably more important if you're running the same jobs - like calculating interest for bank accounts - over and over and over again; if you can squeeze 40% more performance out of it just by doing that, that WOULD be worth your while.)

Since I can't set that in OS X - I can't test it in OS X.

The point about it being around in Solaris since 1993 is that setting the processor affinity is nothing new. And yet, almost 20 years later, it doesn't look like OS X has incorporated it yet. (I know that some of my technical computing programs now, whenever they spawn slave MPI processes, AUTOMATICALLY bind each of them to a processor.) It's also saying that it really SHOULD be included in OS X, since it's somewhat "elementary" in some regards (in my opinion).

Now, admittedly, most Macs AREN'T used for CFD/FEA (a point that I've highlighted to my brother-in-law when he asked me WHY I'm not using a Mac). But I did recently find out that MATLAB WILL run on a Mac, so in theory I COULD write an FEA/CFD program using the MATLAB solvers, which would effectively mean running FEA/CFD on a Mac (finally). But it's still very highly custom code. That, and I'm not a programmer.

But I could imagine scenarios where, if you're processing pictures in Photoshop (for example) and the operation isn't multiprocessor-capable, restricting it to one processor could help prevent thread/process migration. (And if it IS multiprocessor-capable, it depends on how much it can actually make use of the extra processors.)
 
@DeltaMac
So apparently the reason WHY I could write to an NTFS partition/array (off my central server) is that it WAS being mounted (on the MBP) via SMB (which seems to be the default).

So I'm not sure how it would behave without SMB. Just to clarify - OS X might still be read-only for NTFS partitions (especially if the drive is directly connected/attached to the system). But if it's a Windows share, then it goes through SMB.
 
My understanding (which might be inaccurate) is that a net-connected hard drive that you have access to should be both readable and writeable, depending on whatever access level is provided to you for that net-connected drive. The actual file system that created the volume is not an issue for the net-connected drive. All you need is that access assigned to you.
Much different for a locally attached hard drive, as your OS X system would need to access the drive natively, so the drive needs to be formatted in one of those file systems that gives you full native read-write support (assuming that you need that full access).
I'm sure that someone else who has direct experience will correct my belief, if it is inaccurate.
 
No no. Perhaps I wasn't very clear when I was writing it.

The current system is actually hosted on a 2U server (12 bays, 10 of which are occupied by 3 TB drives) with an ARC-1230 12-port SATA 3 Gbps RAID HBA, plus an OS drive (I forget the size off the top of my head). The ten data drives make up a single RAID5 logical volume (27 TB usable) formatted as NTFS.

The NEXT system, which will be used to clone and be the backup version of the aforementioned server, is going to be running Solaris, again with ten 3 TB drives, but managed with ZFS.

I was thinking that even for network-connected drives it would be able to recognize that it's an NTFS volume, but I forgot that Windows shares its folders using SMB (which is abstracted from whatever FS the volume happens to be).

So I had THOUGHT that OS X would be trying to read/mount the NTFS partition natively (like it's an NTFS-formatted USB drive), which turns out to be an incorrect assumption. (Cuz I forgot about how Windows shares work.)

So - to conclude - if I were to directly attach an NTFS drive (say via USB or something), as opposed to using a network-based solution (regardless of format/type, i.e. NAS/SAN/file server), it is quite possible that the direct-attached NTFS-formatted drive will still be read-only.

But for network-mounted systems (especially over SMB), you are correct in that it will be whatever permissions you assign to the share.

And the reason why I was asking about NTFS/SMB is that I need the data on the drive to be accessible by the variety of OSes that I (will) use, and so if I were to format a direct-attached hard drive as HFS+, I don't think that the other systems would be able to read it readily/easily.

And Samba/SMB is readily available, and nowadays they've made it REALLY easy to configure Samba shares. It might not be the most efficient of the application-layer network protocols, but it's easy. And it does work.
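
Just to give a sense of how little it takes these days (the share name, path and user below are made up for the example), a basic writable share in smb.conf is only a few lines:

Code:
# example only - the share name, path and user are hypothetical
[storage]
   path = /export/storage
   read only = no
   valid users = myuser

Point clients at \\server\storage (or smb://server/storage from OS X's Connect to Server), and the filesystem underneath doesn't matter to them.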

(And should the need arise, like you said, I could always switch to NFS if I really need to; but for the time being, I doubt that I would need to go down that route.)
 
Yes, if you direct connect an NTFS drive to your Mac, read is supported, write is not supported. Again, NTFS driver software for the Mac is the usual solution for that directly connected NTFS drive. Free versions do not have best performance, compared to paid-for versions. Unless you have only minimal needs for external storage (which it sounds like you do NOT), then the commercial NTFS drivers should be a great choice for you.
Isn't this returning to info that has already been covered in your thread?
I think you have your original question fairly settled by now.
 
It's a correction/clarification of my earlier comment where I said that apparently you can write to an NTFS partition - which was true because it was mounted over SMB, but won't be true if someone were to try to attach the drive directly.

Just CMA...
 