Mac OS X and Tiger's failure as a Server Platform ... (?)


Mac Ninja
This is in reference to the article on anandtech:

Notice towards the end of this article, they go into MySQL, and MySQL performance on OS X is crippled. They traced the root of the problem to the way kernel threading is handled: the bottleneck is not the G5 chip, but the way OS X itself limits performance here.

So does OS X then suck as a server solution? And please don't say "buy Oracle or Sybase" because we're talking about Open Source here ONLY, and thus MySQL should work great.

I'm not a Mac fanatic because of its server capabilities, but after OS X came out, and then the G5, I loved the idea of one day using a G5 as a server solution, and I talked about what a great server it would make. Does this article trash all that?

Captain Code

Staff member
Both Tiger and Panther have this problem. It's hard to say how accurate their test is, because it's not exactly a real-world test. They're talking about starting threads and processes, but with Apache, for example, you'll already have servers running, waiting for requests. As the load increases, more servers are created in advance.

So while creating processes may be somewhat slow, it's not such a big deal, because they're already started by the time a request arrives.

Having only 5 httpd processes and then having to create 1000 all at once isn't a realistic example, even if it does expose a weakness in the way OS X handles it.

I read that FreeBSD has worked this problem out (it had the same problem as OS X) and that Apple will hopefully incorporate the fix into OS X soon.


Mac Ninja
Hopefully so, it would be awesome to see Apple take note and fix a problem like this!

I like how all the Google AdSense ads on the right are Toyota Supra related, just due to my signature, lol.


Here's a link to an Apple engineer's comments on the benchmark and how ... strange their results and testing methodology were.

Here's the gist of it. It turns out that MySQL on Linux uses fsync() to flush data from memory to the hard drive. However, this isn't guaranteed to be safe, because data can be left sitting in the disk's buffer and _still_ be reported as written to the hard drive. If a crash or power outage occurs, you can kiss your data goodbye, because it was in the hard drive's cache and never made it to the actual platters.

On OS X, the MySQL developers probably read the fsync() man page:
Note that while fsync() will flush all data from the host to the drive (i.e. the "permanent storage device"), the drive itself may not physically write the data to the platters for quite some time and it may be written in an out-of-order sequence.

Specifically, if the drive loses power or the OS crashes, the application may find that only some or none of their data was written. The disk drive may also re-order the data so that later writes may be present while earlier writes are not.

This is not a theoretical edge case. This scenario is easily reproduced with real world workloads and drive power failures.

For applications that require tighter guarantees about the integrity of their data, MacOS X provides the F_FULLFSYNC fcntl. The F_FULLFSYNC fcntl asks the drive to flush all buffered data to permanent storage.

Applications such as databases that require a strict ordering of writes should use F_FULLFSYNC to ensure their data is written in the order they expect. Please see fcntl(2) for more detail.
and freaked out, deciding to use fcntl instead.

Further down in the comments on the Apple engineer's blog, you see discussion of the speed hit of using F_FULLFSYNC. In a very contrived benchmark of writing 22 bytes to a disk, disabling F_FULLFSYNC produced a write speed of 220 KB/s, while enabling F_FULLFSYNC dropped the speed to 22 B/s!

Read the blog for a fuller explanation of what's going on. Basically, on OS X there are stringent checks in place that force the hard drive's cache to be flushed, to ensure data integrity. This isn't present on Linux, and adding these extra checks can have massive performance implications.