Threads: can multiple threads read the same file?


I've got an application written in C++ (Project Builder). It has about 8 threads within a single process, all reading from the same disk file. It is a huge disk file, and each thread is reading a different section of it.

Each thread opens the file with fopen() and gets a unique FILE stream, with a unique I/O buffer (at least, the 4,096-byte buffer within the FILE object is unique ... each thread has a unique address for it).

The threads do an fseek() and fread() to read data from the disk file. Each thread _appears_ to have a unique stream.

But the threads do not produce good results: the data read from the disk is not the expected data. It appears that there is some kind of low-level disk buffering going on that precludes multiple threads from reading a single disk file.

Certainly this has been done before: multiple threads (within a single process) _must_ be able to read from a single disk file (via multiple streams), true?

Question: Is there any way to make this work? For example, is there some flag or option I can turn on to make the stream functions (fopen(), fseek(), fread(), etc.) thread-safe?
If anyone else ever searches the forum with a similar question, here is how I finally got it to work:

The goal was to have multiple threads (in a single process) all reading from the same disk file, interleaved. Each thread had a dedicated FILE* stream. Initial attempts using simple logic failed because the streams' internal buffers interfered with each other.

The solution was: each thread has to (a) hold a lock while seeking and reading; and (b) call fpurge() after each read. So every thread has code like:

FILE* f = fopen(...);
while (...) {
    lock();
    fseek(..., f);
    fread(..., f);
    fpurge(f); // this call not required on Windows
    unlock();
    ... do work ...
} // end while

The reason it took me so long to figure out is that the application was initially developed on Windows, and it worked there _without_ the call to fpurge(). For some reason, fpurge() is required on the Mac.

Neal Olander
streams in the posix model aren't thread safe, you have to manually set the locks. take a look at flockfile() and the related group of functions. you can also hold the lock with flockfile() and then use the unlocked versions of the io functions, for example, you can do something like the following...

flockfile(file);
putc_unlocked('a', file);
putc_unlocked('b', file);
putc_unlocked('c', file);
funlockfile(file);

hope that helps

Thanks for the suggestion about flockfile() ...

But flockfile() is only implemented in some versions of Unix ... and it is not available on my Mac (using Project Builder, Mac OS X).

I do have the fpurge() workaround running ... but it is slower than I want. I don't want to call fpurge(). If anyone knows how to make fread() and fseek() thread-safe, please reply.
flockfile() is available on Mac OS X. Just go to Terminal and type "man flockfile" - it's there ;) What OS X version are you running?
When I add a call to flockfile() into my program (C++ using ProjectBuilder) I get a compiler error ... function not found. I already have #include <stdio.h> and I successfully call many other I/O functions.

Sure, the man page for flockfile() is there, but the compiler cannot find it.

BTW, I cannot even find the file "stdio.h" on my Mac. What is up with that?

I'm still willing to try flockfile() ... my compiler is the one who cannot find it :)

As for which version of Mac OS X I am running ... I have a 1-month-old Mac mini. I don't know how to get the precise version number.
You shouldn't be using Project Builder on the Mac Mini. It must come with Tiger or Panther, and both of them come with Xcode.
Okay ... I found stdio.h using "find" in a command window. (for some reason, the GUI "Find" under Finder could not find it?).

My "stdio.h" file does _not_ contain flockfile(). This is /usr/include/stdio.h.

So why do I have a man page for flockfile()?

PS: I'm using ProjectBuilder because my project leader has been using ProjectBuilder for years. He tried out XCode two months ago and determined that it was too buggy and he wants to wait until the next version of XCode.

Certainly, the existence of flockfile() in stdio.h has nothing to do with XCode vs. ProjectBuilder?
The thing is, no one really supports or remembers ProjectBuilder... it's so old. So asking for support for it is like asking support for Mac OS 7.5...

Xcode 2.1 is the latest version. It's definitely the best. I just created a sample project with this source, and it compiled and ran without error:
#include <iostream>
#include <stdio.h>

int main (int argc, char * const argv[])
{
	FILE *file = fopen("random_file.txt", "r");
	if (file) {
		flockfile(file);
		funlockfile(file);
		fclose(file);
	}
	return 0;
}
Are you on Tiger or Panther?
Try compiling the above code directly in Terminal with gcc. If that doesn't work, then something's screwy...
Okay ... you've convinced me: if I want to call flockfile() I must upgrade to Xcode. Currently we have Tiger with ProjectBuilder.

But upgrading is expensive. What are the odds that using flockfile() will fix my problems?

My application has multiple threads all reading from a single disk file. Each thread has its own dedicated FILE* object.

Here is what my threads look like:

FILE* f = fdopen( ... shared file ... ); // each thread has its own FILE* buffer
for ( a long time ) {
    fread( ..., f ); // Here is the failure: threads are reading from each other's buffers :-(
} // end for

What good is it to call flockfile(f) when FILE* f is only used by one thread?

Also, I already have pthread mutexes around the fseek()/fread() IO calls ... will flockfile() add more protection than pthread_mutex_lock()?
When I said it was expensive to upgrade from ProjectBuilder to XCode, I meant all those intangible costs when you've got 80,000 lines of code and everything running smoothly: then you change gears to a new environment. Things that should be no impact never are :)
XCode isn't that drastically different from Project Builder.

You can also set XCode to use a different version of GCC (or whatever else) as your compiler. The IDE shouldn't affect how your code works, if you compile with the same compiler in each.

By merely installing the Tiger Developer Tools, you should end up with an updated stdio header, which will be accessible to any version of GCC.

Or at least that's my understanding. It's like saying SubEthaEdit produces better code than BBEdit - they're just the text editor. The difference is in the compilation.
Yeah, but XCode does fark with a few things people did as part of their workflow, so while you aren't losing anything, it does take some adjustment. However, if you want to actually use the headers designed for your current OS and libraries, then you will have to move up to XCode, and you need to be using XCode 2.1 if you want to start moving to Intel in the future. The reason you don't have flockfile is that while you do have support in the libraries for it (OS), you don't have support for it in the headers (Dev tools). Usually it is a bad idea to use dev tools from an OS version that doesn't have support for the OS you are using.

That said... there is an easier way to handle this without fpurge(), but it does require a re-work of things. You can use the mutex mechanism, but use a single FILE reference attached where the mutex is (since you seem to be passing the mutex around, you can pass the FILE with it). This way, fseek and the like will work with the existing buffers and purge them properly and automatically, rather than forcing a purge after every read, which drastically cuts performance.
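To make that concrete, here is a minimal sketch of the single-shared-FILE approach, assuming POSIX threads. All the names (g_file, g_lock, read_chunk) are illustrative, not from anyone's actual code:

```c
#include <stdio.h>
#include <pthread.h>

static FILE *g_file;                 /* the one shared stream */
static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;

/* Seek and read as one atomic step, so threads never disturb
 * each other's file position or the stream's internal buffer. */
size_t read_chunk(long offset, void *buf, size_t len)
{
    size_t got = 0;
    pthread_mutex_lock(&g_lock);
    if (fseek(g_file, offset, SEEK_SET) == 0)
        got = fread(buf, 1, len, g_file);
    pthread_mutex_unlock(&g_lock);
    return got;
}
```

Because every seek+read pair happens under the same lock on the same stream, no fpurge() is needed: the buffer is always consistent with the position the thread just set.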
Thanks for the suggestion about using a single FILE pointer, shared by all the threads.

However, that will not meet the requirements. Recall, the situation is: I've got a huge disk file with thousands of chunks of data scattered throughout it. When the operator hits a key, I've got to read 20 (or 12 or 8) of those chunks into memory ASAP. I can't read the 20 chunks sequentially, because that is too slow. And I cannot read _all_ the data chunks from the data file into memory beforehand, because the file is 3 GB.

So, the sensible design is to set up 20 threads: and each one reads one chunk of data from disk, in parallel.

In this design, each thread must have its own FILE* stream ... if all 20 threads shared a single FILE*, then the internal buffer would be useless, because each of the 20 threads is reading from a different location in the disk file.

And that is the problem: I've set up 20 threads, each with a dedicated FILE* stream, and the IO still fails (that is, the 20 threads all read data, but the data comes from the wrong place in the disk file: namely, locations that the other threads are reading).

And flockfile() - although promising (I have not tried it yet) - probably won't work, because it only acts on one FILE*.

BTW: I don't mean to beat a dead horse; it's just that this works perfectly on Windows, and I want the Mac to work also. Here is the per-thread code that works on Windows but fails on the Mac:

FILE* f = fdopen( ... ); // one FILE* stream per thread
for ( ever ) {
    fseek( ..., f );
    fread( ..., f ); // Works on PC; reads from the wrong place on disk on the Mac
} // end for
} // end thread
Well, yes, I did read the rest of the thread before posting. It wouldn't be helpful to you if I didn't do so. :)

The issue as to why it works on one, but not the other, is that reading from multiple threads is not defined as part of the ANSI C spec. So, platforms are free to implement the internals how they see fit, and so some make them thread-safe internally, while others expect the programmers to make their apps thread-safe. Mac OS X is apparently an example of the latter.

Now here is the real problem that you will face... and that is you are likely stuck with some sort of speed hit. Libc does do internal caching (which you don't want to muck with without good reason), but so does the OS's filesystem (which you /can't/ muck with). It is very unlikely that your 20 chunks are going to be close enough that the OS or libc can properly optimize for it. In other words, your code is a corner case, and multiple threads will not speed up the file reads. In fact, your current implementation is written such that you are wasting more resources by spawning 20 threads, than you are by just doing a series of sequential reads.

See, when you call to lock the mutex, you exclude all 19 other read threads from running... and then immediately call a read function which blocks the active thread until the read completes. Once the read completes, you release the lock, another one grabs it, and that one continues. You never have more than one thread doing anything at any one time other than the main thread, which might be blocked waiting for the reads to finish as well (I don't know since I don't recall you sharing that information with us). Overall, while fpurge() will fix the problem, it incurs even MORE overhead than you have already introduced. In the case of file I/O, especially on large files which will not fit entirely into cache, parallelism isn't the way to go, streaming is (a nice solution the game programmers have worked on).

A producer/consumer model will work quite well in this situation. Have one thread fulfilling requests made by the others. You can use insertion sorts and other tools to optimize file I/O and ensure better throughput, and reduce the resources being consumed by your application. Also, attempt to avoid the common threading problem that you stumbled upon with this implementation: parallel, yet serial. This means: reduce the amount of work actually done within the mutual-exclusion region, and never do anything that can block in this region. Use a local buffer on the thread for the read, and enter the mutex only to copy it RAM-to-RAM afterwards; that has much lower latency. You should find that you get better performance under Windows as well, once it is stable.
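To show the producer/consumer shape I mean, here is a hedged sketch under POSIX threads. The request struct, queue, and function names are all my own invention, not from any post above: one I/O thread drains a request list against a single FILE*, while worker threads submit a request and block until it is filled.

```c
#include <stdio.h>
#include <pthread.h>

struct request {
    long offset;             /* where to read */
    size_t len;              /* how much */
    unsigned char *buf;      /* caller-supplied destination */
    int done;                /* set by the I/O thread when filled */
    struct request *next;
};

static FILE *g_file;                    /* the one stream, owned by the I/O thread */
static struct request *g_queue;
static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  g_cond = PTHREAD_COND_INITIALIZER;
static int g_quit;

static void *io_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&g_lock);
    while (!g_quit) {
        struct request *r = g_queue;
        if (!r) {
            pthread_cond_wait(&g_cond, &g_lock);
            continue;
        }
        g_queue = r->next;
        pthread_mutex_unlock(&g_lock);      /* do the blocking I/O unlocked */
        if (fseek(g_file, r->offset, SEEK_SET) == 0)
            fread(r->buf, 1, r->len, g_file);
        pthread_mutex_lock(&g_lock);
        r->done = 1;                        /* tell the waiting worker */
        pthread_cond_broadcast(&g_cond);
    }
    pthread_mutex_unlock(&g_lock);
    return NULL;
}

/* Worker side: queue one read request and wait until it is filled. */
static void request_read(long offset, unsigned char *buf, size_t len)
{
    struct request r = { offset, len, buf, 0, NULL };
    pthread_mutex_lock(&g_lock);
    r.next = g_queue;
    g_queue = &r;
    pthread_cond_broadcast(&g_cond);
    while (!r.done)
        pthread_cond_wait(&g_cond, &g_lock);
    pthread_mutex_unlock(&g_lock);
}

static void stop_io_thread(void)
{
    pthread_mutex_lock(&g_lock);
    g_quit = 1;
    pthread_cond_broadcast(&g_cond);
    pthread_mutex_unlock(&g_lock);
}
```

Note how the blocking fseek()/fread() happens with the mutex released, so a worker posting the next request is never stalled behind the disk; the lock only protects the queue and the done flag. A real version would sort pending requests by offset before servicing them, per the insertion-sort suggestion above.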

I know you asked for a way to get this working on a Mac, not a critique of your design. Unfortunately, the design has a couple hidden issues that getting it working on the Mac has revealed (beyond the lack of thread-safety in libc). I hope you don't take this as an attack on your skills or anything of the sort, but I see a solution to your problem that would meet your requirements for throughput (as best as possible, since the OS file cache is outside devs' reach for tweaking), although it would require a change of approach.

If you have any further questions on this topic, don't hesitate to ask.