Maxed out processes?

kilowatt

mach-o mach-o man
Ok. I made a short script to check how many processes I could run at once on my Mac. I was running 10.0.3 at the time.
Here's the script:

#!/bin/sh
top -l1 3 > pstest.tx    # one sample of top, showing only three processes
date >> pstest.tx        # timestamp, so I'd know when things fell over
./test1.sh               # run this same script again (the "loop")

Basically a shell script that runs top (showing only three processes) and date (I thought it might crash the computer, and I wanted to know how long that took). Then it executes itself, without the parent /bin/sh ever exiting, so it's effectively an infinite loop - only it's not so infinite, as you will see.
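A variant that passes a depth counter down would show exactly how deep it gets before fork fails - a rough, untested sketch along the same lines:

#!/bin/sh
n=${1:-1}                        # depth: 1 on the first call, then 2, 3, ...
echo "depth $n" >> pstest.tx     # the last line written is the deepest level reached
./test1.sh $((n+1))              # recurse with the counter bumped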
Well, I eventually got this message (the script stops and reports this):

./test1.sh: fork failed: resource temporarily unavailable [2]
./test1.sh: fork failed: resource temporarily unavailable [3]
./test1.sh: fork failed: resource temporarily unavailable [4]

Here's the pstest.tx file, for your viewing pleasure:
<pre>
Processes: 130 total, 2 running, 128 sleeping... 176 threads 02:41:19
Load Avg: 2.19, 2.06, 2.08 CPU usage: 26.1% user, 73.9% sys, 0.0% idle
SharedLibs: num = 69, resident = 10.8M code, 836K data, 2.36M LinkEdit
MemRegions: num = 2791, resident = 39.7M + 3.42M private, 31.5M shared
PhysMem: 21.8M wired, 69.7M active, 34.9M inactive, 126M used, 1.61M free
VM: 785M + 39.5M 8564(8564) pageins, 1179(1179) pageouts

 PID  COMMAND  %CPU  TIME      #TH  #PRTS  #MREGS  RPRVT  RSHRD  RSIZE  VSIZE
1195  top      0.0%  0:00.54    1    20     18     328K   220K   544K   1.48M
1194  sh       0.0%  0:00.03    1    17     13     164K   508K   552K   1.68M
1191  sh       0.0%  0:00.07    1    17     13     180K   508K   552K   1.68M

Tue Jul 3 02:41:19 EDT 2001
</pre>

What I'm wondering is:
1) Did I hit some maximum-process limit?
2) Or did I just max out my poor G3/266?
3) You can all see how this could be very bad in a professional environment, so how would you raise this number? I haven't tried running it as root; maybe I will later tonight. (Imagine a Mac OS X box used as an Apache server with, say, 50 virtual servers. Lots of traffic on your box and poof - users are losing connections. Not that 50 virtual servers is at all practical....)

I'm pretty sure it's a kernel thing, but would y'all mind running that script and reporting the max you could run?
Under Red Hat 6.1 Workstation, I'm currently at 450 processes and growing (slightly different script, same function).
Not to re-ignite any kernel wars, but does this difference in max processes have to do with Mac OS X running on a microkernel and Red Hat Linux on Intel running on a monolithic kernel? (The Intel box is an AMD K6-2 at 500MHz, with the same 128MB of RAM.)

Maybe I am confused ;-)
oh well.
Any comments?
 
Quite shortly after posting my last message, the Intel box stopped running in circles (so to speak).

Here's the error I got:
top: error in loading shared libraries: libncurses.so.4: cannot open shared object file: Error 23
./test1.sh: pstest.tx: Too many open files in system
./test1.sh: pstest.tx: Too many open files in system
./test1.sh: ./test1.sh: Too many open files in system

Here's my script:
#!/bin/sh
top -n1 > pstest.tx    # one iteration of top
date >> pstest.tx      # timestamp
./test1.sh             # run this same script again

/bin/sh is a link to bash.

What ticks me off is that when I cat pstest.tx, all I get is this:
Tue Jul 3 07:14:54 /etc/localtime 2001

And date by itself looks like this:
Tue Jul 3 03:15:33 EDT 2001

What's going on here?

It looks like Linux complained only because I had pstest.tx open about 400 times. The libncurses error means top died while loading its shared libraries (the system file table was full), so it never wrote a thing - which would explain why top -n1 > pstest.tx recorded nothing. The mangled date output probably has the same cause: with no file descriptors left, date couldn't read the timezone file, so it printed UTC time with the literal '/etc/localtime' where the zone name belongs. (Does OS X have ncurses, by the way, or just curses?)
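Even when top does load, I believe the procps version wants batch mode when you redirect it into a file; something like this should be the sane route (double-check the flags on a 2.2-era box, this is from memory):

top -b -n1 > pstest.tx    # -b = batch mode: plain text output, safe to redirect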
I wasn't logged in as root. The kernel is Linux 2.2.12-20.

I think the process count maxed out somewhere above 450 but below 500.
So what's going on?
 
Even 500 processes doesn't seem like very many to crash it.

Did the computer actually crash, or did the shell script just bail out? Maybe it's a "problem" with how nested the shell can be.

What if you ran a while loop that kept launching some persistent process in the background, again and again? I'm not sure what a good process would be. Maybe 'man'.
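Something along these lines, maybe - a rough sketch, with 'sleep' standing in for whatever persistent process you pick:

#!/bin/sh
# keep launching a long-lived process in the background,
# instead of nesting shells deeper and deeper
i=0
while true; do
    sleep 3600 &         # detach it; each pass adds one more process
    i=$((i+1))
    echo "spawned $i"    # the last number printed is how far you got
done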

-Rob
 
There is a per-user process limit, and it is probably a sysctl on Mac OS X as well.

The use of the limit is to prevent exactly the kind of fork(2) bombs you guys are attempting. Remember, BSD comes out of a university, and fork bombs are a favorite pastime of freshmen :)
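From bash, assuming your build supports it, you can see the per-user cap directly:

ulimit -u    # maximum number of processes available to a single user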
 
It just bombed the shell script. (It didn't bring the system down on either computer.) I'm going to try my script as root. (I know, he who play in root mess up tree...)

wish me luck
 
Typing sysctl kern.maxproc displays the maximum number of processes 'allowed in the system'. On my computer, that's 532.

Looks like sysctl -w kern.maxproc=1000 will up that to 1000 processes. Cool! Now, how can we set this on a per-user basis? And what if a user spawns a process under another username (i.e., root (system) runs httpd as user www)?
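For what it's worth, BSD-derived kernels often expose a per-user knob next to the global one; I haven't verified the name here, so treat it as a guess:

sysctl kern.maxproc              # the system-wide cap
sysctl kern.maxprocperuid        # the per-user cap, if your kernel has it
sysctl -w kern.maxproc=1000      # raise the global cap (run as root)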

Typing sysctl kern is very interesting.

(As y'all can see, I read the man page a bit...) Thanks for the advice on sysctl.
 