Mr K
Thought this might interest you.
From:
http://www.macosxhints.com/article.php?story=20010613140025184&query=pageouts
I collected the following illuminating posts by Barry Sharp on system memory management from the Apple discussion boards.
- Dennis Hill
[Editor's note: Dennis suggested I cut this down to a concise summary, but I thought I'd just publish them as they were written by Barry; he obviously has a great deal of knowledge about Mac OS X! These emails were originally sent by Barry to Ted Landau at MacFixIt, and then were posted to the discussion group where Dennis found them. So if you'd like to learn a lot more about OS X's usage of memory, read the rest of this article. It's a bit long, and can get technical at times, but I found it very interesting.]
--------------Email-1
Ted:
Virtual memory (VM) is just what it says -- "virtual" -- it really doesn't exist. The VM size is NOT consuming any disk space.
Unless a user's X system is performing swapping there's absolutely no need to worry about the swap file size nor its location. Swapping activity can be observed via the "0(0) pageouts" figure in the last header line of the Terminal top command. Another useful Terminal command is the vm_stat(1) command (see man vm_stat). This command also displays the number of pageouts. The pageout value is an indication that physical memory is being paged (swapped) to the swap file. This i/o is done in page chunks. A page chunk is 4096 bytes in size.
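For example, both figures can be checked from a Terminal window (the exact header wording, and whether your copy of top supports the -l logging option, may vary a little between X releases):
localhost% vm_stat | grep -i pageout
localhost% top -l 1 | grep -i pageouts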
When physical memory is paged (swapped) to the swap file it is being done so because physical memory is being over-subscribed. The best solution for avoiding frequent over-subscription of physical memory is to have fewer Apps running at the same time or to install more physical memory. When physical memory becomes over-subscribed the OS will seek out inactive memory pages and copy them to the swap file in order to make room for the active memory pages -- which may have to be copied from the swap file back into physical memory.
Excessive and continuing swapping in a VM UNIX system is BAD and should be avoided at all costs. One has to have a swap file to deal with memory over-subscription.
If a user observes pageouts to be non-zero AND growing rapidly then more memory should be installed, or else the memory subscription should be reduced by running less work on the machine at the same time.
Taking time and effort to place and configure the swap file is for the most part futile and is an attempt to hide the real problem of over-subscribing physical memory.
Also, note that in a multi-CPU system there's no real concern about swapping activity if, while swapping is being done, the CPUs are kept busy with other work. Swapping and CPU work can proceed simultaneously. Only when the CPUs run idle waiting for swapping in/out to complete is there a problem with swap performance. In this case placing the swap file on a very high-speed device will be beneficial.
My advice for most home computer users of X is to leave the swap file placement and its config alone and concentrate on ensuring the machine has ample physical memory.
I've been in the supercomputing UNIX business a long time and this aspect of swap file placement and config has been well and truly discussed and the conclusions are as I mentioned above.
If a UNIX system employs non-VM memory management (that is, real memory) the issue of swapping is a different beast altogether. This is because when swapping memory out it has to be done in large contiguous chunks (not small pages of 4096 bytes). For this reason it's important that the swap file space on disk be a contiguous set of tracks/cylinders and if possible have a separate data path to avoid interfering with other user i/o activities.
Regards... Barry Sharp
================================================
---------Email-2
Ted:
After sending you my last post on "Virtual Memory swapfiles and OS X performance" I had some further thoughts related to two claims many people post:
a) no matter how much RAM I have installed the X system appears to need it all
and
b) the X system appears to perform better with increasing time of usage
Although I don't have the X kernel source code I can easily speculate why these statements are being made, and also why they are valid.
First, let's digress just for a moment back to 9.1. In 9.1 there were two configuration options that relate to what's going on in X.
They are the Disk Cache and the RAM Disk features.
The Disk Cache can default to some size or be overridden. This cache is used to hold frequently used disk data or data that simply is being written out to disk. The idea is for the data to be more readily available to Apps when they need it, and so it avoids data having to be read from disk. Memory-to-memory transfers are very much faster than disk-to-memory and vice versa. The important thing to note here is that this Disk Cache is static. Its size never changes. If you make it large it takes memory away from what's available for Apps. If it's made too small it is ineffective. There's also some danger in caching data in memory: if the system crashes, this data may not have made it to disk for safekeeping and recovery after the reboot.
The RAM Disk feature is similar in nature to the Disk Cache in that it's static and takes precious memory away from Apps that may need it. Its usefulness lies in the fact that some Apps need to reuse their data files repeatedly. If these data files all fit into the RAM Disk then great benefits can be obtained by avoiding the much slower disk data transfers.
In X, on the face of it, neither of these features is present.
However, X's underpinnings (ie the UN*X kernel) provide both these features without any input being needed from the user. It's called the file system buffer cache. The one most significant difference is that the size of this buffer cache is dynamic. It starts off at some small size and can grow and shrink as the i/o demands and the Apps' memory requirements vary over time.
It's called a 'buffer cache' because it buffers the i/o data on its way to/from the disk. When an App writes data, the data will first be deposited into the App's file buffer memory region, and the App will subsequently request, via library routines, that the kernel (the OS) copy it from the App's buffer to disk. The kernel will oblige and will copy it first to its own buffer -- the file system buffer cache. If the kernel requires more room in its buffer cache it will obtain it from the free memory. When this happens the free memory value, in say the Terminal's top command, will immediately show a reduction of free memory. At some later point the kernel will copy this data (referred to as dirty buffers) to the appropriate disk location. I believe the frequency of this being done is 30 secs -- called sync-ing to disk.
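Incidentally, if you don't want to wait for the periodic sync, the standard UNIX sync command asks the kernel to write its dirty buffers out to disk right away (I'd expect it to behave the same way on X, though I haven't timed it myself):
localhost% sync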
As the usage of X increases with time without rebooting the kernel file system buffer cache will fill with the most needed or most frequently used data. This should help explain why some people claim that the system appears to perform better the longer they've been running X. The needed data for doing things (or maybe most of it) is now all resident in memory (the kernel's buffer cache) and doesn't need to be read from disk. This is much much faster.
As mentioned above, the kernel will expand its buffer cache on demand by using the free or unused memory in the machine. This explains why, over time (could be a short period of time or a long period of time -- it depends on system usage/workload), the system appears to be using all of the available RAM per the Terminal's top command.
One other point to make is that if the kernel's buffer cache has grown to be quite large and is consuming a large percentage of the installed RAM there's no harm being done. If a new App is launched the kernel will release as much of its buffer cache as needed. First it will release parts of the buffer cache that aren't 'dirty' until it figures it can honor the new App's memory demand. If by releasing all the non-dirty buffer segments it still requires more memory for the App then it will start writing the dirty segments of the buffer cache to disk and releasing their memory, which in turn can be given to the new App. This stops when all the memory required by the new App is satisfied. In this manner the kernel buffer cache shrinks down in size. There's probably a minimum size to which it will shrink. At that point the kernel will start looking for other memory that's inactive. This could be a dormant App's memory. In this case the kernel will start to page out the dormant App's memory hoping to satisfy the new App's memory requirements. This kernel activity is called paging or swapping.
When the kernel starts to perform swapping it's a sign that the physical memory in the machine has been oversubscribed. Continual swapping will impact the overall system performance -- things will become unresponsive and much disk i/o seeking will be apparent/heard. This type of activity is displayed in the Terminal's top command with the value immediately preceding the "pageouts". If this number is non-zero and increasing rapidly over short periods of time then severe swapping is taking place. This is bad.
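A convenient way to watch for this is to give vm_stat an interval argument (see man vm_stat) so it prints a fresh line of statistics every few seconds; if the pageout column keeps climbing while you work, memory is over-subscribed:
localhost% vm_stat 5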
If severe swapping is taking place then either more RAM must be installed or the workload in the machine needs to be reduced. Of course the system will continue to run but not at its optimal performance levels.
I believe the above helps explain why people claim the X system performs better over time without intervening reboots and why the system consumes all of memory no matter how much RAM is installed.
A good example of seeing the kernel buffer cache in action is to use the Terminal App. In Terminal perform a copy of a large file. I chose to copy the swapfile as I know it's large (you'll need to be root in order to do this). Also it helps to have a second Terminal window active with the top command running.
localhost% cp /var/vm/swapfile0 ./bigfile
If top showed some 100MB of free memory prior to this copy you'll notice that the free memory falls rapidly to around 4-5MB. This is because the kernel has consumed just about all of the available free memory for its buffer cache.
Now remove the ./bigfile
localhost% rm ./bigfile
What happens in the top display? Well, all of a sudden you should see free memory shoot way up. This is because the kernel no longer requires all that space in its buffer cache that's holding ./bigfile AND it doesn't need to write it out to disk BECAUSE you removed it.
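If you'd rather not keep a second window open with top running, you can also take one-off snapshots of the memory line before the copy and again after the remove (this assumes your copy of top supports the -l logging option; the wording of the PhysMem line may differ slightly between releases):
localhost% top -l 1 | grep PhysMem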
So my advice is
a) don't be too worried about free memory being small in the top's display
b) keep an eye on pageouts and, if they're increasing rapidly with time, reduce the machine's workload or add RAM if that workload is a requirement.
c) Don't mess with relocating or sizing the swapfile -- an interesting exercise but really quite futile for the average Joe using X on iBooks, iMacs, etc. You simply should avoid severe swapping at all cost.
One last point -- I speculate that X attempts to always keep a small amount of free memory. I've never seen mine dip below 3MB. I believe this is for avoiding a memory deadlock situation whereby the kernel needs memory to perform a critical function and cannot page any memory out -- a nasty situation.
Some of the following is speculative on my part as I don't have OS X kernel source code nor have I performed any real hands-on experimentation. I leave it to you to figure whether in your case moving the swapfile to a specially configured HD or HD partition provides any benefit. I will offer some suggestions and opinions along the way though.
1. (FACT) Each time X boots it removes any of the swap file segments -- ie swapfile0, swapfile1, swapfile2, ...., swapfileN
2. (FACT) Each time X boots it will create a file /var/vm/swapfile0 that I understand to be 80MB in size (see the Terminal example after this list for a quick way to look at these files).
3. (SPECULATIVE) When X creates the swapfile0 file it may or may not be forced to be contiguous on disk. If not then it will be scattered about on the HD (ie fragmented to a small or large degree -- depends on how fragmented the HD's free space is at the time).
4. (SPECULATIVE) I believe when X pages/swaps memory it does so using well-formed i/o and transfers data in 4096 byte chunks (called pages). The minimum allocation size for HFS+ is 4096 bytes. It's unclear whether X pages/swaps using multiple 4096 byte chunks in a single i/o request. If not, then paging/swapping is done transferring single 4096 byte chunks.
5. (FACT) If the kernel buffer cache has a series of 4096 byte chunks that all map to a single contiguous disk address range then paging/swapping this series of chunks will be much quicker if the kernel organises the series of 4096 byte chunks in the proper order and issues a single i/o request. The same would be true if the data were coming from disk to memory.
6. (FACT) Application's memory can be scattered throughout main memory (RAM) -- it's not necessarily contiguous.
7. (FACT) Many Applications share memory resident re-entrant code fragments with other Applications and or system support programs. Typically these code fragments never get paged/swapped out as they are in constant use.
8. (FACT) If X finds itself having to page-out/in (swapin/swapout) constantly to meet the users memory demands the system's responsiveness will go 'down the toilet' in a hurry.
This activity can be observed by monitoring pageouts in the top display.
A constant stream of pageouts and pageins is not good and means main memory has been oversubscribed. If this is unavoidable for some reason then if X pages out a series of contiguous 4096 byte chunks rather than many individual 4096 byte chunks then having a swapfile that's not fragmented (be it on the internal HD or not) provides some benefit. If on the other hand X always does its swapfile i/o in single 4096 byte chunks then it makes absolutely no difference if the swapfile is fragmented vs. not fragmented (ie a contiguous set of HD tracks/cylinders). However, this situation should be resolved by adding more RAM not by messing with swapfile placement.
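Regarding points 1, 2, and 4 above, both the swapfile(s) and the page size are easy to inspect from Terminal. The swapfiles live in /var/vm, and vm_stat mentions the page size in the first line of its output (the exact paths and wording here are as I understand them on X, so check them against your own release):
localhost% ls -l /var/vm/
localhost% vm_stat | head -1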
Sooooo, if X does insist on having the swapfile0, swapfile1, etc be contiguous, it matters not where it's located -- on the internal HD, a separate internal HD partition, or a separate disk altogether.
On the other hand, if X doesn't insist on having the swapfile be contiguous, then only when your system performs severe paging/swapping in/out will having it on a separate partition all by itself, or on a separate HD used exclusively for the swapfile, be of any benefit. I suggest however, that if this is the case you either cut back on the amount of workload in the system or install additional physical memory (RAM) to gain the full performance potential of your system.
It's kinda like the congestion on the freeways -- if the freeways are congested the solution is to throttle back the number of cars entering the freeways (ie reduce the workload) or build wider or more freeways (ie install more RAM).
My guess is that when X creates the swapfile on your HD it does so in such a way as to make it contiguous. If this is correct there's absolutely no advantage in placing the swapfile elsewhere.
I will try to find time to explore this aspect of the X default placement of the swapfile(s) later and post back.
Hope this rather long-winded explanation helps some.
Regards... Barry Sharp
I'm speculating and basing my answers on my UNIX OS experiences.
PhysMem is just that -- physical memory -- your installed RAM.
1. Wired = memory allocated that shouldn't/can't be swapped/paged out (ie it's locked into memory -- possibly portions of the OS code, for example).
2. Active = allocated memory that has been accessed during the last N seconds.
3. Inactive = allocated memory that hasn't been accessed during the last N seconds (quite likely to be the first candidates for being swapped/paged out if memory is being demanded).
4. Used = Wired + Active + Inactive (see the worked example after this list).
5. Free = memory that isn't allocated to any process or the kernel.
6. VM = Virtual Memory (a fictitious amount of memory that represents a process's upper potential limit for its memory allocation or requirements -- very rarely ever requested). I'm not sure what the + 44.0M represents.
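As a purely hypothetical worked example of how those numbers hang together, a PhysMem line in top might read something like the following (the exact formatting varies between X releases):
PhysMem: 38.7M wired, 45.9M active, 37.2M inactive, 122M used, 6.20M free
Here 38.7 + 45.9 + 37.2 comes to roughly 122M used, and used plus free accounts for the 128MB of installed RAM in this made-up machine.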
Remember that the command top shows both instantaneous and historic data. The PhysMem line shows actuals at the time top makes its inquiry to the kernel whereas the pageins and pageouts display activities since the machine was booted. Sooooo at some previous point apparently, in your case, the system/kernel had to swap/pageout some memory in order to accommodate a memory request issued by a user or kernel process.
Hope that was brief enough and helps you understand things better.
Regards... Barry Sharp