Disabling VM

musey79

Registered
So, OS 10.1 is cool. A few things need to be addressed, but nothing anyone hasn't already pointed out... however, I am saddened to see that yet another feature has left our grasp: the ability to control VM. I have plenty of physical RAM installed on my iMac and I certainly do not need my HD churning through the code of every program I launch, not to mention I'd like to extend the life of my HD. Using VM doesn't help that, from what I understand. (?)

Anyone know how to disable the Virtual Memory in OS 10.1?

Thanks! =)
 

kenny

Registered
(In a deep and theatrical voice) "Welcome to the World Of Tomorrow!!!!"

It's funny - for years now, I've been hearing the MacFaithful wondering "when are we going to get modern memory management?" "It's ridiculous that I have to tell the system how much memory an app should get" and so on... I've been one of that group. It's been a long long wait for things that the Windows/Unix/Linux/etc crowd have had for years and even decades! And now that we have it, a vocal minority wants to turn it off... Amazing. (please note, I'm not picking on you, I've seen similar questions posted elsewhere, and I... just... snapped... :) )

That said, given the way memory management is done on Unix (and Windows now, too), there is likely NO WAY to turn it off without some serious modifications to the kernel. Memory managers set up the memory address space in such a way that it is not yet practical to fill that space with real memory. For instance, a 32-bit system allows for up to 4GB of memory (2^32 = 4294967296 bytes). The memory manager sets up a full 4GB of space, and since there isn't nearly as much actual memory, paging is likely, nay - certain, to occur.
 

zarafa

Registered
If you really do have "plenty of Physical RAM", then VM will never get used (memory isn't paged out just for the fun of it). If you don't actually have enough RAM, you'll need VM. Either way, disabling it would give you absolutely no advantage.

Virtual memory on modern OSes (Unix variants, Win2000/XP, OS X, etc.) really has nothing to do with the primitive memory management of MacOS prior to X.

Though this isn't directly relevant, I'm including below a repost of a newsgroup posting made on comp.sys.next.misc back in 1994 by none other than Avie Tevanian. Several of the points he discusses aren't really valid for OS X anymore (a monolithic WindowServer no longer exists, for instance, and the new VM system is a lot more advanced in many ways, including the ability to shrink or eliminate physical swap files when no longer needed), but it's still one of the better posts I've seen that discusses some of the complexities of virtual memory.



\begin{repost from comp.sys.next.misc, Jun 1994}

From: avie@next.com (Avadis Tevanian)
Subject: Re: Why does NS require so much Memory?

In article <1994Jun5.221433.24748@sifon.cc.mcgill.ca>
samurai@cs.mcgill.ca
(Darcy BROCKBANK) writes:
> Oh well... can someone more informed than me *please* take up
> this discussion, because I don't have enough knowledge on this
> to come to the correct conclusion.

Here are the facts on how swapfiles work:

For every page in the swapfile, the kernel maintains status telling
whether that page is in use or not. When a swapfile is enabled
(mach_swapon), it is truncated to lowat and each page is flagged
as free. When the page out daemon requests a page to be swapped
out, the pager locates the first free page in the swapfile (actually,
there is an algorithm to determine which swapfile is used, if more
than one is enabled, but I will omit this from the discussion).
The first free page is defined as the lowest numbered page. As
more and more memory is consumed by processes, higher and higher
numbered pages are used. When all pages in the swapfile are in
use, an additional page out causes the swapfile to be extended in
size. This occurs until hiwat is reached. If hiwat is reached,
or if the file system is out of space, the page will be left in
memory (unless there is another swapfile enabled that can be used).
If the system stays in this state, it will eventually be full of
dirty pages which can not be paged out. When this happens, the
system comes to a grinding halt as it is forced to use fewer and
fewer pages of memory (memory is filled with dirty pages that can
not be paged out).

Now, it gets interesting when we consider what happens when memory
is freed. In particular, when a process exits or calls vm_deallocate,
the VM system attempts to free any memory that was associated with
the appropriate regions of virtual memory. When memory is shared,
it simply makes a note that there is one fewer reference to the
shared memory (or copy-on-written memory) and no further action is
taken. If this is the last reference to the memory, any corresponding
physical pages are freed from main memory and any corresponding
pages in the swapfile are tagged as free. A subsequent allocation
of a page on the swapfile will most definitely reuse this page!

When a page is freed, if it is the highest page in the swapfile,
the swapfile will be truncated all the way down to the highest page
in use (down to lowat). In practice, this happens rarely. The
basic problem is that if you have a long-running process using a very
high numbered page (e.g., if the Windowserver allocates a high
numbered page) the swapfile will not get truncated until that
process exits --- which could be a very long time. When this
happens due to a core process (e.g., the nmserver), which cannot
be restarted unless the system is rebooted, your swapfile will
remain large. Still, there can be lots of free pages in the
swapfile, and rest assured they will be reused!

So why don't we compact the swapfile to handle these pages that
get allocated at high page numbers? Good question. We've considered
doing it many times. However, it has always been considered a
quite risky change (how many of YOU have debugged a virtual memory
system before) and would need to be done very carefully to ensure
correctness and adequate performance. As an example, it would not
be acceptable to just start a compaction and cause the system to
lock up as the kernel does several megabytes of I/O for the
compaction. The relative merits of making this improvement have
never outweighed the costs in risk and the opportunity costs of
not working on other parts of the system. I'm not saying we'll
never do it, I'm just saying we haven't done it yet for some
carefully considered reasons.

Having said all of this, why do so many people seem to have problems
with their swapfiles? Here are some possible explanations:

1) Not everyone realizes just how much memory their apps use. As
has been mentioned before, the Windowserver keeps backing store
for all the windows (on or off screen). On 16-bit color systems
this can be quite large; on 24-bit systems it's downright huge!
Simple images on the screen can translate into megabytes of storage.
Mathematica sessions are notorious for consuming 10's or even 100's
of megabytes of VM.

2) Programs occasionally have memory leaks. We work hard to be
sure that the software we release does not have leaks. There's a
reason we developed MallocDebug! I think we do pretty well, but
I'm sure there are some bugs. For example, the Windowserver, with
its printer heritage, has long had problems with correctly managing
its memory. On the printers they just "reset" the memory heap for
each new job --- we can't do that. If/when the Windowserver leaks
we get a double whammy since not only do we leak a small amount of
memory, but the Windowserver is a long running process and tends
to hog those high numbered pages. I think NEXTSTEP ISV's generally
do a good job too, but it only takes one or two apps to leak memory
and cause problems.

3) As many of you know, Mach has a quite advanced virtual memory
scheme, which NEXTSTEP makes excellent use of. Features like
copy-on-write and pageable read/write sharing can cause complex
relationships between memory and how it is mapped into one or more
processes. There is one known optimization that the kernel does
(specifically the coalescing of adjacent memory regions when backing
store has not yet been allocated --- for those of you Mach VM
literate) which sometimes causes the freeing of some memory to be
delayed until a process has exited. The situations when this
happens are fairly rare, and worst case the memory is freed when
the process exits, but it wouldn't surprise me if this is the cause
of isolated problems.

I personally think the Mach swapfile solution is quite good. I'm
obviously biased though. Sure, there are a few things I think
could be improved, but that's true of any piece of software.
Overall I think we've made some reasonable trade-offs. I also
think swapfile management is fairly bug-free. We know we can
improve the situation in (3) above (but it is difficult). Certainly
if anyone has any other possible reasons for swapfile growth,
especially with concrete examples of programs, let us know so we
can investigate!

I'd be more than happy to read suggestions others have on improving
how swapfiles work. I can't guarantee we'll implement them, but
you never know!

I hope this sheds a little light on the whole swapfile discussion.
Somehow I think it will still continue on --- but hopefully it can
be grounded with a few more facts now.

Avie

\end{repost}
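The swapfile policy Avie describes (lowest free page first, grow the file toward hiwat, truncate only from the top) can be sketched as a toy model. This is purely an illustration of the algorithm as he explains it, not Mach code; the class and method names are made up:

```python
# Toy model of the Mach swapfile policy described in the repost above.
class Swapfile:
    def __init__(self, lowat, hiwat):
        self.lowat = lowat
        self.hiwat = hiwat
        # On mach_swapon the file is truncated to lowat, all pages free.
        self.in_use = [False] * lowat

    def page_out(self):
        """Pick a page for a page-out; None means the page stays dirty in RAM."""
        for i, used in enumerate(self.in_use):
            if not used:                       # first free = lowest numbered page
                self.in_use[i] = True
                return i
        if len(self.in_use) < self.hiwat:      # extend the file by one page
            self.in_use.append(True)
            return len(self.in_use) - 1
        return None                            # hiwat reached (or disk full)

    def free(self, page):
        self.in_use[page] = False
        # Truncation only happens from the top, down to the highest
        # page still in use (but never below lowat).
        while len(self.in_use) > self.lowat and not self.in_use[-1]:
            self.in_use.pop()


s = Swapfile(lowat=2, hiwat=4)
assert [s.page_out() for _ in range(4)] == [0, 1, 2, 3]  # grows to hiwat
assert s.page_out() is None       # hiwat hit: dirty page stays in memory
s.free(1)
assert len(s.in_use) == 4         # freeing a low page doesn't shrink the file...
assert s.page_out() == 1          # ...but the free page is reused right away
s.free(3)
assert len(s.in_use) == 3         # only freeing the top page truncates
```

The last three assertions are exactly the behavior Avie is defending: a long-lived process holding a high numbered page pins the file size, yet the free pages below it are never wasted.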
 

lgerbarg

Registered
Originally posted by kenny

That said, given the way memory management is done on Unix (and Windows now, too), there is likely NO WAY to turn it off without some serious modifications to the kernel. Memory managers set up the memory address space in such a way that it is not practical yet to fill that space with real memory. For instance, 32-bit system allows for up to 4GB of memory (2^32 = 4294967296). The memory manager sets up a full 4GB of space, and since there isn't nearly as much in actual memory, paging is likely, nay - certain, to occur.
That is not at all how things work. Our system sets up a 4GB address space for every app, and for the kernel. All of the pages in that space that are not actually in use are not mapped to real memory, and thus consume none. As soon as everything on the system uses more RAM than you have, you will start hitting swap. I can allocate immense amounts of memory; until I actually use it, it costs me nothing besides a couple of bits in the page tables.
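Louis's point, that reserving address space costs nothing until pages are touched, is easy to see for yourself. A minimal sketch, assuming a Unix-like system and Python's standard mmap module (the 256 MiB size is arbitrary):

```python
import mmap

# Reserve 256 MiB of anonymous virtual address space. On a Unix-like
# system nothing is committed yet: the kernel just records the mapping.
size = 256 * 1024 * 1024
region = mmap.mmap(-1, size)

# Touching a page triggers a page fault, and only then does the kernel
# back that one page with real memory; the other ~65535 pages stay unmapped.
region[0:5] = b'hello'
assert region[0:5] == b'hello'

region.close()
```

Watching RSS in `top` while a program like this runs shows resident memory growing only as pages are actually written, not at allocation time.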

One important thing to remember is that VM and swapping are different. VM just means you can remap one block of memory to another. Swapping generally uses this by setting up the mappings for a program so that pages that have been swapped out are marked as invalid. When a read or write to such a page occurs, it triggers a processor exception, which the kernel can process. Hence swapping uses VM, but VM does not imply disk swapping.

Louis
 

musey79

Registered
I totally appreciate the info guys, and kenny, I didn't take it personally. In fact it's funny, it IS a new OS, a "new" way of thinking. MODERN MEMORY MANAGEMENT doesn't mean jack to me *lol* but I'll take your word for it and heed your advice! :)

M
 