[Gc] leak with finalizers

Paolo Molaro lupus at debian.org
Sun Feb 20 02:36:34 PST 2005


On 12/24/04 Paolo Molaro wrote:
> Yep, confirmed. Sadly the C# example has a slightly different pattern and
> eventually it will run out of memory anyway. I'll try to reproduce it with
> a C program. Adding a GC_gcollect() at the right place in the C# code makes
> it work fine, too, so it seems to be very sensitive to when a collection
> happens...

In the last few weeks I made changes to the mono runtime so that more
objects are allocated as pointer-free memory. I also reduced the number
of runtime data structures that use libgc for allocation, ensuring
that only the minimum amount of memory is allocated with libgc.
This fixed the issue where the C# app in the test case did not behave
correctly when the max heap size was set.
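
For context, "pointer-free" here means memory obtained through
GC_MALLOC_ATOMIC() rather than GC_MALLOC(): the collector does not scan
such blocks for pointers, so large buffers of raw data neither slow down
marking nor cause false retention. A minimal sketch of the difference
(my illustration, not code from the mono sources):

#include <gc.h>       /* the include path may differ, e.g. <gc/gc.h> */
#include <string.h>

void *alloc_string_data(size_t len)
{
    /* The buffer only ever holds character data, never heap pointers,
     * so tell the collector it does not need to scan it. */
    char *buf = GC_MALLOC_ATOMIC(len + 1);
    memset(buf, 0, len + 1);   /* atomic memory is not zeroed by libgc */
    return buf;
}

void **alloc_object_slots(size_t n)
{
    /* This block does contain heap pointers, so it must be scanned:
     * use the ordinary, pointer-containing allocation. */
    return GC_MALLOC(n * sizeof(void *));
}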

> > I wouldn't be surprised if other implementations had similar problems
> > with such code.
> 
> The C# equivalent has been reported to work fine with the MS CLR.
> 
> > The workaround is to avoid finalization of objects that reference
> > huge amounts of memory.  If possible, finalize a smaller object that
> > is referenced by the main object instead.
> 
> Unfortunately we can't ask our users to rewrite their code:-)
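
The workaround quoted above boils down to hanging the finalizer on a
small proxy object that owns the external resource, instead of on the
object that holds the large buffer. A rough C illustration against the
libgc API (the structs and names are mine, for illustration only):

#include <gc.h>       /* include path may differ depending on the install */
#include <stdio.h>

struct handle {            /* small object: only this one gets a finalizer */
    FILE *fp;
};

struct big_object {
    struct handle *h;
    char *payload;         /* potentially huge buffer, never finalized */
};

static void close_handle(void *obj, void *client_data)
{
    struct handle *h = obj;
    (void)client_data;
    if (h->fp != NULL)
        fclose(h->fp);
}

struct big_object *make_big_object(const char *path, size_t payload_size)
{
    struct big_object *o = GC_MALLOC(sizeof *o);
    o->h = GC_MALLOC(sizeof *o->h);
    o->h->fp = fopen(path, "r");
    /* Register the finalizer on the small handle, not on the big object:
     * when the big object becomes unreachable its payload can be freed
     * right away, and only the tiny handle is kept around for the extra
     * collection cycle that finalization requires. */
    GC_register_finalizer(o->h, close_handle, NULL, NULL, NULL);
    o->payload = GC_MALLOC_ATOMIC(payload_size);
    return o;
}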

We have been testing the attached patch for the last few days.
It attempts to avoid increasing the heap size if a large number
of finalizable objects were allocated recently and the last
finalization runs freed some memory. This should ensure that
forward progress is made (though I guess in some cases a useless
GC may be done before deciding the heap really needs to expand).

The patch fixes the test case completely (no need to set the max
heap size) and many other programs running in mono behave much
better now: programs that used to increase the heap size until
they slowed to a crawl now have steady memory usage.

The 500 constant (the number of newly registered objects with
finalizers) is a guess that happens to strike a balance between
memory usage and performance: I guess we could make it configurable,
so that it can be lowered when the runtime knows that many of the
finalizers it creates point to potentially big arrays or objects.
I haven't tried this kind of runtime tweaking.
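
If it were made configurable, the simplest route would probably be an
environment variable read once at startup, along these lines (the
variable and the env var name below are made up for illustration; they
do not exist in libgc or in mono):

#include <stdlib.h>

/* Hypothetical knob: lower the "500 newly registered finalizable
 * objects" threshold when finalizers tend to guard large arrays. */
static unsigned long finalizer_expand_threshold = 500;

static void init_finalizer_threshold(void)
{
    const char *s = getenv("GC_FINALIZER_EXPAND_THRESHOLD"); /* invented name */
    if (s != NULL) {
        unsigned long v = strtoul(s, NULL, 10);
        if (v > 0)
            finalizer_expand_threshold = v;
    }
}
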
Comments welcome.

lupus

-- 
-----------------------------------------------------------------
lupus at debian.org                                     debian/rules
lupus at ximian.com                             Monkeys do it better
-------------- next part --------------
Index: ChangeLog
===================================================================
--- ChangeLog	(revision 40775)
+++ ChangeLog	(revision 40776)
@@ -1,3 +1,11 @@
+
+Wed Feb 16 22:30:54 CET 2005 Paolo Molaro <lupus at ximian.com>
+
+	* alloc.c: tune the code to collect instead of expanding
+	the heap if there are many finalizers and we reclaimed some
+	memory from cleaning the finalization queue (should fix
+	bug #71001 and #70701).
+
 2005-02-07  Geoff Norton  <gnorton at customerdna.com>
 
 	* include/private/gc_priv.h: Bump the max root sets to 1024
Index: alloc.c
===================================================================
--- alloc.c	(revision 40775)
+++ alloc.c	(revision 40776)
@@ -1021,13 +1021,20 @@
 			/* How many consecutive GC/expansion failures?	*/
 			/* Reset by GC_allochblk.			*/
 
+static word last_fo_entries = 0;
+static word last_words_finalized = 0;
+
 GC_bool GC_collect_or_expand(needed_blocks, ignore_off_page)
 word needed_blocks;
 GC_bool ignore_off_page;
 {
     if (!GC_incremental && !GC_dont_gc &&
-	(GC_dont_expand && GC_words_allocd > 0 || GC_should_collect())) {
+	(GC_dont_expand && GC_words_allocd > 0 
+	|| (GC_fo_entries > (last_fo_entries + 500) && (last_words_finalized  || GC_words_finalized))
+	|| GC_should_collect())) {
       GC_gcollect_inner();
+      last_fo_entries = GC_fo_entries;
+      last_words_finalized = GC_words_finalized;
     } else {
       word blocks_to_get = GC_heapsize/(HBLKSIZE*GC_free_space_divisor)
       			   + needed_blocks;
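
Rendered outside the diff, the new decision in GC_collect_or_expand()
reads roughly as follows; the identifiers mirror the libgc internals
used in the patch (GC_fo_entries, GC_words_finalized, ...) and the
surrounding !GC_incremental && !GC_dont_gc guard is omitted. This is
only a self-contained illustration, not the runtime code itself:

#include <stddef.h>

static size_t last_fo_entries;        /* finalizable entries at the last GC */
static size_t last_words_finalized;   /* words reclaimed by finalizers then */

/* Returns nonzero when collecting is preferred over growing the heap. */
static int prefer_collection(size_t fo_entries,      /* GC_fo_entries */
                             size_t words_finalized, /* GC_words_finalized */
                             size_t words_allocd,    /* GC_words_allocd */
                             int dont_expand,        /* GC_dont_expand */
                             int should_collect)     /* GC_should_collect() */
{
    if (dont_expand && words_allocd > 0)
        return 1;                  /* heap growth disabled: always try a GC */
    if (fo_entries > last_fo_entries + 500            /* many new finalizable objects */
        && (last_words_finalized || words_finalized)) /* and finalization frees memory */
        return 1;
    return should_collect;         /* otherwise the usual heuristic decides */
}

After a collection the patch records the current values, i.e. it sets
last_fo_entries = GC_fo_entries and last_words_finalized =
GC_words_finalized, so "recently" always means "since the last GC".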

