[Gc] leak with finalizers

Boehm, Hans hans.boehm at hp.com
Wed Dec 22 15:36:32 PST 2004


Interestingly, I managed to reproduce it only on an X86 machine running
NPTL.

I think I sort of understand the problem, and NPTL doesn't have much to do with
it.  The example just seems to be at the edge of causing this behavior,
and something about NPTL or one of the newer libraries pushes it over.

The problem is that it gets into a mode in which a garbage collection
reclaims very little, since everything is finalizable and can't
actually be collected in that cycle.  Because so little was reclaimed,
only a little more can be allocated before the next collection, which
therefore finds a small GC_words_allocd value, decides the heap is too
small, and grows it.  Since the heap has grown, GC frequency decreases,
more finalizable stuff gets allocated between GCs, and so on.
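
Schematically, the kind of code involved looks something like this
(just a sketch with made-up names, not Paolo's actual test program):

    #include "gc.h"

    /* Small finalizable object that keeps a large, non-finalizable
     * array alive. */
    typedef struct node {
        char *array;       /* large block, only reachable from here */
    } node_t;

    static void node_finalizer(void *obj, void *client_data)
    {
        /* contents don't matter; registering it is what matters */
    }

    int main(void)
    {
        int i;
        GC_INIT();
        for (i = 0; i < 1000000; i++) {
            node_t *n = GC_MALLOC(sizeof *n);
            n->array = GC_MALLOC(64 * 1024);
            GC_REGISTER_FINALIZER(n, node_finalizer, NULL, NULL, NULL);
            /* n is dropped right away, but the collection that finds
             * it unreachable only queues its finalizer; both n and
             * the 64K array survive that cycle, so little is
             * reclaimed. */
        }
        return 0;
    }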

If you set GC_MAXIMUM_HEAP_SIZE to a reasonable value, everything
is OK.
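
If I remember the interface right, the same cap can also be set from
the program itself; GC_set_max_heap_size() in gc.h should be
equivalent to the environment variable:

    #include "gc.h"

    int main(void)
    {
        GC_INIT();
        /* Cap the collector's heap at roughly 64 MB; beyond this the
         * collector collects rather than expands the heap. */
        GC_set_max_heap_size(64UL * 1024 * 1024);
        /* ... run the allocation loop as before ... */
        return 0;
    }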

This was a known problem.  The GC keeps track of GC_words_finalized
so it gets credit for finalizable objects in addition to reclaimed memory.
Hence it isn't supposed to grow the heap in cases like this anymore.

But that mechanism is defeated here, because the large objects are not
themselves finalizable; they are only referenced by finalizable
objects, so the credit covers the small finalizable objects but not
the huge arrays they keep alive.

I have to run now, but I'll think about it some more.

I wouldn't be surprised if other implementations had similar problems
with such code.

The workaround is to avoid finalization of objects that reference
huge amounts of memory.  If possible, finalize a smaller object that
is referenced by the main object instead.
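
Something along these lines (made-up names again; the point is that
the object registered for finalization no longer reaches the big
array):

    #include <stddef.h>
    #include "gc.h"

    /* Small object that actually needs finalization, e.g. because it
     * owns an external resource.  It must NOT point at the large
     * data. */
    typedef struct handle {
        int fd;
    } handle_t;

    static void handle_finalizer(void *obj, void *client_data)
    {
        /* release the external resource owned by the handle here */
    }

    /* Large object: it gets no finalizer of its own and just
     * references the small handle. */
    typedef struct big {
        char     *array;      /* the huge allocation */
        handle_t *handle;     /* the only finalizable piece */
    } big_t;

    big_t *make_big(size_t bytes)
    {
        big_t *b  = GC_MALLOC(sizeof *b);
        b->array  = GC_MALLOC_ATOMIC(bytes);
        b->handle = GC_MALLOC(sizeof *b->handle);
        /* Register the finalizer on the small handle only.  When a
         * big_t becomes unreachable, its array can be reclaimed in
         * the same collection; only the tiny handle waits for
         * finalization. */
        GC_REGISTER_FINALIZER(b->handle, handle_finalizer,
                              NULL, NULL, NULL);
        return b;
    }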

Hans


> -----Original Message-----
> From: gc-bounces at napali.hpl.hp.com
> [mailto:gc-bounces at napali.hpl.hp.com]On Behalf Of Paolo Molaro
> Sent: Wednesday, December 22, 2004 1:30 PM
> To: Gc at napali.hpl.hp.com
> Subject: Re: [Gc] leak with finalizers
> 
> 
> On 12/22/04 Boehm, Hans wrote:
> > Did you see a growing leak?
> 
> Yes, it gets fairly quickly to 180 megabytes in under a minute,
> then it slows down a bit and leaks about 100 MB per minute.
> I stopped it at 400 MB.
> 
> > If so, I can't reproduce the problem.  I saw the process stabilize
> > at anywhere between 17 and 58MB.  This was under Linux, on IA64
> > and X86.
> 
> I'm running it under Linux x86 (kernel 2.6.8ish and nptl) and
> Linux/ppc with linuxthreads.
> 
> > 17 Megabytes looks reasonable to me.
> > I did not yet investigate why it takes 58MB in some cases.  That
> > does seem high.  On the other hand, this benchmark is such that a
> > few old pointers left around on the stack could drastically affect
> > memory use.  And I got the higher numbers after enabling threads,
> > which might cause certain stack locations not to be overwritten
> > regularly.  And the finalization count reflects the fact that a
> > few lists are still being considered live.
> 
> I used the debian package and it has thread support enabled, though
> it shouldn't matter much, since the app doesn't use threads. The stack
> locations should get overwritten at each loop, at least the ones in
> main(), so at most an object or two should not be collected (when
> the list is zeroed).
> If it helps, allocating the obj->array with GC_malloc_atomic()
> makes things much worse: it got to 400 MB in 20 seconds (but this
> may be because it just allocates faster).
> 
> > If you were seeing this on Windows, could you retry with 6.4, and
> > see if it's fixed there?
> 
> Sorry, I forgot to mention it in the first mail, but no Windows is
> involved here.
> 
> > (Even on Linux, that might be interesting, since the GC_words_wasted
> > calculation was wrong in 6.3, and that could make a difference,
> > though I didn't actually see much of that.)
> 
> I didn't know about the 6.4 release; maybe I missed the mail.
> Just downloaded it: it's a bit better, with GC_malloc_atomic() it
> gets to 400 MB in a minute instead of 20 seconds.
> Attached is the log from GC_DUMP_REGULARLY (with GC_malloc_atomic()
> for obj->array and GC_no_dls set to TRUE, to make sure that is not
> causing mem retention).
> Thanks.
> 
> lupus
> 
> -- 
> -----------------------------------------------------------------
> lupus at debian.org                                     debian/rules
> lupus at ximian.com                             Monkeys do it better
> 

