[Gc] leak with finalizers

Paolo Molaro lupus at ximian.com
Fri Feb 25 07:03:00 PST 2005

On 02/23/05 Hans Boehm wrote:
> I'm very surprised that this shows up regularly in .NET applications.
> That suggests that a very large fraction of objects are either
> finalizable, or reachable from finalizable objects.  I've never seen
> that outside of .NET.  But I've heard similar things from others about
> .NET applications.  I'd love to understand the discrepancy.

Well, I'm not sure this has anything to do with .NET per se, and it's not
that this happens regularly: most regular apps have no such issue.
Also note that there is no need for many objects to be finalizable:
it's just that a finalizable object can point to big objects, as happens
with file streams, for example (where big can be a few KBs).
The kind of apps where the issue shows up are primarily server-side
applications: we have been testing them with hundreds of thousands of
I'm sure the allocation pattern may be different, but it may also be that
mono is pushing the limits of the usage patterns of libgc, by providing
a large user base with a diversity of applications. We've been
reasonably happy so far, though we're trying more and more to
limit the amount of memory that is inspected conservatively,
because memory retention seems to be an issue for long-running
processes with heavy activity.
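
The retention pattern described above can be sketched as a small Java
analogue (the class and the numbers are illustrative, not mono's actual
FileStream): the finalizable object itself is tiny, but it keeps a
multi-KB buffer alive until its finalizer runs, so a few hundred
thousand of them pin hundreds of megabytes across a collection cycle.

```java
// Hypothetical wrapper mirroring the file-stream case from the mail:
// the object is small, but it retains a multi-KB buffer until close()
// is called, either explicitly or from the finalizer one GC cycle later.
class BufferedHandle {
    // "big can be a few KBs" -- the payload kept alive by this object
    private byte[] buffer = new byte[8 * 1024];
    private boolean closed = false;

    int retainedBytes() { return closed ? 0 : buffer.length; }

    void close() { buffer = null; closed = true; }

    @Override
    protected void finalize() { close(); }  // runs only after a collection
}

public class FinalizerRetention {
    public static void main(String[] args) {
        // With 100,000 live handles (the server-side scale mentioned
        // above), roughly 800 MB is reachable only through finalizable
        // objects until their finalizers get a chance to run.
        long retained = 100_000L * new BufferedHandle().retainedBytes();
        System.out.println(retained);  // 819200000
    }
}
```

This is why the number of finalizable objects can stay small while the
memory they hold hostage grows large.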

> As far as the patch is concerned, does your patch also work for you
> if you check for a geometric increase in the number of finalizable
> objects, e.g. that it grew by at least 10%?  I would find that a little
> easier to justify than the current patch.  An absolute number here
> doesn't seem very meaningful, since everything else grows with heap
> size.  In a huge heap, you'd be testing for a tiny growth.

I'm testing on a 32-bit system with heap sizes up to about 1 GB,
and the number seems adequate. Allowing, say, 10000 objects
to queue up may not be better than doing a collection, because
we start hitting swap. Anyway, yes, more tests are needed here,
especially with bigger heaps and 64-bit machines.
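
The two trigger policies under discussion can be sketched as follows
(the names and structure are mine, not libgc's internals): an absolute
cap on queued finalizable objects, versus the geometric check suggested
above, which scales with the count seen at the last collection.

```java
// Sketch of the two collection-trigger policies from the thread.
class FinalizerTrigger {
    static final int ABSOLUTE_CAP = 10_000;    // absolute number from the mail
    static final double GROWTH_FACTOR = 1.10;  // "grew by at least 10%"

    // Fires at a fixed queue size regardless of heap size: in a huge
    // heap this tests for what is proportionally a tiny growth.
    static boolean absoluteTrigger(int queued) {
        return queued > ABSOLUTE_CAP;
    }

    // Fires on geometric growth relative to the last collection, so the
    // threshold scales with everything else as the heap grows.
    static boolean geometricTrigger(int queued, int queuedAtLastGC) {
        return queued > queuedAtLastGC * GROWTH_FACTOR;
    }
}
```

The absolute cap has the swap-avoidance property mentioned above on a
1 GB heap, while the geometric check behaves more sensibly on much
larger 64-bit heaps; the two could also be combined.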

> A more robust fix might be to track the number of bytes marked from
> finalizable objects, and basically count those as though that many bytes
> were allocated.  If lots of bytes are marked this way, you are
> either making progress in running finalizers, or (in other environments)
> you have long finalizer chains and need frequent collections.
> But this seems hard if you want to account for pointer-free objects
> correctly.  I don't see how to do it without cloning GC_mark_from().

Yes, this would be the 'correct' thing to do, but even without
considering the code overhead, I don't know how much runtime
overhead this would have.
The next version of mono/CLR has an API call named something like
GC.AddMemoryPressure (int numbytes). This is supposed to report
a rough amount of unmanaged memory referenced by an object,
and it's meant to be used with objects that have finalizers.
Maybe we can hook into this to provide the GC with an estimate of
how much memory could be freed by running finalizers.
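
A rough sketch of how such a hook might feed the collector (the class,
method names, and threshold here are assumptions for illustration, not
mono's or the CLR's actual implementation): pressure reported alongside
finalizable objects is added to an accounting total, and a large total
can force a collection even when the managed heap itself is small.

```java
// Hypothetical accounting for externally reported memory pressure.
class PressureAccount {
    private long unmanagedBytes = 0;
    private final long collectThreshold;

    PressureAccount(long collectThreshold) {
        this.collectThreshold = collectThreshold;
    }

    // Called when a finalizable object starts referencing unmanaged memory.
    void addMemoryPressure(long bytes) { unmanagedBytes += bytes; }

    // Called from close()/finalization once the memory is released.
    void removeMemoryPressure(long bytes) { unmanagedBytes -= bytes; }

    // Estimate of memory a collection plus finalizer run could free;
    // the GC would consult this when deciding whether to collect early.
    boolean shouldCollect() { return unmanagedBytes >= collectThreshold; }
}
```

The appeal is that this sidesteps cloning GC_mark_from(): the runtime
gets an explicit per-object estimate instead of having to measure bytes
marked from finalizable objects during the trace.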

> There are some other issues with this scenario.  In particular, the marking
> from finalizable objects is currently always done sequentially, on the
> assumption that it doesn't take much time.  Once we all have dual core
> or larger processor chips, having most of the marking done here
> isn't going to be good either.

Well, marking from finalizable objects needs to be done anyway, and this
use case is more about having big buffers referenced by finalizable
objects than about having many of them.


lupus at debian.org                                     debian/rules
lupus at ximian.com                             Monkeys do it better
