[Gc] GC in "soft" real time

BGB cr88192 at hotmail.com
Fri Aug 27 10:18:39 PDT 2010

in some of my projects, a similar issue is faced:
most rendering code does not allocate memory, and the code that does 
typically doesn't use GC.
the little that is left (typically, secondary code responding to user 
interaction, ...) may still trigger collections every so often, and when the 
GC kicks in, it doesn't really matter "where".

possibly GC activity could be reduced by manually freeing memory.

possibly, for one-off data, something I have done (usually in my 
compiler/codegen code) will work:
having a temporary heap, which is often just a largish buffer (or sometimes 
a segmented array of buffers) with a sliding pointer, and allocation is done 
from this buffer.

as a general rule, allocation is fairly simple:
- check that the allocation will fit, (rover + size) <= end; if not, special 
handling is used, such as expanding the heap and/or adjusting the pointers 
for a new heap chunk, ...;
- fetch the current pointer (store it in a temp pointer);
- add the padded-up size (to keep alignment, usually 8 or 16 bytes) to the 
current pointer.
now one has an object (the temp pointer).

when none of this is needed anymore, the pointers can be reset, effectively 
destroying this heap (next time around, anything there before is simply 
overwritten).
granted, as a general rule, a GC will have no visibility of any pointers in 
this temporary heap, so care is needed if this is being mixed with GC'ed 
data.
mostly, my compiler code has done this for performance reasons, as 
traditional allocators (both GC and malloc) tended to bog down severely 
under these usage patterns (such as allocating 25-50MB or more, often in 
small objects, in a matter of milliseconds). switching to the 
above-mentioned strategy notably improved performance and overall 
reliability.

granted, any data to be preserved afterwards needs to be manually copied 
out, but this is not usually too much of an issue. also, any memory used by 
this system can't be shared (unless it frees the heap chunks when done, 
which creates its own performance hit).

or such...

----- Original Message ----- 
From: "Boehm, Hans" <hans.boehm at hp.com>
To: "Jim Hourihan" <jimhourihan at gmail.com>; <gc at linux.hpl.hp.com>
Sent: Thursday, August 26, 2010 5:08 PM
Subject: RE: [Gc] GC in "soft" real time

> If you're in a position to look around in your application with a 
> debugger, it may be worth looking at the size of the root set being 
> scanned by the collector, and whether it's easy to exclude (with 
> GC_exclude_static_roots) large regions of that that can't possibly contain 
> pointers to the garbage-collected heap.  GC_dump() will tell you about the 
> root sections, among other things.  Getting the extension language 
> implementation to make more use of GC_malloc_atomic may have a similar 
> effect, especially if it allocates large pointer-free objects.  With luck, 
> those might help you dodge the issue, if you haven't already tried that.
> The only concern I would have about implementing a pool outside the GC in 
> this case is that
> 1) It may not reduce GC times much, though it probably will greatly reduce 
> their frequency.
> 2) You have to think about whether or not it can contain pointers to the 
> garbage-collected heap, and hence should be traced.  If so, you may want 
> to avoid tracing from dead objects in the pool.  In any case, you should 
> think about where the pool memory comes from.
> I don't see a great argument for implementing this inside the GC, though 
> we may want to think about making it easier to scan only live objects 
> inside such a pool, if that's really an issue.  (I suspect the best way to 
> do that currently is to allocate the pool from the garbage collector as a 
> custom "kind", with a user-defined mark function.  This is probably harder 
> than it should be.)
> Hans
>> -----Original Message-----
>> From: gc-bounces at linux.hpl.hp.com [mailto:gc-bounces at linux.hpl.hp.com]
>> On Behalf Of Jim Hourihan
>> Sent: Thursday, August 26, 2010 10:03 AM
>> To: gc at linux.hpl.hp.com
>> Subject: [Gc] GC in "soft" real time
>> We've been using the GC in our cross-platform app for a few years now,
>> and it's been great. But in the process we've started using it *too*
>> much inside of a render loop (~60Hz at its fastest, usually 24). So
>> collection is taking longer than .03 seconds for us on Windows in
>> particular. On Mac and Linux it's still under the limit.
>> I had a thought about our particular use case, which may or may not be
>> directly applicable to the GC library itself.
>> Basically, we've observed that *all* of the memory allocated during the
>> critical rendering loop can be immediately reclaimed. No object
>> allocated during that time is retained.  The obvious solution is
>> allocate from a pool (e.g. objc autorelease) and free it all on return
>> from the rendering code. (I'm sure there is a term-of-art for this that
>> I don't know.)
>> So my question is: is this something that makes sense to do in the
>> context of the GC library (because of issues I don't know about) or
>> should this type of thing be implemented outside of it.
>> A little bit of context: the GC is being used by our extension language
>> which is making OpenGL calls. It's not being used for all memory
>> management in the app.
>> Thanks for any help.
>>     -Jim
>> _______________________________________________
>> Gc mailing list
>> Gc at linux.hpl.hp.com
>> https://www.hpl.hp.com/hosted/linux/mail-archives/gc/
