[Gc] possible memory performance issues

Boehm, Hans hans.boehm at hp.com
Mon Sep 27 13:39:50 PDT 2004


Could you describe what problems you are seeing in the default collector configuration?

You don't want to touch GC_allocobj.  It's an internal routine which doesn't really make policy decisions.  Some GC decisions are made further down when a new heap block is allocated.  This may fail with a non-full heap if it needs to split a larger block and (based on history) it believes that larger blocks may later be in demand.  If it fails, GC_collect_or_expand eventually decides whether the heap is full enough that it should be grown, or whether it should collect.  The easiest way to adjust this decision is by setting GC_free_space_divisor.
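For example, from application code (the value is purely illustrative; larger values trade more GC time for a smaller heap):

    #include "gc.h"

    int main(void)
    {
        /* Raising the divisor above its default makes the collector
           prefer collecting over growing the heap. */
        GC_free_space_divisor = 8;
        GC_INIT();
        /* ... allocate with GC_MALLOC as usual ... */
        return 0;
    }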

If you call GC_expand_hp, you are telling the collector that you want it to use a large heap.  If that's not what you meant to do, then let it grow the heap.
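For contrast, a minimal sketch of what such a call commits you to (the size is illustrative):

    #include "gc.h"

    int main(void)
    {
        GC_INIT();
        /* Ask for roughly 16 MB of heap up front.  The collector
           treats this as space you want it to use, so it will fill
           it before it feels much pressure to collect. */
        GC_expand_hp(16 * 1024 * 1024);
        /* ... */
        return 0;
    }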

It is true that unmapped pages count towards maximum heap size.  That's probably suboptimal.  I'm generally not very happy with the way USE_UNMAP works, and this is probably not its most serious problem.  (Keeping track of unmapped pages in the block header was almost certainly a mistake.  It interacts badly with coalescing of free blocks.  We need a separate data structure for unmapped pages.  If someone wants to look at this after 7.0alpha1 is out, that would be great.)
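To sketch the direction I mean (invented names, not a design):

    #include <stddef.h>

    /* Keep unmapped ranges in their own structure, outside the
       block headers, so that coalescing of free blocks never has
       to consult mapping state. */
    struct unmapped_range {
        char *start;                   /* first unmapped address */
        size_t bytes;                  /* length of the range */
        struct unmapped_range *next;   /* e.g. a sorted list */
    };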

Hans

-----Original Message-----
From: gc-bounces at napali.hpl.hp.com [mailto:gc-bounces at napali.hpl.hp.com] On Behalf Of Johannes P. Schmidt
Sent: Friday, September 24, 2004 7:05 PM
To: gc at napali.hpl.hp.com
Subject: [Gc] possible memory performance issues


(1) GC_allocobj

I am running with GCJ on an embedded MIPS processor with limited RAM.  The main problem I have is that if I use GC_allocobj as-is, my program invariably runs out of memory with little heap actually used.  If I change GC_allocobj to force a collect before an expand every time, I never run out of memory.  But, because of all the collects, things run very slowly.
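As a point of reference, the application-level equivalent of the policy I want looks something like this (hypothetical sketch; with the default out-of-memory handler, GC_MALLOC returns NULL when the heap is exhausted):

    #include <stddef.h>
    #include "gc.h"

    /* Collect and retry before giving up, instead of patching
       GC_allocobj itself. */
    void *alloc_collect_retry(size_t lb)
    {
        void *p = GC_MALLOC(lb);
        if (p == NULL) {
            GC_gcollect();          /* force a full collection */
            p = GC_MALLOC(lb);      /* then try once more */
        }
        return p;
    }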

One thing that seems worrisome about GC_allocobj is that, when incremental collection is not in use, the function appears to always try GC_new_hblk before GC_collect_or_expand.

This means that if the heap is initialized at a large size (for example with GC_expand_hp), all of this memory will be used before any collection takes place.  This can be bad if that allocation leaves uncollectible objects scattered across many blocks.

I am concerned that an excessive number of blocks will be allocated even when GC_expand_hp isn't used.

I wonder whether anybody has thought about a mechanism that keeps track of how many blocks of each size/kind have been allocated, and how many were released on the most recent GC, and uses this to better decide when to collect versus expand.
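To illustrate (hypothetical bookkeeping with invented names; nothing like this exists in the collector today):

    /* One record per (object size, object kind) pair. */
    struct blk_stats {
        unsigned long blocks_allocated;      /* blocks handed out since
                                                the last collection */
        unsigned long blocks_freed_last_gc;  /* blocks emptied by the
                                                most recent collection */
    };

    /* Policy sketch: if the last GC freed a reasonable fraction of
       the blocks of this size/kind, collect again before expanding;
       otherwise expand. */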

Note: an earlier attempt at fixing this problem was to allocate a new block whenever there were no free items, but also to force a GC.  The idea was that, over time, this would apportion the optimal number of blocks of each size/kind.  However, its performance was poor, mostly, I think, because blocks kept becoming entirely empty and returning to the heap (behavior I didn't really want to disable either).

(2) USE_MUNMAP

If I am interpreting this correctly, free blocks which aren't touched for a period of time will be unmapped, which will free up process memory space.  However, these unmapped regions still count as part of the heap size (I do use GC_set_max_heap_size).
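For reference, my configuration looks roughly like this (sizes illustrative):

    #include "gc.h"

    int main(void)
    {
        /* Cap the heap.  With USE_MUNMAP compiled in, idle blocks
           are eventually unmapped, but as described above they
           still count against this ceiling. */
        GC_set_max_heap_size(8 * 1024 * 1024);   /* 8 MB ceiling */
        GC_INIT();
        /* ... */
        return 0;
    }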

The problem is that this then doesn't seem to solve fragmentation problems caused by large blocks of memory (>HBLKSZ).  If unused blocks were simply deleted altogether, and GC_heapsize were decreased accordingly, then the OS could satisfy the next request at a different virtual address (if necessary), eliminating the fragmentation problem.
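To make the proposal concrete (purely hypothetical sketch; every name here is invented for illustration, and the stubs stand in for whatever the collector actually tracks):

    #include <stddef.h>
    #include <sys/mman.h>

    static size_t heap_size_accounted;  /* stand-in for GC_heapsize */

    static int block_idle_enough(void *blk) { (void)blk; return 1; }
    static void forget_heap_section(void *blk) { (void)blk; }

    static void release_idle_block(void *blk, size_t bytes)
    {
        if (block_idle_enough(blk)) {
            munmap(blk, bytes);            /* return the pages to the OS */
            heap_size_accounted -= bytes;  /* stop counting them toward
                                              the maximum heap size */
            forget_heap_section(blk);      /* drop the block from the
                                              heap tables */
        }
    }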

My first question is: do I misunderstand how the code actually works?  Second, what do you think about this type of change?  Would it be too slow?

Johannes Schmidt
jschmidt at avtrex dot com

