[Gc] possible memory performance issues

Johannes P. Schmidt jschmidt at avtrex.com
Fri Sep 24 19:04:42 PDT 2004


(1) GC_allocobj

I am running GCJ on an embedded MIPS processor with limited RAM.
The main problem I have is that if I use GC_allocobj as-is, my program
invariably runs out of memory with little of the heap actually in use.
If I change GC_allocobj to force a collection before every expansion,
I never run out of memory, but because of all the collections, things
run very slowly.
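
Here is roughly the collect-before-expand idea expressed against the
public API (a sketch only: maybe_collect_first and the 1/8 threshold
are purely illustrative, not what my actual GC_allocobj change looks
like).  Raising the documented GC_free_space_divisor knob pushes the
collector in the same direction.

#include <gc.h>

/* Collect when free space drops below 1/8 of the heap, instead of
 * letting the allocator grow the heap first.  The 1/8 threshold is
 * arbitrary. */
static void maybe_collect_first(void)
{
    size_t heap  = GC_get_heap_size();
    size_t avail = GC_get_free_bytes();

    if (avail < heap / 8)
        GC_gcollect();
}

void *alloc_collect_first(size_t n)
{
    maybe_collect_first();
    return GC_malloc(n);
}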

One thing that seems worrisome about GC_allocobj is that, when not
using incremental collection, this function appears to always try
GC_new_hblk before GC_collect_or_expand.

This means that if the heap is initialized at a large size (for example
with GC_expand_hp), all of this memory will be used before any
collection takes place.  This can be bad if the allocation leaves
uncollectible objects scattered across many blocks.
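
To make that concrete, I would expect a test along these lines to show
no collections at all until the pre-expanded heap is exhausted (a
sketch; the sizes are arbitrary, and GC_gc_no is just the collector's
public collection counter):

#include <stdio.h>
#include <gc.h>

int main(void)
{
    int i;

    GC_INIT();
    GC_expand_hp(8 * 1024 * 1024);   /* pre-size the heap to 8 MB */

    /* Allocate a few MB of garbage.  With the large initial heap,
     * GC_allocobj keeps taking fresh blocks via GC_new_hblk, so no
     * collection happens while this loop runs. */
    for (i = 0; i < 100000; i++)
        GC_malloc(32);

    printf("collections: %lu\n", (unsigned long)GC_gc_no);
    printf("heap size:   %lu\n", (unsigned long)GC_get_heap_size());
    return 0;
}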

I am concerned that an excessive number of blocks will be allocated
even when GC_expand_hp isn't used.

I wonder whether anybody has thought about a mechanism that keeps track
of how many blocks of each size/kind have been allocated, and how many
of them were released by the most recent GC, and uses this to better
decide when to collect versus when to expand.
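
Roughly the kind of bookkeeping I have in mind (purely hypothetical:
none of these names exist in the collector, and the 50% reclaim
threshold is just a placeholder):

/* Hypothetical per-(size, kind) statistics -- not real GC internals. */
struct blk_stats {
    unsigned long blocks_allocated;  /* blocks in use for this size/kind */
    unsigned long blocks_reclaimed;  /* blocks freed by the last GC      */
};

/* If the last GC reclaimed a decent fraction of this size/kind's
 * blocks, another collection is likely to produce free items, so
 * collect; otherwise expanding the heap is probably cheaper. */
static int should_collect(const struct blk_stats *s)
{
    if (s->blocks_allocated == 0)
        return 0;                    /* nothing to go on: expand */
    return s->blocks_reclaimed * 2 >= s->blocks_allocated;
}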

Note that an earlier attempt at fixing this problem was: if there are
no free items, allocate a new block, but also force a GC.  The idea was
that over time this would apportion the optimal number of blocks of
each size/kind.  However, its performance was poor, mostly I think
because blocks kept becoming entirely empty and returning to the heap
(which I didn't really want to disable either).
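
In C-style pseudocode, that earlier heuristic was roughly the following
(freelist_empty, allocate_new_block and take_from_freelist are
hypothetical stand-ins for the corresponding GC internals):

#include <stddef.h>
#include <gc.h>

int   freelist_empty(size_t sz, int kind);
void  allocate_new_block(size_t sz, int kind);
void *take_from_freelist(size_t sz, int kind);

void *alloc_obj(size_t sz, int kind)
{
    if (freelist_empty(sz, kind)) {
        allocate_new_block(sz, kind);  /* always grab a fresh block...   */
        GC_gcollect();                 /* ...but also force a collection */
    }
    return take_from_freelist(sz, kind);
}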

(2) USE_MUNMAP

If I am interpreting this correctly, free blocks that aren't touched
for a period of time get unmapped, which frees up process memory.
However, these unmapped regions still count as part of the heap size
(and I do use GC_set_max_heap_size).
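
For example, with the collector built with -DUSE_MUNMAP, I would expect
something like the following to print the same heap size twice, even
after the big block's pages have been returned to the OS (a sketch;
the sizes and the number of forced collections are arbitrary):

#include <stdio.h>
#include <gc.h>

int main(void)
{
    void *big;
    int i;

    GC_INIT();

    big = GC_malloc_atomic(4 * 1024 * 1024);  /* one large block */
    printf("heap after alloc:    %lu\n", (unsigned long)GC_get_heap_size());

    big = NULL;                  /* drop the only reference */
    for (i = 0; i < 10; i++)
        GC_gcollect();           /* let the block age enough to be unmapped */

    /* The pages may be gone from the process by now, but
     * GC_get_heap_size() -- and the GC_set_max_heap_size limit --
     * still count them. */
    printf("heap after collects: %lu\n", (unsigned long)GC_get_heap_size());
    return 0;
}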

The problem is that this then doesn't seem to solve fragmentation
problems caused by large blocks of memory (> HBLKSIZE).  If unused
blocks were simply deleted altogether, and the current GC_heapsize
decreased accordingly, then the OS could satisfy the next request at a
different virtual address (if necessary), eliminating the fragmentation
problem.
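
At the level of a sketch, the change I am imagining looks like this
(struct hblk and GC_heapsize are real collector internals;
remove_from_free_structures is a hypothetical stand-in for whatever
bookkeeping the real change would need):

#include <stddef.h>
#include <sys/mman.h>

struct hblk;                         /* opaque here; defined in gc_priv.h */
extern size_t GC_heapsize;           /* internal heap-size counter
                                        (a GC_word in the real headers) */
void remove_from_free_structures(struct hblk *h);

static void release_stale_block(struct hblk *h, size_t bytes)
{
    remove_from_free_structures(h);  /* forget the block entirely   */
    munmap((void *)h, bytes);        /* return the range to the OS  */
    GC_heapsize -= bytes;            /* stop counting it against    */
                                     /* GC_set_max_heap_size        */
}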

My first question is: do I misunderstand how the code actually works?
Second, what do you think about this type of change?  Too slow?

Johannes Schmidt

jschmidt at avtrex dot com
