[Gc] PRINT_STATS kind, and free_space_divisor

Boehm, Hans hans.boehm at hp.com
Mon Nov 22 15:52:37 PST 2004


Thanks.

Here's a completely untested patch against CVS gcc/boehm-gc,
though it would probably apply anywhere:

--- dyn_load.c	2004-08-13 16:05:29.000000000 -0700
+++ dyn_load.c.new	2004-11-22 15:38:50.768581426 -0800
@@ -856,7 +856,7 @@
 		&& (protect == PAGE_EXECUTE_READWRITE
 		    || protect == PAGE_READWRITE)
 		&& !GC_is_heap_base(buf.AllocationBase)
-		&& !is_frame_buffer(p, buf.RegionSize)) {  
+		&& buf.Type != MEM_MAPPED) {  
 #	        ifdef DEBUG_VIRTUALQUERY
 	          GC_dump_meminfo(&buf);
 #	        endif

(This should also take care of the frame buffer issues, since mapped
frame buffers fall under MEM_MAPPED; hence the is_frame_buffer test
goes away.  That leaves a fair amount of dead code, which I haven't
worried about yet.)
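
For context, here is roughly the shape of the loop around that test, as
a standalone, untested sketch (this is not the actual dyn_load.c code,
and it omits the GC_is_heap_base exclusion); it prints the committed
read/write regions that would still be registered as roots once
MEM_MAPPED regions are skipped:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORY_BASIC_INFORMATION buf;
    char *p = 0;

    /* Walk the address space one region at a time. */
    while (VirtualQuery(p, &buf, sizeof(buf)) == sizeof(buf)) {
        if (buf.State == MEM_COMMIT
            && (buf.Protect == PAGE_READWRITE
                || buf.Protect == PAGE_EXECUTE_READWRITE)
            /* The new test: skip mapped files, frame buffers, etc. */
            && buf.Type != MEM_MAPPED)
            printf("root candidate: base %p, %lu bytes\n",
                   buf.BaseAddress, (unsigned long)buf.RegionSize);
        p = (char *)buf.BaseAddress + buf.RegionSize;
        if (p < (char *)buf.BaseAddress)
            break;              /* wrapped around the address space */
    }
    return 0;
}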

If I'm right, based on your log, this should cut the root size
down to less than half, which will allow the collector to
collect more frequently, and thus reduce heap size for a given
GC_free_space_divisor.  It may have other positive effects.
(Or it may completely break gcj on MS Windows.)
I would try this with the default GC_free_space_divisor first, and
see how it goes.

If you want to adjust GC_free_space_divisor dynamically from gcj,
you're right that you would need some native code.
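
Something like the following would do.  This is an untested sketch: the
Java class and method names are hypothetical, and with gcj you could use
CNI instead of JNI.  GC_free_space_divisor is an exported global
declared in gc.h:

/* gctuning.c: JNI wrapper for the collector's GC_free_space_divisor.
   The matching (hypothetical) Java declaration would be:
       class GCTuning { static native void setDivisor(int d); }  */
#include <jni.h>
#include "gc.h"       /* declares GC_API GC_word GC_free_space_divisor */

JNIEXPORT void JNICALL
Java_GCTuning_setDivisor(JNIEnv *env, jclass cls, jint d)
{
    (void)env; (void)cls;     /* unused */
    if (d > 0)
        GC_free_space_divisor = (GC_word)d;
}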

Hans

> -----Original Message-----
> From: Rutger Ovidius [mailto:r_ovidius at eml.cc]
> Sent: Monday, November 22, 2004 2:54 PM
> To: Boehm, Hans
> Cc: gc at napali.hpl.hp.com; java at gcc.gnu.org
> Subject: Re: [Gc] PRINT_STATS kind, and free_space_divisor
> 
> 
> Monday, November 22, 2004, 10:37:10 AM, you wrote:
> 
> BH> A large free_space_divisor setting will have the effect you describe.
> BH> The gcj root size really needs to get fixed.  You might try adjusting
> BH> it dynamically, if you know that you will be going through a phase
> BH> during which very little memory is dropped.
> 
> I can't really do this from standard Java, AFAIK (without patching and
> making gcj-specific calls, which I'm trying to avoid).
> 
> BH> (If you want to understand more about where the 15MB of roots are
> BH> coming from, check whether your version of the GC checks
> BH> DEBUG_VIRTUALQUERY in dyn_load.c.  If so, you can turn that on,
> BH> which will produce tons of output.  It looks like the collector
> BH> might be able to ignore segments of type MEM_MAPPED(0x40000).
> BH> If you see a large number of those, it's easy to patch the
> BH> collector to get rid of them.  That will either improve matters,
> BH> or cause a crash.  I would guess it will improve matters, but it
> BH> would be great to find out either way.  Unless I hear complaints,
> BH> gc7.0alpha2 and later will start to ignore MEM_MAPPED segments.)
> 
> The full gc.log (gc1.zip, 47k):
> http://tinyurl.com/4n3qe
> 
> Here is what I see for Type=40000:
> 
> BaseAddress = 360000, AllocationBase = 360000, RegionSize = 1000(4096), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 1790000, AllocationBase = 1790000, RegionSize = 1ea000(2007040), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 1980000, AllocationBase = 1980000, RegionSize = 1000(4096), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 1990000, AllocationBase = 1990000, RegionSize = 10000(65536), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 2500000, AllocationBase = 2500000, RegionSize = 10000(65536), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 2550000, AllocationBase = 2550000, RegionSize = 5000(20480), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 30d0000, AllocationBase = 30d0000, RegionSize = 4000(16384), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 3150000, AllocationBase = 3150000, RegionSize = 1000(4096), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 3160000, AllocationBase = 3160000, RegionSize = 8000(32768), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 3230000, AllocationBase = 3230000, RegionSize = 6000(24576), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 3630000, AllocationBase = 3630000, RegionSize = 3e4000(4079616), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 3a20000, AllocationBase = 3a20000, RegionSize = 2d8000(2981888), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 4410000, AllocationBase = 4410000, RegionSize = 1000(4096), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 5ff0000, AllocationBase = 5ff0000, RegionSize = 2000(8192), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 6260000, AllocationBase = 6260000, RegionSize = 14000(81920), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> BaseAddress = 6380000, AllocationBase = 6380000, RegionSize = 4000(16384), AllocationProtect = 4, State = 1000, Protect = 4, Type = 40000
> 
> 
> Whereabouts in the source is the right place to make the GC ignore
> them?  I'm using a relatively recent CVS head version of gcc.
> 
> This will save GC time/CPU by not having to scan these roots, correct?
> It won't have any (dramatic) effect on memory use?
> 
> BH> The rest of this is gcj-specific:
> 
> BH> Kind=6 probably refers to Java arrays with more than 16K elements.
> BH> Kinds can be created dynamically by the client (through an interface
> BH> that's slowly getting less obscure).  This gives Java a way to
> BH> register its own tracing procedure for such arrays, which skips the
> BH> initial size field.  (For smaller arrays, we cheat, since we assume
> BH> that such sizes can't look like addresses anyway.  Thus the default
> BH> conservative scan costs nothing and is faster.)  See _Jv_AllocArray
> BH> in gcc/boehm.cc.
> 
> Thanks, found it.
> 
> Regards,
> Rutger Ovidius
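
P.S.  For anyone curious about the kind interface mentioned above:
registering a kind with its own mark procedure looks roughly like the
following.  This is an untested sketch against the exported gc_mark.h
interface; the big_array layout here is an invented stand-in, not gcj's
actual array layout.

#include "gc.h"
#include "gc_mark.h"    /* exported mark-procedure interface */

/* Invented layout: a length word followed by 'length' object pointers. */
struct big_array {
    GC_word length;
    void *elements[1];
};

/* Custom tracing procedure: push each element, but never treat the
   length word as a pointer, so a huge element count can't be misread
   as a heap address.  (env is unused here.) */
static struct GC_ms_entry *
array_mark_proc(GC_word *addr, struct GC_ms_entry *mark_stack_ptr,
                struct GC_ms_entry *mark_stack_limit, GC_word env)
{
    struct big_array *a = (struct big_array *)addr;
    GC_word i;

    for (i = 0; i < a->length; ++i)
        mark_stack_ptr = GC_MARK_AND_PUSH(a->elements[i], mark_stack_ptr,
                                          mark_stack_limit,
                                          (void **)&(a->elements[i]));
    return mark_stack_ptr;
}

static int array_kind;

void init_array_kind(void)
{
    int proc = GC_new_proc(array_mark_proc);
    array_kind = GC_new_kind(GC_new_free_list(), GC_MAKE_PROC(proc, 0),
                             0 /* don't add size to descriptor */,
                             1 /* clear new objects */);
}

/* Arrays of this kind would then be allocated with
   GC_generic_malloc(size, array_kind).  */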

