[Gc] Segmentation fault on out-of-memory
hans.boehm at hp.com
Mon Oct 15 11:49:15 PDT 2007
> -----Original Message-----
> From: John Bowman
> In Asymptote (asymptote.sourceforge.net), instead of getting
> an out-of-memory error when memory is exhausted I
> occasionally get a segmentation fault from gc-7.0 and also
> from the latest cvs version.
> This doesn't seem to happen with gc6.8, however (I'm using gcc-4.1.2):
> Program received signal SIGSEGV, Segmentation fault.
> 0x082b34eb in GC_mark_from (mark_stack_top=0x994e580,
> mark_stack_limit=0x9956010) at mark.c:834
> 834 PUSH_CONTENTS((ptr_t)current, mark_stack_top,
> Expanding the preprocessor macro, it appears that the error
> occurs here:
> size_t gran_offset = my_hhdr -> hb_map[gran_displ];
> gdb reports the value of gran_displ as 511
> Is it possible that this is a bug in gc-7.0?
However, it's also possible that this isn't an outright bug, but that
running out of memory somehow makes a stack overflow in the collector
more likely. As far as I can tell, out-of-memory handling on *nix
systems (and probably others as well) is not 100% reliable. If the
OS-level memory allocation actually fails, you can recover. If the
OS-level memory allocation succeeds but leaves you nearly out of memory,
and it is then a stack expansion, or possibly a store to a previously
untouched page, that actually runs you out, you generally lose.
In either case, it would be good to understand exactly what's happening
here, to see if the collector could do better.
Can you determine, e.g. by looking at the executing instruction and
register contents, what address it's trying to access? Does my_hhdr
contain a valid address?
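One way to gather that information from the gdb session already shown
above (a sketch; exact symbol availability depends on how gc was built):

```
(gdb) frame 0
(gdb) x/i $pc            # the faulting instruction
(gdb) info registers     # register contents at the fault
(gdb) print my_hhdr      # is the header pointer itself plausible?
(gdb) print *my_hhdr     # if so, does the header look valid?
(gdb) print my_hhdr->hb_map
```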
> Thanks for developing this very useful garbage collector.
> -- John Bowman
> University of Alberta
> Gc mailing list
> Gc at linux.hpl.hp.com