[Gc] GC_allochblk_nth bug

Roger Osborn roger.osborn at linguamatics.com
Mon Mar 3 10:06:38 PST 2008


Lothar Scholz wrote:
> Hello Roger,
>   
Hi,
> RO> (especially that code which is in third party libraries). We have found
> RO> (by dumping Windows heaps) that GC often can't get a large chunk (up to
> RO> 400-500Mb) of the address space because one of the heaps has reserved 
> RO> (but not committed) that region of the address space already. This can
>
> Are you trying to do this on a 32bit Windows?
>   
Yes, we're constrained to do that at the moment (though we're using 64 
bit Unix and will hopefully soon be able to move to 64 bit Windows).
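
(On the heap-dumping point above: one way to see which regions of the
process address space are reserved but not committed is simply to walk
it with VirtualQuery.  The following is only a rough sketch, not our
actual tool, and error handling is omitted.)

#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *p = 0;

    /* Walk every region of the user address space in order. */
    while (VirtualQuery(p, &mbi, sizeof mbi) == sizeof mbi) {
        const char *state = mbi.State == MEM_FREE    ? "free"
                          : mbi.State == MEM_RESERVE ? "reserved"
                          :                            "committed";
        printf("%p  %10lu KB  %s\n", mbi.BaseAddress,
               (unsigned long)(mbi.RegionSize / 1024), state);

        /* Advance to the first byte after this region; stop if the
           pointer wraps around the top of the address space. */
        p = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
        if (p < (unsigned char *)mbi.BaseAddress)
            break;
    }
    return 0;
}
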
> I remember Hans' answer to this in the past was: if your memory
> usage is so high, you should move to 64 bit. And I agree with it; after
> running a few stress tests with highly backlinked data I have set my
> limit to 512 MB. If my program needs that much then I will move
> over to 64 bit memory.
>
> If every library is using the msvcrt.dll you might be able to use API
> hooking to redirect every malloc to a gc_malloc call, but I'm not sure
> if this is such a good idea. Remember that you might get lots of
> false positives, making everything worse.
>   
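
(For reference, the cheap compile-time version of that redirect, which
only covers code we build ourselves, looks roughly like the sketch
below.  The collector can also be built with REDIRECT_MALLOC=GC_malloc
to intercept malloc globally, which is closer to real API hooking.
Purely illustrative.)

#include <stdio.h>
#include <string.h>
#include <gc.h>

/* Route the standard allocator names to the collector for code we
   compile ourselves; third-party binaries are unaffected. */
#define malloc(n)     GC_MALLOC(n)
#define realloc(p, n) GC_REALLOC(p, n)
#define free(p)       GC_FREE(p)

int main(void)
{
    char *s;

    GC_INIT();                 /* initialise the collector first */
    s = malloc(32);            /* expands to GC_MALLOC(32) */
    strcpy(s, "collected");
    puts(s);
    free(s);                   /* optional once the GC owns the block */
    return 0;
}
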
I share your reservations.  For our particular problem, being able to
impose a maximum size on Windows heaps might be a better way to tide us
over until we can switch to 64 bit: it is the address space a heap has
reserved, rather than the memory it has actually committed, that is
causing the trouble.  The extra memory made available to GC could then
at least be allocated with GC_MALLOC_ATOMIC.  I haven't yet found a way
of limiting the address space a heap reserves, nor have I worked out
what in our pattern of ordinary malloc allocations prompts Windows to
reserve such a large chunk of address space.
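
(For heaps created in our own code there is at least HeapCreate's
dwMaximumSize argument, which caps the heap so it never reserves more
address space than that; it does nothing for heaps created inside
third-party DLLs or the CRT.  A minimal, illustrative sketch:)

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* A non-zero dwMaximumSize (here 16 MB) makes the heap
       non-growable, so it cannot reserve a larger slice of the
       address space later. */
    HANDLE heap = HeapCreate(0, 0, 16 * 1024 * 1024);
    void *block;

    if (heap == NULL) {
        fprintf(stderr, "HeapCreate failed: %lu\n", GetLastError());
        return 1;
    }

    block = HeapAlloc(heap, HEAP_ZERO_MEMORY, 4096);
    printf("allocated %p from the capped heap\n", block);

    HeapFree(heap, 0, block);
    HeapDestroy(heap);
    return 0;
}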


Thanks for your comments and
Best regards,

Roger



