[Gc] Benchmarks for 2 GB process
bruce at hoult.org
Tue Dec 18 16:05:28 PST 2012
It's important to distinguish between:
- 2 GB of process size that is mostly pointer-free data such as
images, sounds, or arrays of scientific data (which should be allocated
using GC_malloc_atomic), plus a much smaller amount of memory that actually
needs to be scanned in the mark phase of a GC
- 2 GB of process size that consists of pointer-full objects that all need
to be scanned.
The latter would be quite unusual, though it is of course possible, and
will be slow (but that might be unavoidable). The former should perform
well, since only a small fraction of the heap actually gets scanned.
Either way, if you have 2 GB in the GC heap then you should be using 64-bit
code; otherwise you'll start to have big problems with random integers
looking like pointers into the heap, and a lot of retention of memory that
should have been freed.
Just random thoughts that don't really bear on your question as such...
On Wed, Dec 19, 2012 at 3:51 AM, Brian Beuning <bbeuning at corecard.com> wrote:
> I Googled for "Boehm GC benchmarks" and read some of the results.
> They all seem to be for smallish (30 to 200 MB) process sizes.
> Are there any GC benchmark results for process sizes around 2 GB?
> Note - I have worked with Boehm GC before. About 15 years ago
> we converted it to allocate from shared memory.
> Brian Beuning
> Gc mailing list
> Gc at linux.hpl.hp.com