[Gc] Parallel mark and thread local alloc

Boehm, Hans hans.boehm at hp.com
Fri Jan 13 16:56:27 PST 2006


It should mostly be there, but my SPARC test machine is currently in
need of resurrection.

The parallel marker may not quite be usable, since the (included)
atomic_ops package needs a SPARC V9 port.  It really wants
compare-and-swap, which V9 provides, but IIRC the current version only
understands V8.
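
For reference, the primitive in question has roughly this shape.  This
is only a sketch against the atomic_ops API, not code from the
collector, and which AO_compare_and_swap variants a given platform
supplies depends on the port:

    /* Atomically increment *counter via a compare-and-swap retry
     * loop, using the libatomic_ops API.  On SPARC this has to
     * compile down to the V9 cas/casx instruction; V8 has no
     * equivalent, which is why a V9 port is needed. */
    #include "atomic_ops.h"

    static void atomic_incr(volatile AO_t *counter)
    {
        AO_t old;
        do {
            old = *counter;
        } while (!AO_compare_and_swap(counter, old, old + 1));
    }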

For 7.0alpha, I would generally use the current CVS tree.  Instructions
are on the GC web page.

Hans

> -----Original Message-----
> From: gc-bounces at napali.hpl.hp.com 
> [mailto:gc-bounces at napali.hpl.hp.com] On Behalf Of Emmanuel Stapf [ES]
> Sent: Friday, January 13, 2006 10:59 AM
> To: Boehm, Hans
> Cc: gc at napali.hpl.hp.com
> Subject: RE: [Gc] Parallel mark and thread local alloc
> 
> 
> Hi Hans,
> 
> Is this now included in the latest 7.0 snapshot?
> 
> Regards,
> Manu 
> 
> > -----Original Message-----
> > From: gc-bounces at napali.hpl.hp.com
> > [mailto:gc-bounces at napali.hpl.hp.com] On Behalf Of Hans Boehm
> > Sent: Thursday, September 16, 2004 4:09 PM
> > To: Emmanuel Stapf [ES]
> > Cc: gc at napali.hpl.hp.com
> > Subject: Re: [Gc] Parallel mark and thread local alloc
> > 
> > That's not currently possible, since we don't use the generic
> > pthreads code on Solaris.  I'm planning to force this to get fixed
> > in 7.0 by making Solaris use the standard code.
> > 
> > Hans
> > 
> > On Tue, 14 Sep 2004, Emmanuel Stapf [ES] wrote:
> > 
> > > Hi,
> > >
> > > I'd like to use the PARALLEL_MARK and THREAD_LOCAL_ALLOC features
> > > with Solaris 8, but when I add those options the build of gc.a fails
> > > with the following link errors:
> > >
> > > Undefined                       first referenced
> > >  symbol                             in file
> > > GC_local_malloc_atomic              tests/test.o
> > > GC_atomic_add                       gc.a(mallocx.o)
> > > GC_compare_and_exchange             gc.a(alloc.o)
> > > GC_memory_barrier                   gc.a(mark.o)
> > > GC_local_malloc                     tests/test.o
> > > GC_acquire_mark_lock                gc.a(mark.o)
> > > GC_release_mark_lock                gc.a(mark.o)
> > > GC_wait_for_reclaim                 gc.a(alloc.o)
> > > GC_mark_thread_local_free_lists     gc.a(mark_rts.o)
> > > GC_wait_marker                      gc.a(mark.o)
> > > GC_notify_all_marker                gc.a(mark.o)
> > > GC_init_parallel                    gc.a(misc.o)
> > > GC_notify_all_builder               gc.a(mallocx.o)
> > >
> > > The command line options are:
> > >
> > > -I./include -DATOMIC_UNCOLLECTABLE -DNO_SIGNALS
> > > -DNO_EXECUTE_PERMISSION -DSILENT -D_REENTRANT -DTHREAD_LOCAL_ALLOC
> > > -DGC_SOLARIS_THREADS -DPARALLEL_MARK -DGC_SOLARIS_PTHREADS
> > >
> > > Am I missing something?
> > >
> > > Also, it says somewhere that if one wants to use PARALLEL_MARK, one
> > > has to include `gc.h' before calling any routine of the thread
> > > library. Is this still the case? And a final question: it looks like
> > > using PARALLEL_MARK on Solaris requires the pthread implementation
> > > rather than the native thread library; is that correct?
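
The `gc.h' requirement referred to above works by macro redirection:
with the right feature macro defined before gc.h is included, gc.h
intercepts the pthread creation calls so the collector learns about
new threads.  A minimal sketch, using the GC_THREADS spelling rather
than the Solaris-specific macros of that era:

    /* Define the threads macro before gc.h so that gc.h can redirect
     * pthread_create() to GC_pthread_create(); the collector then
     * registers the new thread's stack as a root. */
    #define GC_THREADS
    #include "gc.h"          /* must be seen before pthread calls */
    #include <pthread.h>

    static void *worker(void *arg)
    {
        (void)arg;
        return GC_MALLOC(64);    /* allocate from a registered thread */
    }

    int main(void)
    {
        pthread_t t;
        GC_INIT();
        pthread_create(&t, NULL, worker, NULL);  /* redirected by gc.h */
        pthread_join(&t, NULL);
        return 0;
    }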
> > >
> > > Thanks,
> > > Manu
> > >
> > >
> > > ------------------------------------------------------------------------
> > > Eiffel Software
> > > 805-685-1006
> > > http://www.eiffel.com
> > > Customer support: http://support.eiffel.com
> > > Product information: mailto:info at eiffel.com
> > > User group: http://groups.eiffel.com/join
> > > ------------------------------------------------------------------------
> > >
> 
> _______________________________________________
> Gc mailing list
> Gc at linux.hpl.hp.com 
> http://www.hpl.hp.com/hosted/linux/mail-archives/gc/
> 


