[Gc] powerpc64 problems

Boehm, Hans hans.boehm at hp.com
Tue Nov 22 14:30:28 PST 2005


That's not so strange.  I expect another thread died in GC_test_and_set.
Various thread implementations under Linux occasionally fail to notice
dead threads.

Looking at the GC_test_and_set implementation for POWERPC, starting
around line 157 in include/private/gc_locks.h, it looks to me like the
64 bit implementation is bogus.  The argument is always a pointer to a
4 byte quantity, so I think the 32 bit code should work fine(?).  Could
you try just deleting the 64 bit version and getting rid of the ifdef?
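
Roughly, the 32 bit version looks like the sketch below (my
reconstruction, not the exact gc_locks.h code): lwarx/stwcx. operate on
a 4 byte word, which matches the lock's actual size, while the 64 bit
path uses ldarx/stdcx., and ldarx also requires an 8 byte aligned
operand, which would explain the SIGBUS at addr=0x1006b314 in your
backtrace.

/* Sketch of a 32 bit PowerPC test-and-set in the style of the
 * gc_locks.h version; barriers and constraints may differ from the
 * real code.  Returns 0 iff the lock was acquired. */
static inline int GC_test_and_set_sketch(volatile unsigned int *addr)
{
    int oldval;
    int temp = 1;                      /* locked value               */

    __asm__ __volatile__(
        "1:\tlwarx %0,0,%2\n"          /* load word and reserve      */
        "\tcmpwi %0,0\n"               /* already locked?            */
        "\tbne 2f\n"                   /* yes: return the old value  */
        "\tstwcx. %1,0,%2\n"           /* try to store locked value  */
        "\tbne- 1b\n"                  /* lost reservation: retry    */
        "\tisync\n"                    /* acquire barrier            */
        "2:"
        : "=&r"(oldval)
        : "r"(temp), "r"(addr)
        : "cr0", "memory");
    return oldval;
}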

Using pthread locks should also be fine.  Originally the GC used its own
locks, since many of the standard pthread_mutex implementations handed
off the mutex to another thread if the mutex was ever unlocked with
waiters.  It turns out that has disastrous performance implications if
there is any contention, even on a uniprocessor.  And I think that some
old implementations also neglected to spin first on a multiprocessor,
with similar effect.  But I think that NPTL fixes both of those, and
linuxthreads may also have started to behave better a while ago.

The GC lock implementation has the advantage that it doesn't need an
(often expensive) atomic operation on lock exit, while most
pthread_mutex_unlock implementations do.  And it can be partially
inlined.  But it has the probably bigger disadvantage that it doesn't
maintain a queue, and hence all it can do on contention is first spin,
and then yield and finally sleep.  Unfortunately, the sleep case arises
fairly frequently, and Linux often can't sleep for less than 20 msecs,
which is painful.
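
Schematically, acquiring the lock under contention looks something like
this (a sketch with made-up constants, not the GC's actual code):

#include <sched.h>
#include <time.h>

#define SPIN_MAX  10                   /* made-up spin count         */
#define YIELD_MAX 10                   /* made-up yield count        */

extern int GC_test_and_set(volatile unsigned int *addr);

static void acquire_lock_sketch(volatile unsigned int *lock)
{
    int i;

    /* 1. Spin: cheap if the holder releases the lock soon. */
    for (i = 0; i < SPIN_MAX; i++)
        if (!GC_test_and_set(lock)) return;

    /* 2. Yield the processor and retry. */
    for (i = 0; i < YIELD_MAX; i++) {
        if (!GC_test_and_set(lock)) return;
        sched_yield();
    }

    /* 3. Sleep between retries; Linux rounds short sleeps up,
     *    often to around 20 msecs, which is where the pain is. */
    for (;;) {
        struct timespec ts = { 0, 1000000 };   /* ask for 1 msec */
        if (!GC_test_and_set(lock)) return;
        nanosleep(&ts, NULL);
    }
}

Releasing such a lock is essentially just a barrier and a store of
zero, which is where the cheap exit mentioned above comes from.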

Hans

> -----Original Message-----
> From: Christian Thalinger [mailto:twisti at complang.tuwien.ac.at] 
> Sent: Tuesday, November 22, 2005 1:15 PM
> To: Boehm, Hans
> Cc: gc-ml
> Subject: RE: [Gc] powerpc64 problems
> 
> 
> On Tue, 2005-11-22 at 12:39 -0800, Boehm, Hans wrote:
> > It looks like we're either seeing a deadlock, or another thread is 
> > hung. Could you check whether in this mode, it is consuming 
> > significant amounts of CPU time?
> 
> It's using 0% of cpu time.
> 
> > 
> > Could you also post stack backtraces for all the threads?  It would
> > be good to know who is holding the allocation lock, and what that
> > thread is doing.  The stack trace you posted only indicates that
> > some thread is waiting for the allocation lock.
> 
> Hmm, strangely enough, there is only one thread:
> 
> (gdb) info threads
> * 1 Thread 549757910752 (LWP 7065)  0x0000008000043cc0 in .__nanosleep_nocancel ()
>   from /lib/tls/libpthread.so.0
> (gdb) 
> 
> > 
> > It may be that the GC_test_and_set implementation is broken on
> > PowerPC64.  You might try building with USE_PTHREAD_LOCKS, and see
> > if that works.  (This may be better with NPTL threads anyway.)
> 
> It's interesting that you mention this function, as it crashes when
> linked statically into CACAO (I should have mentioned this in my
> first mail):
> 
> Program received signal SIGBUS, Bus error.
> [Switching to Thread 549758075648 (LWP 18022)]
> 0x00000000100322f8 in GC_test_and_set (addr=0x1006b314) at gc_locks.h:163
> 163               __asm__ __volatile__(
> (gdb) bt
> #0  0x00000000100322f8 in GC_test_and_set (addr=0x1006b314) at gc_locks.h:163
> #1  0x0000000010033460 in GC_expand_hp (bytes=102400) at alloc.c:996
> #2  0x0000000010005ddc in gc_init (heapmaxsize=2097152, heapstartsize=102400) at boehm.c:110
> #3  0x00000000100035d4 in main (argc=6, argv=0x1ffffa5b448) at cacaoh.c:273
> (gdb) disas
> Dump of assembler code for function GC_test_and_set:
> 0x00000000100322d8 <GC_test_and_set+0>: std     r31,-8(r1)
> 0x00000000100322dc <GC_test_and_set+4>: stdu    r1,-80(r1)
> 0x00000000100322e0 <GC_test_and_set+8>: mr      r31,r1
> 0x00000000100322e4 <GC_test_and_set+12>:        std     r3,128(r31)
> 0x00000000100322e8 <GC_test_and_set+16>:        li      r0,1
> 0x00000000100322ec <GC_test_and_set+20>:        std     r0,56(r31)
> 0x00000000100322f0 <GC_test_and_set+24>:        ld      r0,56(r31)
> 0x00000000100322f4 <GC_test_and_set+28>:        ld      r9,128(r31)
> 0x00000000100322f8 <GC_test_and_set+32>:        ldarx   r11,0,r9
> 
> When USE_PTHREAD_LOCKS is defined, the gctest works.  Why do you
> think this will be better with NPTL threads?
> 
> TWISTI
> 
> 


