[Gc] [libatomic_ops] bug with gcc/x86_64/CAS
aph at redhat.com
Wed Feb 17 09:33:37 PST 2010
On 02/17/2010 04:45 PM, Ivan Maidanski wrote:
> Andrew Haley <aph at redhat.com> wrote:
>> On 02/17/2010 12:17 PM, Patrick MARLIER wrote:
>>> I think I found a bug in libatomic_ops, in the AO_compare_and_swap_full
>>> function, for gcc on x86_64 CPUs.
>>> **** Possible FIX 2: set RAX as earlyclobbered output ****
>>> AO_INLINE int
>>> AO_compare_and_swap_full(volatile AO_t *addr,
>>>                          AO_t old, AO_t new_val)
>>> {
>>>   char result;
>>>   __asm__ __volatile__("lock; cmpxchgq %4, %0; setz %1"
>>>                        : "=m"(*addr), "=q"(result), "=&a"(old)
>>>                        : "m"(*addr), "r"(new_val), "0"(old)
>>>                        : "memory");
>>>   return (int) result;
>>> }
>> I think this asm is best, but it's pretty questionable to use an asm
>> at all, given that this is a built-in gcc function.
> Andrew -
> 1. could you explain why fix 1 is not so good as fix 2;
There is no strong technical reason.
> 2. thanks for noting that __sync_... built-in funcs exist in GCC but:
> - from which GCC version they are supported?;
I can't remember, but they have been supported on Intel for a very long
time. If you really need to know, I can do some archaeology.
> - they are supported for all targets (where applicable) or for Intel only?;
They work on other targets too, but I know for certain that they work on Intel.
> - it would be good if someone send me a draft (at least) code using them;
> - as only bug fixes are accepted for gc72, the changes would go to
> the next major release (unless Hans says the opposite).
Sure, but I would suggest that a correct bug fix is

AO_INLINE int
AO_compare_and_swap_full(volatile AO_t *addr,
                         AO_t old, AO_t new_val)
{
  return __sync_bool_compare_and_swap (addr, old, new_val);
}

unless there is a problem with old gcc, in which case don't do this.
> 2. not clear to me what you mean by "but it's pretty questionable
> to use an asm at all, given that this is a built-in gcc function" -
> you mean CAS