[Gc] Re[6]: libatomic_ops aarch64 support

Ivan Maidanski ivmai at mail.ru
Mon Mar 4 13:12:17 PST 2013

 Hi Yvan,

I've applied your patch (to the add-aarch64-support branch) plus my minor changes. Please retest.
Two questions:
* Could we discard stxp in double_load, as we do for 32-bit ARM?
* The clobber lists of all the asm statements are empty; is that OK?

Thank you.


Thursday, 28 February 2013, 22:57 +01:00, from Yvan Roux <yvan.roux at linaro.org>:
>Hi Ivan,
>I finally fixed the double_[load|store|compare_and_swap] AArch64
>support. I defined double_ptr_storage as an unsigned __int128 and used
>the load and store exclusive pair (LDXP/STXP) instructions of the A64
>ISA. The testsuite is now fine (note that the failures with stack
>and malloc were due to the previous compare_and_swap implementation). I
>kept the generic implementations guarded by an ifndef, but maybe they
>could be put in something like a double_generic.h.
>On 15 February 2013 15:51, Yvan Roux <yvan.roux at linaro.org> wrote:
>> Ivan,
>> the native build is fine after the small fix below in your last commit
>> (the issue I encountered was on my side), we just have to fix the
>> correctness ;)
>>      return (int)__atomic_compare_exchange_n(&addr->AO_whole,
>> -                                &old_val->AO_whole /* p_expected */,
>> +                                &old_val.AO_whole /* p_expected */,
>> Yvan
