[Gc] Re: Re[8]: libatomic_ops aarch64 support

Yvan Roux yvan.roux at linaro.org
Tue Mar 12 04:04:40 PST 2013


Hi Ivan,

Retested and everything is OK. Thanks a lot for your help.

Cheers,
Yvan


On 9 March 2013 14:37, Ivan Maidanski <ivmai at mail.ru> wrote:
> Hi Yvan,
>
> I've applied your patch (+ comments). Please retest - if everything is ok
> then I think the branch could be merged into master.
>
> Regards,
> Ivan
>
> Wed, 6 Mar 2013, 3:33 +01:00 from Yvan Roux <yvan.roux at linaro.org>:
>
> Hi Ivan,
>
>> I've applied your patch (to the add-aarch64-support branch) plus my
>> minor changes. Please retest.
>
> Cool! Thanks.
>
> The testsuite is OK; now the only missing functions are:
>
> Missing: AO_compare_and_swap_double
> Missing: AO_compare_and_swap_double_acquire
> Missing: AO_compare_and_swap_double_release
> Missing: AO_compare_and_swap_double_read
> Missing: AO_compare_and_swap_double_write
> Missing: AO_compare_and_swap_double_full
> Missing: AO_compare_and_swap_double_release_write
> Missing: AO_compare_and_swap_double_acquire_read
>
> But the comment leader has to be reverted to '//' ('@' is no longer
> supported by A64 assemblers).
>
>
>> 2 questions:
>> * Could we discard stxp in double_load like for 32-bit ARM?
>
> Unfortunately no: ldxp is not single-copy atomic.
>
>> * The clobber lists of all asm statements are empty, is it ok?
>
> Compared to the arm.h implementation, 'cc' is no longer clobbered
> because A64 doesn't include the concept of conditional execution, so
> the number of instructions that clobber the condition codes is much
> reduced. But indeed, some of the asm statements would need to clobber
> memory, as the compiler is not otherwise aware that the register
> holds a memory location; with the "Q" constraint, though, it should
> be fine without clobber information. I attached a patch which does
> this.
>
> Cheers,
> Yvan
>
>> Thank you.
>>
>> Regards,
>> Ivan
>>
>> Thursday, 28 February 2013, 22:57 +01:00 from Yvan Roux
>> <yvan.roux at linaro.org>:
>>
>> Hi Ivan,
>>
>> I finally fixed the double_[load|store|compare_and_swap] AArch64
>> support. I defined double_ptr_storage as an unsigned __int128 and
>> used the load and store exclusive pair instructions of the A64 ISA.
>> The testsuite is now fine (notice that the failures with stack and
>> malloc were due to the previous compare_and_swap implementation). I
>> kept the generic implementations guarded by an ifndef, but maybe
>> they could be moved into something like a double_generic.h.
>>
>> Cheers,
>> Yvan
>>
>> On 15 February 2013 15:51, Yvan Roux <yvan.roux at linaro.org> wrote:
>>> Ivan,
>>>
>>> the native build is fine after the small fix below to your last
>>> commit (the issue I encountered was on my side); we just have to
>>> fix the correctness ;)
>>>
>>> return (int)__atomic_compare_exchange_n(&addr->AO_whole,
>>> - &old_val->AO_whole /* p_expected */,
>>> + &old_val.AO_whole /* p_expected */,
>>>
>>> Yvan
>>
>>
>
> _______________________________________________
> Gc mailing list
> Gc at linux.hpl.hp.com
> http://www.hpl.hp.com/hosted/linux/mail-archives/gc/
>
>


