path: root/arch/csky/include/asm/cmpxchg.h
Age  Commit message  Author
2022-07-31  csky: cmpxchg: Coding convention for BUILD_BUG()  (Guo Ren)
Use BUILD_BUG() instead of the custom bad_xchg.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
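A minimal sketch of the convention this commit adopts: dispatch on the operand size and let an unsupported size fail at compile time via BUILD_BUG() (from <linux/build_bug.h>), rather than at link time through a deliberately undefined symbol like the old bad_xchg. Here __xchg_u32_relaxed is a hypothetical stand-in for the word-sized ldex/stex loop, not the exact csky helper.

#include <linux/build_bug.h>

#define __xchg_relaxed(new, ptr, size)				\
({								\
	__typeof__(*(ptr)) __ret;				\
	switch (size) {						\
	case 4:							\
		/* the real word-sized ldex/stex loop goes here */ \
		__ret = __xchg_u32_relaxed((new), (ptr));	\
		break;						\
	default:						\
		/* unsupported size: break the build, not the link */ \
		BUILD_BUG();					\
	}							\
	__ret;							\
})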
2022-07-31  csky: Add qspinlock support  (Guo Ren)
Enable qspinlock per the requirements spelled out in a8ad07e5240c9 ("asm-generic: qspinlock: Indicate the use of mixed-size atomics").

C-SKY only has "ldex/stex" for all atomic operations, so csky gives a strong forward-progress guarantee for "ldex/stex": once ldex has grabbed the cache line into $L1, it blocks other cores from snooping the address for several cycles. atomic_fetch_add and xchg16 have the same forward-guarantee level on C-SKY. Qspinlock has better code size and performance in the fast path.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
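For context, qspinlock's fast path relies on a 16-bit xchg on part of the lock word (xchg_tail), which is why the mixed-size guarantee above matters. Below is a hedged sketch of that operation's shape, written with portable GCC builtins rather than csky asm: the 16-bit exchange is carried out as a word-sized retry loop on the aligned containing 32-bit word, which on csky would be an ldex.w/stex.w pair. The function name is illustrative and a little-endian layout is assumed.

#include <stdint.h>

static inline uint16_t xchg16_emulated(uint16_t *p, uint16_t newval)
{
	uintptr_t addr = (uintptr_t)p;
	/* aligned 32-bit word containing the halfword */
	uint32_t *wp = (uint32_t *)(addr & ~(uintptr_t)3);
	unsigned int shift = (unsigned int)(addr & 2) * 8;	/* 0 or 16 (LE) */
	uint32_t mask = (uint32_t)0xffff << shift;
	uint32_t old = __atomic_load_n(wp, __ATOMIC_RELAXED);
	uint32_t new;

	do {
		/* splice the new halfword into the containing word */
		new = (old & ~mask) | ((uint32_t)newval << shift);
	} while (!__atomic_compare_exchange_n(wp, &old, new, true,
					      __ATOMIC_ACQ_REL,
					      __ATOMIC_RELAXED));
	return (uint16_t)((old & mask) >> shift);
}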
2022-04-25  csky: atomic: Optimize cmpxchg with acquire & release  (Guo Ren)
Optimize cmpxchg with assembly acquire/release fence instructions instead of the previous generic-based barriers. Avoid issuing a fence when cmpxchg's first load != old.

Comments by Rutland: see 8e86f0b409a4 ("arm64: atomics: fix use of acquire + release for full barrier semantics").

Comments by Boqun:
FWIW, you probably need to make sure that a barrier instruction inside an lr/sc loop is a good thing. IIUC, the execution time of a barrier instruction is determined by the status of store buffers and invalidate queues (and probably other things), so it may increase the execution time of the lr/sc loop and make it unlikely to succeed. But this really depends on how the arch executes these instructions.

Link: https://lore.kernel.org/linux-riscv/CAJF2gTSAxpAi=LbAdu7jntZRUa=-dJwL0VfmDfBV5MHB=rcZ-w@mail.gmail.com/T/#m27a0f1342995deae49ce1d0e1f2683f8a181d6c3
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
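A hedged rendering of the idea in portable C11 atomics (the actual patch is ldex/stex asm): the mismatch path returns before any fence is issued, and the swap itself carries release ordering on the store side and acquire ordering on success, instead of full barriers on both sides.

#include <stdatomic.h>

static inline int cmpxchg_sketch(atomic_int *p, int old, int new)
{
	int prev = atomic_load_explicit(p, memory_order_relaxed);

	/* first load != old: return immediately, no fence at all */
	if (prev != old)
		return prev;

	/* acq_rel on success; relaxed again if the race is lost */
	atomic_compare_exchange_strong_explicit(p, &prev, new,
						memory_order_acq_rel,
						memory_order_relaxed);
	return prev;
}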
2021-05-26  locking/atomic: csky: move to ARCH_ATOMIC  (Mark Rutland)
We'd like all architectures to convert to ARCH_ATOMIC, as once all architectures are converted it will be possible to make significant cleanups to the atomics headers, and this will make it much easier to generically enable atomic functionality (e.g. debug logic in the instrumented wrappers).

As a step towards that, this patch migrates csky to ARCH_ATOMIC. The arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code wraps these with optional instrumentation to provide the regular functions.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Guo Ren <guoren@kernel.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210525140232.53872-17-mark.rutland@arm.com
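The wrapping mentioned above follows a simple shape: the arch header defines arch_atomic_add() and friends, and the common instrumented header layers a sanitizer check on top before calling through. A simplified sketch of that wrapper (modelled on the kernel's generated atomic-instrumented.h; kernel context assumed, not a standalone program):

#include <linux/compiler.h>
#include <linux/instrumented.h>
#include <linux/types.h>

static __always_inline void
atomic_add(int i, atomic_t *v)
{
	/* tell KASAN/KCSAN about the access, then defer to the arch op */
	instrument_atomic_read_write(v, sizeof(*v));
	arch_atomic_add(i, v);
}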
2021-01-12  csky: Fixup asm/cmpxchg.h with correct ordering barrier  (Guo Ren)
Optimize the performance of cmpxchg by using more fine-grained acquire/release barriers.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Paul E. McKenney <paulmck@kernel.org>
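The usual way to get such fine-grained ordering is to implement only the _relaxed primitive in asm and derive the _acquire/_release forms by adding a single one-way fence, rather than paying for a full barrier on both sides. A sketch of that derivation follows; __smp_acquire_fence()/__smp_release_fence() are illustrative stand-ins for the arch's one-way barrier macros, not necessarily the exact names in the tree.

#define __cmpxchg_acquire_sketch(ptr, o, n)			\
({								\
	__typeof__(*(ptr)) __ret =				\
		arch_cmpxchg_relaxed((ptr), (o), (n));		\
	__smp_acquire_fence();	/* order later loads after it */ \
	__ret;							\
})

#define __cmpxchg_release_sketch(ptr, o, n)			\
({								\
	__smp_release_fence();	/* order earlier stores before it */ \
	arch_cmpxchg_relaxed((ptr), (o), (n));			\
})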
2018-10-26  csky: Atomic operations  (Guo Ren)
This patch adds atomic, cmpxchg, spinlock files.

Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Cc: Andrea Parri <andrea.parri@amarulasolutions.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Peter Zijlstra <peterz@infradead.org>