From patchwork Tue Jun 19 12:53:11 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 139117
From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, mingo@kernel.org,
	linux-arm-kernel@lists.infradead.org, yamada.masahiro@socionext.com,
	Will Deacon <will.deacon@arm.com>
Subject: [RESEND PATCH v2 6/9] asm-generic/bitops/atomic.h: Rewrite using atomic_*
Date: Tue, 19 Jun 2018 13:53:11 +0100
Message-Id: <1529412794-17720-7-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1529412794-17720-1-git-send-email-will.deacon@arm.com>
References: <1529412794-17720-1-git-send-email-will.deacon@arm.com>

The atomic bitops can actually be implemented pretty efficiently using the
atomic_* ops, rather than explicit use of spinlocks.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 include/asm-generic/bitops/atomic.h | 188 +++++-----------------------
 1 file changed, 33 insertions(+), 155 deletions(-)

-- 
2.1.4

diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
index 04deffaf5f7d..dd90c9792909 100644
--- a/include/asm-generic/bitops/atomic.h
+++ b/include/asm-generic/bitops/atomic.h
@@ -2,189 +2,67 @@
 #ifndef _ASM_GENERIC_BITOPS_ATOMIC_H_
 #define _ASM_GENERIC_BITOPS_ATOMIC_H_
 
-#include <asm/types.h>
-#include <linux/irqflags.h>
-
-#ifdef CONFIG_SMP
-#include <asm/spinlock.h>
-#include <asm/cache.h>	/* we use L1_CACHE_BYTES */
-
-/* Use an array of spinlocks for our atomic_ts.
- * Hash function to index into a different SPINLOCK.
- * Since "a" is usually an address, use one spinlock per cacheline.
- */
-# define ATOMIC_HASH_SIZE 4
-# define ATOMIC_HASH(a) (&(__atomic_hash[ (((unsigned long) a)/L1_CACHE_BYTES) & (ATOMIC_HASH_SIZE-1) ]))
-
-extern arch_spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] __lock_aligned;
-
-/* Can't use raw_spin_lock_irq because of #include problems, so
- * this is the substitute */
-#define _atomic_spin_lock_irqsave(l,f) do {	\
-	arch_spinlock_t *s = ATOMIC_HASH(l);	\
-	local_irq_save(f);			\
-	arch_spin_lock(s);			\
-} while(0)
-
-#define _atomic_spin_unlock_irqrestore(l,f) do {	\
-	arch_spinlock_t *s = ATOMIC_HASH(l);		\
-	arch_spin_unlock(s);				\
-	local_irq_restore(f);				\
-} while(0)
-
-
-#else
-# define _atomic_spin_lock_irqsave(l,f) do { local_irq_save(f); } while (0)
-# define _atomic_spin_unlock_irqrestore(l,f) do { local_irq_restore(f); } while (0)
-#endif
+#include <linux/atomic.h>
+#include <linux/compiler.h>
+#include <asm/barrier.h>
 
 /*
- * NMI events can occur at any time, including when interrupts have been
- * disabled by *_irqsave(). So you can get NMI events occurring while a
- * *_bit function is holding a spin lock. If the NMI handler also wants
- * to do bit manipulation (and they do) then you can get a deadlock
- * between the original caller of *_bit() and the NMI handler.
- *
- * by Keith Owens
+ * Implementation of atomic bitops using atomic-fetch ops.
+ * See Documentation/atomic_bitops.txt for details.
  */
 
-/**
- * set_bit - Atomically set a bit in memory
- * @nr: the bit to set
- * @addr: the address to start counting from
- *
- * This function is atomic and may not be reordered. See __set_bit()
- * if you do not require the atomic guarantees.
- *
- * Note: there are no guarantees that this function will not be reordered
- * on non x86 architectures, so if you are writing portable code,
- * make sure not to rely on its reordering guarantees.
- *
- * Note that @nr may be almost arbitrarily large; this function is not
- * restricted to acting on a single-word quantity.
- */
-static inline void set_bit(int nr, volatile unsigned long *addr)
+static inline void set_bit(unsigned int nr, volatile unsigned long *p)
 {
-	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long flags;
-
-	_atomic_spin_lock_irqsave(p, flags);
-	*p |= mask;
-	_atomic_spin_unlock_irqrestore(p, flags);
+	p += BIT_WORD(nr);
+	atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p);
 }
 
-/**
- * clear_bit - Clears a bit in memory
- * @nr: Bit to clear
- * @addr: Address to start counting from
- *
- * clear_bit() is atomic and may not be reordered. However, it does
- * not contain a memory barrier, so if it is used for locking purposes,
- * you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
- * in order to ensure changes are visible on other processors.
- */
-static inline void clear_bit(int nr, volatile unsigned long *addr)
+static inline void clear_bit(unsigned int nr, volatile unsigned long *p)
 {
-	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long flags;
-
-	_atomic_spin_lock_irqsave(p, flags);
-	*p &= ~mask;
-	_atomic_spin_unlock_irqrestore(p, flags);
+	p += BIT_WORD(nr);
+	atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p);
 }
 
-/**
- * change_bit - Toggle a bit in memory
- * @nr: Bit to change
- * @addr: Address to start counting from
- *
- * change_bit() is atomic and may not be reordered. It may be
- * reordered on other architectures than x86.
- * Note that @nr may be almost arbitrarily large; this function is not
- * restricted to acting on a single-word quantity.
- */
-static inline void change_bit(int nr, volatile unsigned long *addr)
+static inline void change_bit(unsigned int nr, volatile unsigned long *p)
 {
-	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long flags;
-
-	_atomic_spin_lock_irqsave(p, flags);
-	*p ^= mask;
-	_atomic_spin_unlock_irqrestore(p, flags);
+	p += BIT_WORD(nr);
+	atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p);
 }
 
-/**
- * test_and_set_bit - Set a bit and return its old value
- * @nr: Bit to set
- * @addr: Address to count from
- *
- * This operation is atomic and cannot be reordered.
- * It may be reordered on other architectures than x86.
- * It also implies a memory barrier.
- */
-static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
+static inline int test_and_set_bit(unsigned int nr, volatile unsigned long *p)
 {
+	long old;
 	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long old;
-	unsigned long flags;
 
-	_atomic_spin_lock_irqsave(p, flags);
-	old = *p;
-	*p = old | mask;
-	_atomic_spin_unlock_irqrestore(p, flags);
+	p += BIT_WORD(nr);
+	if (READ_ONCE(*p) & mask)
+		return 1;
 
-	return (old & mask) != 0;
+	old = atomic_long_fetch_or(mask, (atomic_long_t *)p);
+	return !!(old & mask);
 }
 
-/**
- * test_and_clear_bit - Clear a bit and return its old value
- * @nr: Bit to clear
- * @addr: Address to count from
- *
- * This operation is atomic and cannot be reordered.
- * It can be reorderdered on other architectures other than x86.
- * It also implies a memory barrier.
- */
-static inline int test_and_clear_bit(int nr, volatile unsigned long *addr)
+static inline int test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
 {
+	long old;
 	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long old;
-	unsigned long flags;
 
-	_atomic_spin_lock_irqsave(p, flags);
-	old = *p;
-	*p = old & ~mask;
-	_atomic_spin_unlock_irqrestore(p, flags);
+	p += BIT_WORD(nr);
+	if (!(READ_ONCE(*p) & mask))
+		return 0;
 
-	return (old & mask) != 0;
+	old = atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
+	return !!(old & mask);
 }
 
-/**
- * test_and_change_bit - Change a bit and return its old value
- * @nr: Bit to change
- * @addr: Address to count from
- *
- * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
- */
-static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
+static inline int test_and_change_bit(unsigned int nr, volatile unsigned long *p)
 {
+	long old;
 	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long old;
-	unsigned long flags;
-
-	_atomic_spin_lock_irqsave(p, flags);
-	old = *p;
-	*p = old ^ mask;
-	_atomic_spin_unlock_irqrestore(p, flags);
 
-	return (old & mask) != 0;
+	p += BIT_WORD(nr);
+	old = atomic_long_fetch_xor(mask, (atomic_long_t *)p);
+	return !!(old & mask);
 }
 
 #endif /* _ASM_GENERIC_BITOPS_ATOMIC_H */
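
Not part of the patch: a minimal usage sketch of the rewritten ops, for anyone
who wants to see them exercised. BIT_WORD(nr) picks the word holding bit nr and
BIT_MASK(nr) the bit within that word, so on a 64-bit machine set_bit(67, map)
ORs 1UL << 3 into map[1] via atomic_long_or(). The bitmap and helpers below
(slot_map, claim_slot, release_slot) are invented for illustration only.

#include <linux/types.h>
#include <linux/bitops.h>
#include <linux/atomic.h>

/* Illustrative 128-bit slot map: two unsigned longs on a 64-bit machine. */
static DECLARE_BITMAP(slot_map, 128);

static bool claim_slot(unsigned int slot)
{
	/* test_and_set_bit() is fully ordered, so no extra barriers needed. */
	return !test_and_set_bit(slot, slot_map);
}

static void release_slot(unsigned int slot)
{
	/*
	 * clear_bit() is atomic but unordered; as the removed kerneldoc above
	 * notes, pair it with smp_mb__before_atomic() when it is used to
	 * release a lock-like flag.
	 */
	smp_mb__before_atomic();
	clear_bit(slot, slot_map);
}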