From patchwork Wed Apr 11 18:01:17 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 133165
Delivered-To: patch@linaro.org
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
    longman@redhat.com, Will Deacon
Subject: [PATCH v2 10/13] locking/qspinlock: Make queued_spin_unlock use smp_store_release
Date: Wed, 11 Apr 2018 19:01:17 +0100
Message-Id: <1523469680-17699-11-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1523469680-17699-1-git-send-email-will.deacon@arm.com>
References: <1523469680-17699-1-git-send-email-will.deacon@arm.com>
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

A qspinlock can be unlocked simply by writing zero to the locked byte.
This can be implemented in the generic code, so do that and remove the
arch-specific override for x86 in the !PV case.

Cc: Peter Zijlstra
Cc: Ingo Molnar
Signed-off-by: Will Deacon
---
 arch/x86/include/asm/qspinlock.h | 17 ++++++-----------
 include/asm-generic/qspinlock.h  |  2 +-
 2 files changed, 7 insertions(+), 12 deletions(-)

-- 
2.1.4

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index da1370ad206d..3e70bed8a978 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -9,6 +9,12 @@
 
 #define _Q_PENDING_LOOPS	(1 << 9)
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __pv_init_lock_hash(void);
+extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
+
 #define	queued_spin_unlock queued_spin_unlock
 /**
  * queued_spin_unlock - release a queued spinlock
@@ -21,12 +27,6 @@ static inline void native_queued_spin_unlock(struct qspinlock *lock)
 	smp_store_release(&lock->locked, 0);
 }
 
-#ifdef CONFIG_PARAVIRT_SPINLOCKS
-extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
-extern void __pv_init_lock_hash(void);
-extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
-extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
-
 static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 {
 	pv_queued_spin_lock_slowpath(lock, val);
@@ -42,11 +42,6 @@ static inline bool vcpu_is_preempted(long cpu)
 {
 	return pv_vcpu_is_preempted(cpu);
 }
-#else
-static inline void queued_spin_unlock(struct qspinlock *lock)
-{
-	native_queued_spin_unlock(lock);
-}
 #endif
 
 #ifdef CONFIG_PARAVIRT
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index b37b4ad7eb94..a8ed0a352d75 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -100,7 +100,7 @@ static __always_inline void queued_spin_unlock(struct qspinlock *lock)
 	/*
 	 * unlock() needs release semantics:
 	 */
-	(void)atomic_sub_return_release(_Q_LOCKED_VAL, &lock->val);
+	smp_store_release(&lock->locked, 0);
 }
 
 #endif