From patchwork Thu Apr 26 10:34:24 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 134474
From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org, mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com, longman@redhat.com, will.deacon@arm.com
Subject: [PATCH v3 10/14] locking/qspinlock: Make queued_spin_unlock use smp_store_release
Date: Thu, 26 Apr 2018 11:34:24 +0100
Message-Id: <1524738868-31318-11-git-send-email-will.deacon@arm.com>
In-Reply-To: <1524738868-31318-1-git-send-email-will.deacon@arm.com>
References: <1524738868-31318-1-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
A qspinlock can be unlocked simply by writing zero to the locked byte.
This can be implemented in the generic code, so do that and remove the
arch-specific override for x86 in the !PV case.

Cc: Peter Zijlstra
Cc: Ingo Molnar
Signed-off-by: Will Deacon
---
 arch/x86/include/asm/qspinlock.h | 17 ++++++-----------
 include/asm-generic/qspinlock.h  |  2 +-
 2 files changed, 7 insertions(+), 12 deletions(-)

-- 
2.1.4

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index da1370ad206d..3e70bed8a978 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -9,6 +9,12 @@
 
 #define _Q_PENDING_LOOPS	(1 << 9)
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __pv_init_lock_hash(void);
+extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
+
 #define	queued_spin_unlock queued_spin_unlock
 /**
  * queued_spin_unlock - release a queued spinlock
@@ -21,12 +27,6 @@ static inline void native_queued_spin_unlock(struct qspinlock *lock)
 	smp_store_release(&lock->locked, 0);
 }
 
-#ifdef CONFIG_PARAVIRT_SPINLOCKS
-extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
-extern void __pv_init_lock_hash(void);
-extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
-extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
-
 static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 {
 	pv_queued_spin_lock_slowpath(lock, val);
@@ -42,11 +42,6 @@ static inline bool vcpu_is_preempted(long cpu)
 {
 	return pv_vcpu_is_preempted(cpu);
 }
-#else
-static inline void queued_spin_unlock(struct qspinlock *lock)
-{
-	native_queued_spin_unlock(lock);
-}
 #endif
 
 #ifdef CONFIG_PARAVIRT
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index b37b4ad7eb94..a8ed0a352d75 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -100,7 +100,7 @@ static __always_inline void queued_spin_unlock(struct qspinlock *lock)
 	/*
 	 * unlock() needs release semantics:
 	 */
-	(void)atomic_sub_return_release(_Q_LOCKED_VAL, &lock->val);
+	smp_store_release(&lock->locked, 0);
 }
 
 #endif