From patchwork Thu Apr 26 10:34:22 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 134476
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
    longman@redhat.com, will.deacon@arm.com
Subject: [PATCH v3 08/14] locking/mcs: Use smp_cond_load_acquire() in mcs spin loop
Date: Thu, 26 Apr 2018 11:34:22 +0100
Message-Id: <1524738868-31318-9-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1524738868-31318-1-git-send-email-will.deacon@arm.com>
References: <1524738868-31318-1-git-send-email-will.deacon@arm.com>

From: Jason Low

For qspinlocks on ARM64, we would like to use WFE instead of purely
spinning. Qspinlocks internally have lock contenders spin on an MCS
lock.

Update arch_mcs_spin_lock_contended() such that it uses the new
smp_cond_load_acquire() so that ARM64 can also override this spin loop
with its own implementation using WFE.

On x86, this can also be cheaper than spinning on smp_load_acquire().

Signed-off-by: Jason Low
Signed-off-by: Will Deacon
---
 kernel/locking/mcs_spinlock.h | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

-- 
2.1.4

diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
index f046b7ce9dd6..5e10153b4d3c 100644
--- a/kernel/locking/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -23,13 +23,15 @@ struct mcs_spinlock {
 
 #ifndef arch_mcs_spin_lock_contended
 /*
- * Using smp_load_acquire() provides a memory barrier that ensures
- * subsequent operations happen after the lock is acquired.
+ * Using smp_cond_load_acquire() provides the acquire semantics
+ * required so that subsequent operations happen after the
+ * lock is acquired. Additionally, some architectures such as
+ * ARM64 would like to do spin-waiting instead of purely
+ * spinning, and smp_cond_load_acquire() provides that behavior.
  */
 #define arch_mcs_spin_lock_contended(l)					\
 do {									\
-	while (!(smp_load_acquire(l)))					\
-		cpu_relax();						\
+	smp_cond_load_acquire(l, VAL);					\
 } while (0)
 #endif
 
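
As an aside for readers unfamiliar with the primitive, below is a minimal
user-space C11 sketch (not kernel code) of the before/after behaviour the
patch relies on: the old contended wait polls an acquire load, while the
new one says "load with acquire semantics until the condition on VAL
holds", which an architecture override could satisfy by blocking (e.g.
arm64 WFE) rather than spinning. The names cond_load_acquire(),
wait_old_style() and unlocker() are illustrative stand-ins, not the
kernel's definitions; the real smp_cond_load_acquire() is type-generic
and lives in include/asm-generic/barrier.h.

/* Build (assumed): cc -std=gnu11 -pthread sketch.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int locked;	/* stands in for mcs_spinlock::locked */

/* Old style: poll an acquire load until it becomes non-zero. */
static int wait_old_style(atomic_int *l)
{
	int v;

	while (!(v = atomic_load_explicit(l, memory_order_acquire)))
		;	/* the kernel would call cpu_relax() here */
	return v;
}

/*
 * New style: return the loaded value once the condition on VAL holds,
 * with acquire semantics. Uses a GNU statement expression, as kernel
 * macros do; an arch override could wait for an event here instead of
 * looping.
 */
#define cond_load_acquire(l, cond)					\
({									\
	int VAL;							\
	for (;;) {							\
		VAL = atomic_load_explicit((l), memory_order_acquire);	\
		if (cond)						\
			break;						\
	}								\
	VAL;								\
})

static void *unlocker(void *arg)
{
	(void)arg;
	/* release pairs with the acquire loads in the waiters */
	atomic_store_explicit(&locked, 1, memory_order_release);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, unlocker, NULL);
	printf("old-style wait saw    %d\n", wait_old_style(&locked));
	printf("cond_load_acquire saw %d\n", cond_load_acquire(&locked, VAL));
	pthread_join(&t, NULL);
	return 0;
}

The point of the sketch is only the shape of the API: the waiter passes
the condition (here the bare VAL) into the primitive, so the architecture,
not the caller, decides how to wait.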