From patchwork Wed Apr 11 18:01:14 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 133164
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
    longman@redhat.com, Jason Low, Will Deacon
Subject: [PATCH v2 07/13] locking/mcs: Use smp_cond_load_acquire() in mcs spin loop
Date: Wed, 11 Apr 2018 19:01:14 +0100
Message-Id: <1523469680-17699-8-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1523469680-17699-1-git-send-email-will.deacon@arm.com>
References: <1523469680-17699-1-git-send-email-will.deacon@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Jason Low

For qspinlocks on ARM64, we would like to use WFE instead of purely
spinning. Qspinlocks internally have lock contenders spin on an MCS
lock.

Update arch_mcs_spin_lock_contended() to use the new
smp_cond_load_acquire() so that ARM64 can override this spin loop
with its own WFE-based implementation. On x86, this can also be
cheaper than spinning on smp_load_acquire().

Signed-off-by: Jason Low
Signed-off-by: Will Deacon
---
 kernel/locking/mcs_spinlock.h | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

-- 
2.1.4

diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
index f046b7ce9dd6..5e10153b4d3c 100644
--- a/kernel/locking/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -23,13 +23,15 @@ struct mcs_spinlock {
 #ifndef arch_mcs_spin_lock_contended
 /*
- * Using smp_load_acquire() provides a memory barrier that ensures
- * subsequent operations happen after the lock is acquired.
+ * Using smp_cond_load_acquire() provides the acquire semantics
+ * required so that subsequent operations happen after the
+ * lock is acquired. Additionally, some architectures such as
+ * ARM64 would like to do spin-waiting instead of purely
+ * spinning, and smp_cond_load_acquire() provides that behavior.
  */
 #define arch_mcs_spin_lock_contended(l)					\
 do {									\
-	while (!(smp_load_acquire(l)))					\
-		cpu_relax();						\
+	smp_cond_load_acquire(l, VAL);					\
 } while (0)
 #endif
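
For anyone not familiar with the primitive being used here:
smp_cond_load_acquire(ptr, cond_expr) spins until cond_expr, evaluated
against the freshly loaded value VAL, becomes true, and then provides
ACQUIRE ordering. In the hunk above the condition is simply VAL, i.e.
wait until the MCS node's locked field becomes non-zero. As a rough,
illustrative sketch (paraphrased from the generic fallback in
include/asm-generic/barrier.h, not quoted verbatim from the tree), it
behaves along these lines:

#ifndef smp_cond_load_acquire
#define smp_cond_load_acquire(ptr, cond_expr) ({			\
	typeof(ptr) __PTR = (ptr);					\
	typeof(*ptr) VAL;						\
	/* Spin until cond_expr (which may refer to VAL) is true. */	\
	for (;;) {							\
		VAL = READ_ONCE(*__PTR);				\
		if (cond_expr)						\
			break;						\
		cpu_relax();						\
	}								\
	/* Upgrade the control dependency to ACQUIRE ordering. */	\
	smp_acquire__after_ctrl_dep();					\
	VAL;								\
})
#endif

Roughly speaking, arm64 overrides this so that the retry path waits on
a WFE armed by an exclusive load of *ptr instead of calling
cpu_relax(), which is what makes routing the MCS contended path through
smp_cond_load_acquire() attractive there.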