From patchwork Thu Apr 5 16:58:58 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 132866
From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
    catalin.marinas@arm.com, Will Deacon <will.deacon@arm.com>
Subject: [PATCH 01/10] locking/qspinlock: Don't spin on pending->locked transition in slowpath
Date: Thu, 5 Apr 2018 17:58:58 +0100
Message-Id: <1522947547-24081-2-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
References: <1522947547-24081-1-git-send-email-will.deacon@arm.com>

If a locker taking the qspinlock slowpath reads a lock value indicating
that only the pending bit is set, then it will spin whilst the
concurrent pending->locked transition takes effect.

Unfortunately, there is no guarantee that such a transition will ever be
observed, since concurrent lockers could continuously set pending and
hand over the lock amongst themselves, leading to starvation. Whilst
this would probably resolve in practice, it means that it is not
possible to prove liveness properties about the lock and that lock
acquisition time is unbounded.

Remove the pending->locked spinning from the slowpath and instead queue
explicitly if pending is set.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 kernel/locking/qspinlock.c | 10 ----------
 1 file changed, 10 deletions(-)

-- 
2.1.4

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index d880296245c5..a192af2fe378 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -306,16 +306,6 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 		return;
 
 	/*
-	 * wait for in-progress pending->locked hand-overs
-	 *
-	 * 0,1,0 -> 0,0,1
-	 */
-	if (val == _Q_PENDING_VAL) {
-		while ((val = atomic_read(&lock->val)) == _Q_PENDING_VAL)
-			cpu_relax();
-	}
-
-	/*
 	 * trylock || pending
 	 *
 	 * 0,0,0 -> 0,0,1 ; trylock
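
Since the hunk above only deletes code, the "queue explicitly" part of the
change comes from the contention check that already sits a few lines below
it in queued_spin_lock_slowpath(): once the wait loop is gone, a locker
that reads a pending-only value falls straight through to that check and
takes the queueing path. Below is a minimal userspace sketch of that flow,
assuming the surrounding slowpath matches the upstream
kernel/locking/qspinlock.c of this series; the Q_* constants,
slowpath_model() and queue_and_wait() names are illustrative stand-ins
rather than kernel symbols.

/*
 * Userspace sketch only -- NOT kernel code. It models the slowpath entry
 * after this patch using C11 atomics.
 */
#include <stdatomic.h>
#include <stdint.h>

#define Q_LOCKED_VAL	(1U << 0)	/* lock owner bit */
#define Q_PENDING_VAL	(1U << 8)	/* one waiter spinning for the lock */
#define Q_LOCKED_MASK	0xffU		/* bits that only mean "locked" */

struct qspinlock_model {
	_Atomic uint32_t val;
};

static void queue_and_wait(struct qspinlock_model *lock)
{
	(void)lock;	/* stand-in for the MCS queueing path; details omitted */
}

static void slowpath_model(struct qspinlock_model *lock, uint32_t val)
{
	/*
	 * Before the patch, a value equal to Q_PENDING_VAL was spun on here
	 * until the pending owner completed the pending->locked hand-over.
	 * With that wait removed, a pending-only value simply falls through
	 * to the contention check below.
	 */
	for (;;) {
		/* Any bit beyond the locked byte (pending or tail): queue. */
		if (val & ~Q_LOCKED_MASK) {
			queue_and_wait(lock);
			return;
		}

		/* Uncontended: take the lock if free, else advertise pending. */
		uint32_t newval = (val == 0) ? Q_LOCKED_VAL
					     : (val | Q_PENDING_VAL);

		if (atomic_compare_exchange_strong_explicit(&lock->val, &val,
							    newval,
							    memory_order_acquire,
							    memory_order_relaxed))
			break;
		/* CAS failed: val now holds the fresh value, so re-evaluate. */
	}

	/*
	 * If we became the pending waiter rather than the owner, the real
	 * slowpath now spins on the locked byte; that part is omitted here.
	 */
}

int main(void)
{
	struct qspinlock_model lock = { .val = Q_PENDING_VAL };

	/* A locker that observed only the pending bit now queues immediately. */
	slowpath_model(&lock, Q_PENDING_VAL);
	return 0;
}

In this model, a caller that saw only the pending bit set goes straight to
queue_and_wait() instead of busy-waiting for the pending->locked hand-over,
which is the bounded-acquisition behaviour the commit message argues for.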