From patchwork Tue Dec 18 22:10:46 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 154217
From: Sebastian Andrzej Siewior
To: stable@vger.kernel.org
Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner,
    bigeasy@linutronix.de, Waiman Long, Linus Torvalds, boqun.feng@gmail.com,
    linux-arm-kernel@lists.infradead.org, paulmck@linux.vnet.ibm.com,
    Ingo Molnar
Subject: [PATCH STABLE v4.9 07/10] locking/qspinlock: Kill cmpxchg() loop when claiming lock from head of queue
Date: Tue, 18 Dec 2018 23:10:46 +0100
Message-Id: <20181218221049.6816-8-bigeasy@linutronix.de>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20181218221049.6816-1-bigeasy@linutronix.de>
References: <20181218221049.6816-1-bigeasy@linutronix.de>
X-Mailing-List: stable@vger.kernel.org

From: Will Deacon

commit c61da58d8a9ba9238250a548f00826eaf44af0f7 upstream.

When a queued locker reaches the head of the queue, it claims the lock
by setting _Q_LOCKED_VAL in the lockword. If there isn't contention, it
must also clear the tail as part of this operation so that subsequent
lockers can avoid taking the slowpath altogether.

Currently this is expressed as a cmpxchg() loop that practically only
runs up to two iterations. This is confusing to the reader and unhelpful
to the compiler.

Rewrite the cmpxchg() loop without the loop, so that a failed cmpxchg()
implies that there is contention and we just need to write to
_Q_LOCKED_VAL without considering the rest of the lockword.

Signed-off-by: Will Deacon
Acked-by: Peter Zijlstra (Intel)
Acked-by: Waiman Long
Cc: Linus Torvalds
Cc: Thomas Gleixner
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-7-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/locking/qspinlock.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

-- 
2.20.1

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index e7ab99a1f4387..ba5dc86a4d831 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -581,24 +581,21 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * and nobody is pending, clear the tail code and grab the lock.
 	 * Otherwise, we only need to grab the lock.
 	 */
-	for (;;) {
-		/* In the PV case we might already have _Q_LOCKED_VAL set */
-		if ((val & _Q_TAIL_MASK) != tail || (val & _Q_PENDING_MASK)) {
-			set_locked(lock);
-			break;
-		}
+
+	/* In the PV case we might already have _Q_LOCKED_VAL set */
+	if ((val & _Q_TAIL_MASK) == tail) {
 		/*
 		 * The smp_cond_load_acquire() call above has provided the
-		 * necessary acquire semantics required for locking. At most
-		 * two iterations of this loop may be ran.
+		 * necessary acquire semantics required for locking.
 		 */
 		old = atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL);
 		if (old == val)
-			goto release; /* No contention */
-
-		val = old;
+			goto release; /* No contention */
 	}
 
+	/* Either somebody is queued behind us or _Q_PENDING_VAL is set */
+	set_locked(lock);
+
 	/*
 	 * contended path; wait for next if not observed yet, release.
 	 */
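
For readers following along outside the kernel tree, here is a small
standalone sketch of the shape of this rewrite. It uses C11 <stdatomic.h>
in place of the kernel's atomic API, relaxed ordering throughout, and
made-up names (toy_lock, TAIL_MASK, PENDING_MASK, LOCKED_VAL, claim_loop,
claim_once; the fetch-or stands in for set_locked()). It only illustrates
the control-flow change, it is not the qspinlock code itself.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define LOCKED_VAL   0x01u
#define PENDING_MASK 0x100u
#define TAIL_MASK    0xffff0000u

static _Atomic uint32_t toy_lock;

/* Old shape: a cmpxchg() loop that can only ever run one or two passes. */
static void claim_loop(uint32_t val, uint32_t tail)
{
	for (;;) {
		if ((val & TAIL_MASK) != tail || (val & PENDING_MASK)) {
			/* Contended: just set the locked byte. */
			atomic_fetch_or_explicit(&toy_lock, LOCKED_VAL,
						 memory_order_relaxed);
			break;
		}
		/* Uncontended: clear the tail and grab the lock in one go. */
		if (atomic_compare_exchange_strong_explicit(&toy_lock, &val,
				LOCKED_VAL, memory_order_relaxed,
				memory_order_relaxed))
			break;
		/* cmpxchg failed: val now holds the new word, so the next
		 * pass necessarily takes the contended branch. */
	}
}

/* New shape: at most one cmpxchg(), then an unconditional locked-byte set. */
static void claim_once(uint32_t val, uint32_t tail)
{
	if ((val & TAIL_MASK) == tail &&
	    atomic_compare_exchange_strong_explicit(&toy_lock, &val,
			LOCKED_VAL, memory_order_relaxed,
			memory_order_relaxed))
		return;	/* No contention: tail cleared, lock grabbed. */

	/* Either somebody is queued behind us or the pending bit is set. */
	atomic_fetch_or_explicit(&toy_lock, LOCKED_VAL, memory_order_relaxed);
}

int main(void)
{
	uint32_t tail = 0x00010000u;	/* pretend our node encodes to this tail */

	atomic_store(&toy_lock, tail);	/* we are the only queued waiter */
	claim_once(atomic_load(&toy_lock), tail);
	printf("uncontended: lockword = %#x\n", (unsigned)atomic_load(&toy_lock));

	atomic_store(&toy_lock, tail | PENDING_MASK);	/* pending bit is set */
	claim_once(atomic_load(&toy_lock), tail);
	printf("contended:   lockword = %#x\n", (unsigned)atomic_load(&toy_lock));
	return 0;
}

The point of the patch shows up in claim_once(): once the tail check or the
single cmpxchg() fails, that can only mean somebody is queued behind us or
pending is set, so the fallback is a plain store of the locked byte rather
than another trip around a loop.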