From patchwork Wed Apr 11 18:01:07 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 133160
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
    longman@redhat.com, Will Deacon
Subject: [PATCH v2 00/13] kernel/locking: qspinlock improvements
Date: Wed, 11 Apr 2018 19:01:07 +0100
Message-Id: <1523469680-17699-1-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4

Hi all,

Here's v2 of the qspinlock patches I posted last week:
  https://lkml.org/lkml/2018/4/5/496

Changes since v1 include:

  * Use WRITE_ONCE to clear the pending bit if we set it erroneously
  * Report pending and slowpath acquisitions via the qspinlock stat
    mechanism [Waiman Long]
  * Spin for a bounded duration while lock is observed in the
    pending->locked transition
  * Use try_cmpxchg to get better codegen on x86
  * Reword comments

All comments welcome,

Will

--->8

Jason Low (1):
  locking/mcs: Use smp_cond_load_acquire() in mcs spin loop

Waiman Long (1):
  locking/qspinlock: Add stat tracking for pending vs slowpath

Will Deacon (11):
  barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed
  locking/qspinlock: Bound spinning on pending->locked transition in
    slowpath
  locking/qspinlock/x86: Increase _Q_PENDING_LOOPS upper bound
  locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath
  locking/qspinlock: Kill cmpxchg loop when claiming lock from head of
    queue
  locking/qspinlock: Use atomic_cond_read_acquire
  locking/qspinlock: Use smp_cond_load_relaxed to wait for next node
  locking/qspinlock: Merge struct __qspinlock into struct qspinlock
  locking/qspinlock: Make queued_spin_unlock use smp_store_release
  locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb()
  locking/qspinlock: Use try_cmpxchg instead of cmpxchg when locking

 arch/x86/include/asm/qspinlock.h          |  21 ++-
 arch/x86/include/asm/qspinlock_paravirt.h |   3 +-
 include/asm-generic/atomic-long.h         |   2 +
 include/asm-generic/barrier.h             |  27 +++-
 include/asm-generic/qspinlock.h           |   2 +-
 include/asm-generic/qspinlock_types.h     |  32 +++-
 include/linux/atomic.h                    |   2 +
 kernel/locking/mcs_spinlock.h             |  10 +-
 kernel/locking/qspinlock.c                | 247 ++++++++++++++----------------
 kernel/locking/qspinlock_paravirt.h       |  41 ++---
 kernel/locking/qspinlock_stat.h           |   9 +-
 11 files changed, 209 insertions(+), 187 deletions(-)

-- 
2.1.4