From patchwork Thu Apr 5 16:58:57 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 132867
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
    catalin.marinas@arm.com, Will Deacon
Subject: [PATCH 00/10] kernel/locking: qspinlock improvements
Date: Thu, 5 Apr 2018 17:58:57 +0100
Message-Id: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi all,

I've been kicking the tyres further on qspinlock and with this set of
patches I'm happy with
the performance and fairness properties. In particular, the locking
algorithm now guarantees forward progress, whereas the implementation in
mainline can starve threads indefinitely in cmpxchg loops. Catalin has
also implemented a model of this using TLA to prove that the lock is
fair, although this doesn't take the memory model into account:

  https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/commit/

I'd still like to get more benchmark numbers and wider exposure before
enabling this for arm64, but my current testing is looking very
promising. This series, along with the arm64-specific patches, is
available at:

  https://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git/log/?h=qspinlock

Cheers,

Will

--->8

Jason Low (1):
  locking/mcs: Use smp_cond_load_acquire() in mcs spin loop

Will Deacon (9):
  locking/qspinlock: Don't spin on pending->locked transition in slowpath
  locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath
  locking/qspinlock: Kill cmpxchg loop when claiming lock from head of queue
  locking/qspinlock: Use atomic_cond_read_acquire
  barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed
  locking/qspinlock: Use smp_cond_load_relaxed to wait for next node
  locking/qspinlock: Merge struct __qspinlock into struct qspinlock
  locking/qspinlock: Make queued_spin_unlock use smp_store_release
  locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb()

 arch/x86/include/asm/qspinlock.h          |  19 ++-
 arch/x86/include/asm/qspinlock_paravirt.h |   3 +-
 include/asm-generic/barrier.h             |  27 ++++-
 include/asm-generic/qspinlock.h           |   2 +-
 include/asm-generic/qspinlock_types.h     |  32 ++++-
 include/linux/atomic.h                    |   2 +
 kernel/locking/mcs_spinlock.h             |  10 +-
 kernel/locking/qspinlock.c                | 191 ++++++++++--------------------
 kernel/locking/qspinlock_paravirt.h       |  34 ++----
 9 files changed, 141 insertions(+), 179 deletions(-)

-- 
2.1.4