From patchwork Thu Apr 26 10:34:27 2018
X-Patchwork-Submitter: Will Deacon <will.deacon@arm.com>
X-Patchwork-Id: 134470
From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
	mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
	longman@redhat.com, will.deacon@arm.com
Subject: [PATCH v3 13/14] locking/qspinlock: Add stat tracking for pending vs slowpath
Date: Thu, 26 Apr 2018 11:34:27 +0100
Message-Id: <1524738868-31318-14-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1524738868-31318-1-git-send-email-will.deacon@arm.com>
References: <1524738868-31318-1-git-send-email-will.deacon@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Waiman Long <longman@redhat.com>

Currently, the qspinlock_stat code tracks only statistical counts in the
PV qspinlock code. However, it may also be useful to track the number of
locking operations done via the pending code vs. the MCS lock queue
slowpath for the non-PV case. The qspinlock stat code is modified to do
that.

The stat counter pv_lock_slowpath is renamed to lock_slowpath so that it
can be used by both the PV and non-PV cases.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/qspinlock.c          | 14 +++++++++++---
 kernel/locking/qspinlock_paravirt.h |  7 +------
 kernel/locking/qspinlock_stat.h     |  9 ++++++---
 3 files changed, 18 insertions(+), 12 deletions(-)

-- 
2.1.4
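
For context: qstat_inc() amounts to little more than a conditional per-CPU
counter increment, roughly along the lines of the sketch below (a simplified
sketch, not the exact code in qspinlock_stat.h, which also provides the
latency counters, the debugfs plumbing and the !CONFIG_QUEUED_LOCK_STAT
stubs):

	/* Per-CPU event counters, one slot per qlock_stats entry. */
	static DEFINE_PER_CPU(unsigned long, qstats[qstat_num]);

	/* Increment the given counter on the local CPU when cond is true. */
	static inline void qstat_inc(enum qlock_stats stat, bool cond)
	{
		if (cond)
			this_cpu_inc(qstats[stat]);
	}

With that, the two new call sites stay cheap: qstat_inc(qstat_lock_pending,
true) counts acquisitions that completed via the pending bit, while
qstat_inc(qstat_lock_slowpath, true) counts those that fell through to the
MCS queue; the per-CPU values are summed when the corresponding stat file
is read.
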
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 1e3ddc42135e..b39f341d831b 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -12,11 +12,11 @@
  * GNU General Public License for more details.
  *
  * (C) Copyright 2013-2015 Hewlett-Packard Development Company, L.P.
- * (C) Copyright 2013-2014 Red Hat, Inc.
+ * (C) Copyright 2013-2014,2018 Red Hat, Inc.
  * (C) Copyright 2015 Intel Corp.
  * (C) Copyright 2015 Hewlett-Packard Enterprise Development LP
  *
- * Authors: Waiman Long <waiman.long@hpe.com>
+ * Authors: Waiman Long <longman@redhat.com>
  *          Peter Zijlstra <peterz@infradead.org>
  */
 
@@ -33,6 +33,11 @@
 #include <asm/qspinlock.h>
 
 /*
+ * Include queued spinlock statistics code
+ */
+#include "qspinlock_stat.h"
+
+/*
  * The basic principle of a queue-based spinlock can best be understood
  * by studying a classic queue-based spinlock implementation called the
  * MCS lock. The paper below provides a good description for this kind
@@ -295,7 +300,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
 
 	if (pv_enabled())
-		goto queue;
+		goto pv_queue;
 
 	if (virt_spin_lock(lock))
 		return;
@@ -348,6 +353,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * *,1,0 -> *,0,1
 	 */
 	clear_pending_set_locked(lock);
+	qstat_inc(qstat_lock_pending, true);
 	return;
 }
 
@@ -363,6 +369,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * queuing.
 	 */
 queue:
+	qstat_inc(qstat_lock_slowpath, true);
+pv_queue:
 	node = this_cpu_ptr(&mcs_nodes[0]);
 	idx = node->count++;
 	tail = encode_tail(smp_processor_id(), idx);
diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index 2dbad2f25480..25730b2ac022 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -56,11 +56,6 @@ struct pv_node {
 };
 
 /*
- * Include queued spinlock statistics code
- */
-#include "qspinlock_stat.h"
-
-/*
  * Hybrid PV queued/unfair lock
  *
  * By replacing the regular queued_spin_trylock() with the function below,
@@ -428,7 +423,7 @@ pv_wait_head_or_lock(struct qspinlock *lock, struct mcs_spinlock *node)
 	/*
 	 * Tracking # of slowpath locking operations
 	 */
-	qstat_inc(qstat_pv_lock_slowpath, true);
+	qstat_inc(qstat_lock_slowpath, true);
 
 	for (;; waitcnt++) {
 		/*
diff --git a/kernel/locking/qspinlock_stat.h b/kernel/locking/qspinlock_stat.h
index 4a30ef63c607..6bd78c0740fc 100644
--- a/kernel/locking/qspinlock_stat.h
+++ b/kernel/locking/qspinlock_stat.h
@@ -22,13 +22,14 @@
  *   pv_kick_wake	- # of vCPU kicks used for computing pv_latency_wake
  *   pv_latency_kick	- average latency (ns) of vCPU kick operation
  *   pv_latency_wake	- average latency (ns) from vCPU kick to wakeup
- *   pv_lock_slowpath	- # of locking operations via the slowpath
  *   pv_lock_stealing	- # of lock stealing operations
  *   pv_spurious_wakeup	- # of spurious wakeups in non-head vCPUs
  *   pv_wait_again	- # of wait's after a queue head vCPU kick
  *   pv_wait_early	- # of early vCPU wait's
  *   pv_wait_head	- # of vCPU wait's at the queue head
  *   pv_wait_node	- # of vCPU wait's at a non-head queue node
+ *   lock_pending	- # of locking operations via pending code
+ *   lock_slowpath	- # of locking operations via MCS lock queue
  *
  * Writing to the "reset_counters" file will reset all the above counter
  * values.
@@ -46,13 +47,14 @@ enum qlock_stats {
 	qstat_pv_kick_wake,
 	qstat_pv_latency_kick,
 	qstat_pv_latency_wake,
-	qstat_pv_lock_slowpath,
 	qstat_pv_lock_stealing,
 	qstat_pv_spurious_wakeup,
 	qstat_pv_wait_again,
 	qstat_pv_wait_early,
 	qstat_pv_wait_head,
 	qstat_pv_wait_node,
+	qstat_lock_pending,
+	qstat_lock_slowpath,
 	qstat_num,	/* Total number of statistical counters */
 	qstat_reset_cnts = qstat_num,
 };
@@ -73,12 +75,13 @@ static const char * const qstat_names[qstat_num + 1] = {
 	[qstat_pv_spurious_wakeup] = "pv_spurious_wakeup",
 	[qstat_pv_latency_kick]	   = "pv_latency_kick",
 	[qstat_pv_latency_wake]    = "pv_latency_wake",
-	[qstat_pv_lock_slowpath]   = "pv_lock_slowpath",
 	[qstat_pv_lock_stealing]   = "pv_lock_stealing",
 	[qstat_pv_wait_again]      = "pv_wait_again",
 	[qstat_pv_wait_early]      = "pv_wait_early",
 	[qstat_pv_wait_head]       = "pv_wait_head",
 	[qstat_pv_wait_node]       = "pv_wait_node",
+	[qstat_lock_pending]       = "lock_pending",
+	[qstat_lock_slowpath]      = "lock_slowpath",
 	[qstat_reset_cnts]         = "reset_counters",
 };
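
As a quick way to compare the two paths after a workload, the new counters
can be read back from debugfs. A minimal user-space sketch (assumes
CONFIG_QUEUED_LOCK_STAT=y and debugfs mounted at /sys/kernel/debug; the
"qlockstat" directory name follows the one created in qspinlock_stat.h,
and the little program itself is only illustrative):

	#include <stdio.h>

	int main(void)
	{
		/* The two counters added by this patch. */
		static const char * const files[] = {
			"/sys/kernel/debug/qlockstat/lock_pending",
			"/sys/kernel/debug/qlockstat/lock_slowpath",
		};
		unsigned long long val;
		int i;

		for (i = 0; i < 2; i++) {
			FILE *f = fopen(files[i], "r");

			if (!f) {
				perror(files[i]);
				continue;
			}
			if (fscanf(f, "%llu", &val) == 1)
				printf("%s: %llu\n", files[i], val);
			fclose(f);
		}
		return 0;
	}

Writing to the reset_counters file in the same directory clears all the
counters between runs, as described in the comment block updated above.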