From patchwork Fri Dec 6 14:35:04 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 180923
From: John Garry
To:
CC: , , , , , , , , , , , John Garry
Subject: [PATCH RFC 1/1] genirq: Make threaded handler use irq affinity for managed interrupt
Date: Fri, 6 Dec 2019 22:35:04 +0800
Message-ID: <1575642904-58295-2-git-send-email-john.garry@huawei.com>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1575642904-58295-1-git-send-email-john.garry@huawei.com>
References: <1575642904-58295-1-git-send-email-john.garry@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Currently the cpu allowed mask for the threaded part of a threaded irq
handler is set to the effective affinity of the hard irq. Typically the
effective affinity of the hard irq covers a single cpu, so the threaded
handler always runs on the same cpu as the hard irq.

We have seen scenarios in high data-rate throughput testing where the cpu
handling the interrupt becomes totally saturated servicing both the hard
interrupt and the threaded handler parts, limiting throughput.

When the interrupt is managed, allow the threaded part to run on all cpus
in the irq affinity mask.

Signed-off-by: John Garry
---
 kernel/irq/manage.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

-- 
2.17.1

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 1753486b440c..8e7f8e758a88 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -968,7 +968,11 @@ irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
 	if (cpumask_available(desc->irq_common_data.affinity)) {
 		const struct cpumask *m;
 
-		m = irq_data_get_effective_affinity_mask(&desc->irq_data);
+		if (irqd_affinity_is_managed(&desc->irq_data))
+			m = desc->irq_common_data.affinity;
+		else
+			m = irq_data_get_effective_affinity_mask(
+					&desc->irq_data);
 		cpumask_copy(mask, m);
 	} else {
 		valid = false;
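
For illustration, below is a minimal standalone userspace sketch (not kernel
code and not part of the patch) of the mask selection the hunk above
introduces: a managed interrupt lets the threaded handler use the full irq
affinity mask, while an unmanaged one keeps it on the effective affinity,
typically a single cpu. The fake_irq_desc structure and thread_affinity()
helper are made-up stand-ins for irq_desc, irq_common_data.affinity, the
effective affinity mask and irqd_affinity_is_managed().

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long cpumask_t;	/* toy mask: one bit per cpu, cpus 0..63 */

struct fake_irq_desc {
	cpumask_t affinity;		/* irq affinity mask */
	cpumask_t effective_affinity;	/* effective affinity of the hard irq */
	bool managed;			/* stand-in for irqd_affinity_is_managed() */
};

/* Pick the cpu allowed mask for the threaded handler. */
static cpumask_t thread_affinity(const struct fake_irq_desc *desc)
{
	return desc->managed ? desc->affinity : desc->effective_affinity;
}

int main(void)
{
	struct fake_irq_desc d = {
		.affinity = 0xffUL,		/* cpus 0-7 */
		.effective_affinity = 0x1UL,	/* cpu 0 only */
		.managed = true,
	};

	/* managed: threaded handler may run on any cpu in the affinity mask */
	printf("managed:   thread mask %#lx\n", thread_affinity(&d));

	d.managed = false;
	/* unmanaged: threaded handler stays on the hard irq's effective cpu */
	printf("unmanaged: thread mask %#lx\n", thread_affinity(&d));
	return 0;
}

Built with a plain C compiler this prints 0xff for the managed case and 0x1
otherwise, mirroring the before/after behaviour of the patch for a managed
interrupt.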