From patchwork Thu Oct 3 15:50:27 2019
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 175146
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Vincent Guittot,
 "Peter Zijlstra (Intel)", Linus Torvalds, Thomas Gleixner, Ingo Molnar,
 Sasha Levin
Subject: [PATCH 5.3 062/344] sched/fair: Fix imbalance due to CPU affinity
Date: Thu, 3 Oct 2019 17:50:27 +0200
Message-Id: <20191003154546.221364910@linuxfoundation.org>
In-Reply-To: <20191003154540.062170222@linuxfoundation.org>
References: <20191003154540.062170222@linuxfoundation.org>

From: Vincent Guittot

[ Upstream commit f6cad8df6b30a5d2bbbd2e698f74b4cafb9fb82b ]

load_balance() has a dedicated mechanism to detect when an imbalance is
due to CPU affinity and must be handled at the parent level. In this
case, the imbalance field of the parent's sched_group is set.

The description of sg_imbalanced() gives a typical example: two groups
of 4 CPUs each and 4 tasks, each task with a cpumask covering 1 CPU of
the first group and 3 CPUs of the second group. Something like:

	{ 0 1 2 3 } { 4 5 6 7 }
	        *     * * *

But load_balance() fails to fix this use case on my octo-core system
made of 2 clusters of quad cores. Although load_balance() is able to
detect that the imbalance is due to CPU affinity, it fails to fix it
because the imbalance field is cleared before the parent level gets a
chance to run. In fact, when the imbalance is detected, load_balance()
reruns without the CPU with pinned tasks. But there are no other
running tasks in the situation described above, so everything looks
balanced this time and the imbalance field is immediately cleared.

The imbalance field should not be cleared if there is no other task to
move when the imbalance is detected.
Signed-off-by: Vincent Guittot
Signed-off-by: Peter Zijlstra (Intel)
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: https://lkml.kernel.org/r/1561996022-28829-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
---
 kernel/sched/fair.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

-- 
2.20.1

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 500f5db0de0ba..105b1aead0c3a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9052,9 +9052,10 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 out_balanced:
 	/*
 	 * We reach balance although we may have faced some affinity
-	 * constraints. Clear the imbalance flag if it was set.
+	 * constraints. Clear the imbalance flag only if other tasks got
+	 * a chance to move and fix the imbalance.
 	 */
-	if (sd_parent) {
+	if (sd_parent && !(env.flags & LBF_ALL_PINNED)) {
 		int *group_imbalance = &sd_parent->groups->sgc->imbalance;

 		if (*group_imbalance)