From patchwork Thu Oct  3 15:50:32 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 175178
Delivered-To: patch@linaro.org
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Vincent Guittot,
    "Peter Zijlstra (Intel)", Linus Torvalds, Thomas Gleixner,
    Ingo Molnar, Sasha Levin
Subject: [PATCH 5.2 054/313] sched/fair: Fix imbalance due to CPU affinity
Date: Thu, 3 Oct 2019 17:50:32 +0200
Message-Id: <20191003154538.389857532@linuxfoundation.org>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20191003154533.590915454@linuxfoundation.org>
References: <20191003154533.590915454@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org

From: Vincent Guittot

[ Upstream commit f6cad8df6b30a5d2bbbd2e698f74b4cafb9fb82b ]

load_balance() has a dedicated mechanism to detect when an imbalance is
due to CPU affinity and must be handled at the parent level. In that
case, the imbalance field of the parent's sched_group is set.

The description of sg_imbalanced() gives a typical example: two groups
of 4 CPUs each, and 4 tasks with a cpumask covering 1 CPU of the first
group and 3 CPUs of the second group. Something like:

	{ 0 1 2 3 } { 4 5 6 7 }
	        *     * * *

But load_balance() fails to fix this use case on my octo-core system
made of 2 clusters of quad cores. While load_balance() is able to
detect that the imbalance is due to CPU affinity, it fails to fix it
because the imbalance field is cleared before the parent level gets a
chance to run. In fact, when the imbalance is detected, load_balance()
reruns without the CPU that has the pinned tasks. But in the situation
described above there are no other running tasks, so everything looks
balanced this time and the imbalance field is immediately cleared.

The imbalance field should not be cleared if there is no other task to
move when the imbalance is detected.
Signed-off-by: Vincent Guittot
Signed-off-by: Peter Zijlstra (Intel)
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: https://lkml.kernel.org/r/1561996022-28829-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
---
 kernel/sched/fair.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

-- 
2.20.1

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b07672e793a81..f72bf8122fe4e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9319,9 +9319,10 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 out_balanced:
 	/*
 	 * We reach balance although we may have faced some affinity
-	 * constraints. Clear the imbalance flag if it was set.
+	 * constraints. Clear the imbalance flag only if other tasks got
+	 * a chance to move and fix the imbalance.
 	 */
-	if (sd_parent) {
+	if (sd_parent && !(env.flags & LBF_ALL_PINNED)) {
 		int *group_imbalance = &sd_parent->groups->sgc->imbalance;

 		if (*group_imbalance)