From patchwork Wed Mar 18 06:31:02 2015
X-Patchwork-Submitter: Xunlei Pang
X-Patchwork-Id: 45920
From: Xunlei Pang
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Xunlei Pang
Subject: [PATCH] sched/fair: Restore env status before goto redo in load_balance()
Date: Wed, 18 Mar 2015 14:31:02 +0800
Message-Id: <1426660262-27526-1-git-send-email-xlpang@126.com>
X-Mailer: git-send-email 1.9.1

In load_balance(), some members of lb_env are assigned new values in the
LBF_DST_PINNED case. However, lb_env::flags may still retain
LBF_ALL_PINNED if no suitable task is found afterwards, for instance
because of another balance operation or a task affinity change; this can
really happen because the busiest rq lock has already been released by
then. This is wrong: with env.dst_cpu still set to new_dst_cpu when
jumping back to the "redo" label, should_we_balance() may return false,
which is unreasonable.

This patch restores the proper status of env before "goto redo", and
improves the "out_all_pinned" and "out_one_pinned" labels.
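To make the failure mode easier to see, below is a minimal, self-contained
user-space model of the control flow described above. It is not kernel
code: struct lb_env, the LBF_* values and should_we_balance() here are
simplified stand-ins that only mimic how a stale env.dst_cpu makes the
check at "redo" fail.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative flag values; the kernel defines its own LBF_* constants. */
#define LBF_ALL_PINNED	0x01
#define LBF_DST_PINNED	0x04

/* Minimal stand-in for the kernel's struct lb_env. */
struct lb_env {
	int		dst_cpu;
	int		new_dst_cpu;
	unsigned int	flags;
};

/* Simplified: balance only if we are still the designated destination CPU. */
static bool should_we_balance(struct lb_env *env, int this_cpu)
{
	return env->dst_cpu == this_cpu;
}

int main(void)
{
	int this_cpu = 0;
	struct lb_env env = {
		.dst_cpu	= this_cpu,
		.new_dst_cpu	= 2,
		.flags		= 0,
	};

	/*
	 * First pass: some tasks could not move to dst_cpu (LBF_DST_PINNED),
	 * so load_balance() retargets env at new_dst_cpu and keeps going.
	 */
	env.flags |= LBF_DST_PINNED;
	env.dst_cpu = env.new_dst_cpu;
	env.flags &= ~LBF_DST_PINNED;

	/*
	 * Later every remaining task turns out to be pinned (LBF_ALL_PINNED),
	 * and the old code jumps back to "redo" without restoring env.dst_cpu
	 * to this_cpu ...
	 */
	env.flags |= LBF_ALL_PINNED;

	/* ... so the should_we_balance() check at "redo" now fails. */
	printf("should_we_balance() after redo: %s\n",
	       should_we_balance(&env, this_cpu) ? "true" : "false (the bug)");
	return 0;
}

Built with a plain C compiler, this prints "false (the bug)": the stale
dst_cpu makes this CPU refuse to balance, mirroring the unreasonable
should_we_balance() result described above.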
Signed-off-by: Xunlei Pang
---
 kernel/sched/fair.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ee595ef..45bbda1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6843,6 +6843,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 		.dst_cpu	= this_cpu,
 		.dst_rq		= this_rq,
 		.dst_grpmask    = sched_group_cpus(sd->groups),
+		.new_dst_cpu	= -1,
 		.idle		= idle,
 		.loop_break	= sched_nr_migrate_break,
 		.cpus		= cpus,
@@ -6977,12 +6978,19 @@ more_balance:
 		/* All tasks on this runqueue were pinned by CPU affinity */
 		if (unlikely(env.flags & LBF_ALL_PINNED)) {
 			cpumask_clear_cpu(cpu_of(busiest), cpus);
-			if (!cpumask_empty(cpus)) {
-				env.loop = 0;
-				env.loop_break = sched_nr_migrate_break;
-				goto redo;
+			if (env.new_dst_cpu != -1) {
+				env.new_dst_cpu = -1;
+				cpumask_or(cpus, cpus,
+						sched_group_cpus(sd->groups));
+				cpumask_and(cpus, cpus, cpu_active_mask);
+
+				env.dst_cpu = this_cpu;
+				env.dst_rq = this_rq;
 			}
-			goto out_all_pinned;
+			env.flags &= ~LBF_SOME_PINNED;
+			env.loop = 0;
+			env.loop_break = sched_nr_migrate_break;
+			goto redo;
 		}
 	}
 
@@ -7009,7 +7017,7 @@ more_balance:
 			raw_spin_unlock_irqrestore(&busiest->lock,
 						    flags);
 			env.flags |= LBF_ALL_PINNED;
-			goto out_one_pinned;
+			goto out_active_balanced;
 		}
 
 		/*
@@ -7058,26 +7066,23 @@ more_balance:
 out_balanced:
 	/*
 	 * We reach balance although we may have faced some affinity
-	 * constraints. Clear the imbalance flag if it was set.
+	 * constraints.
+	 *
+	 * When LBF_ALL_PINNED was not set, clear the imbalance flag
+	 * if it was set.
 	 */
-	if (sd_parent) {
+	if (sd_parent && !(env.flags & LBF_ALL_PINNED)) {
 		int *group_imbalance = &sd_parent->groups->sgc->imbalance;
 
 		if (*group_imbalance)
 			*group_imbalance = 0;
 	}
 
-out_all_pinned:
-	/*
-	 * We reach balance because all tasks are pinned at this level so
-	 * we can't migrate them. Let the imbalance flag set so parent level
-	 * can try to migrate them.
-	 */
 	schedstat_inc(sd, lb_balanced[idle]);
 
 	sd->nr_balance_failed = 0;
 
-out_one_pinned:
+out_active_balanced:
 	/* tune up the balancing interval */
 	if (((env.flags & LBF_ALL_PINNED) &&
 			sd->balance_interval < MAX_PINNED_INTERVAL) ||