From patchwork Tue May 12 14:46:41 2015
X-Patchwork-Submitter: Xunlei Pang
X-Patchwork-Id: 48395
From: Xunlei Pang <xlpang@126.com>
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Ingo Molnar, Xunlei Pang
Subject: [PATCH v3 1/4] sched/rt: Check to push the task away after its affinity was changed
Date: Tue,
 12 May 2015 22:46:41 +0800
Message-Id: <1431442004-18716-1-git-send-email-xlpang@126.com>

From: Xunlei Pang <xlpang@126.com>

An rt runqueue can end up overloaded because of task affinity, so when
the affinity of any runnable RT task is changed we should check whether
a push is warranted; otherwise the change can cause unnecessary delay in
real-time response. Unfortunately, the current global RT scheduler does
nothing about this.

For example, take a 2-CPU system with two runnable FIFO tasks of the
same rt_priority bound to CPU0; call them rt1 (running) and rt2
(runnable). CPU1 has no RT tasks. Now someone sets the affinity of rt2
to 0x3 (i.e. CPU0 and CPU1), but rt2 still cannot be scheduled until rt1
enters schedule(), which causes noticeable response latency for rt2.
This patch modifies set_cpus_allowed_rt(): if the target task is
runnable but not running, it tries to push the task away once it
becomes migratable.

The patch also solves a problem with move_queued_task() as called from
set_cpus_allowed_ptr(): when a lower-priority RT task gets migrated
because its current CPU isn't in the new affinity mask, it misses the
chance of being pushed away after move_queued_task(), because
check_preempt_curr() called by move_queued_task() doesn't set the
need-resched flag for lower-priority tasks.

Signed-off-by: Xunlei Pang <xlpang@126.com>
---
 kernel/sched/core.c     | 10 +++++++---
 kernel/sched/deadline.c |  8 +++++---
 kernel/sched/rt.c       | 29 ++++++++++++++++++++++-------
 kernel/sched/sched.h    |  3 ++-
 4 files changed, 36 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d13fc13..c995a02 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4768,11 +4768,15 @@ static struct rq *move_queued_task(struct task_struct *p, int new_cpu)
 
 void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 {
+	bool updated = false;
+
 	if (p->sched_class->set_cpus_allowed)
-		p->sched_class->set_cpus_allowed(p, new_mask);
+		updated = p->sched_class->set_cpus_allowed(p, new_mask);
 
-	cpumask_copy(&p->cpus_allowed, new_mask);
-	p->nr_cpus_allowed = cpumask_weight(new_mask);
+	if (!updated) {
+		cpumask_copy(&p->cpus_allowed, new_mask);
+		p->nr_cpus_allowed = cpumask_weight(new_mask);
+	}
 }
 
 /*
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 5e95145..3baffb2 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1574,7 +1574,7 @@ static void task_woken_dl(struct rq *rq, struct task_struct *p)
 	}
 }
 
-static void set_cpus_allowed_dl(struct task_struct *p,
+static bool set_cpus_allowed_dl(struct task_struct *p,
 				const struct cpumask *new_mask)
 {
 	struct rq *rq;
@@ -1610,7 +1610,7 @@ static void set_cpus_allowed_dl(struct task_struct *p,
 	 * it is on the rq AND it is not throttled).
 	 */
 	if (!on_dl_rq(&p->dl))
-		return;
+		return false;
 
 	weight = cpumask_weight(new_mask);
 
@@ -1619,7 +1619,7 @@ static void set_cpus_allowed_dl(struct task_struct *p,
 	 * can migrate or not.
 	 */
 	if ((p->nr_cpus_allowed > 1) == (weight > 1))
-		return;
+		return false;
 
 	/*
 	 * The process used to be able to migrate OR it can now migrate
@@ -1636,6 +1636,8 @@ static void set_cpus_allowed_dl(struct task_struct *p,
 	}
 
 	update_dl_migration(&rq->dl);
+
+	return false;
 }
 
 /* Assumes rq->lock is held */
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 8885b65..4a49c6a 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2241,7 +2241,7 @@ static void task_woken_rt(struct rq *rq, struct task_struct *p)
 		push_rt_tasks(rq);
 }
 
-static void set_cpus_allowed_rt(struct task_struct *p,
+static bool set_cpus_allowed_rt(struct task_struct *p,
 				const struct cpumask *new_mask)
 {
 	struct rq *rq;
@@ -2250,18 +2250,19 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 	BUG_ON(!rt_task(p));
 
 	if (!task_on_rq_queued(p))
-		return;
+		return false;
 
 	weight = cpumask_weight(new_mask);
 
+	rq = task_rq(p);
+
 	/*
-	 * Only update if the process changes its state from whether it
-	 * can migrate or not.
+	 * Skip updating the migration stuff if the process doesn't change
+	 * its migrate state, but still need to check if it can be pushed
+	 * away due to its new affinity.
 	 */
 	if ((p->nr_cpus_allowed > 1) == (weight > 1))
-		return;
-
-	rq = task_rq(p);
+		goto check_push;
 
 	/*
 	 * The process used to be able to migrate OR it can now migrate
@@ -2278,6 +2279,20 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 	}
 
 	update_rt_migration(&rq->rt);
+
+check_push:
+	if (weight > 1 &&
+	    !task_running(rq, p) &&
+	    !test_tsk_need_resched(rq->curr) &&
+	    !cpumask_subset(new_mask, &p->cpus_allowed)) {
+		/* Update new affinity and try to push. */
+		cpumask_copy(&p->cpus_allowed, new_mask);
+		p->nr_cpus_allowed = weight;
+		push_rt_tasks(rq);
+		return true;
+	}
+
+	return false;
 }
 
 /* Assumes rq->lock is held */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e0e1299..101b359 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1189,7 +1189,8 @@ struct sched_class {
 	void (*task_waking) (struct task_struct *task);
 	void (*task_woken) (struct rq *this_rq, struct task_struct *task);
 
-	void (*set_cpus_allowed)(struct task_struct *p,
+	/* Return true if p's affinity was updated, false otherwise. */
+	bool (*set_cpus_allowed)(struct task_struct *p,
 				 const struct cpumask *newmask);
 
 	void (*rq_online)(struct rq *rq);