From patchwork Tue Feb 3 11:55:48 2015
X-Patchwork-Submitter: Xunlei Pang
X-Patchwork-Id: 44221
From: Xunlei Pang <xlpang@126.com>
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Steven Rostedt, Xunlei Pang
Subject: [PATCH] sched/rt: Check to push the task when changing its affinity
Date: Tue, 3 Feb 2015 19:55:48 +0800
Message-Id: <1422964548-27207-1-git-send-email-xlpang@126.com>
X-Mailer: git-send-email 1.9.1

We may end up with an avoidably overloaded RT rq because of a task's
affinity, so whenever the affinity of a runnable RT task changes we
should check whether balancing needs to be triggered; otherwise the
change can cause unnecessary delays in real-time response.
Unfortunately, the current global RT scheduler triggers nothing in
this case.

For example: take a 2-CPU system with two runnable FIFO tasks of the
same rt_priority bound to CPU0; call them rt1 (currently running) and
rt2 (runnable, queued behind rt1). CPU1 has no RT tasks. Now someone
widens rt2's affinity to 0x3 (i.e. CPU0 and CPU1). Even so, rt2 still
cannot be scheduled until rt1 eventually enters schedule(), at which
point post_schedule() pushes rt2 onto CPU1. This adds noticeable,
potentially large, response latency for rt2.

So, in set_cpus_allowed_rt(), detect such cases and trigger a push
(or a reschedule of the current task) accordingly.

Signed-off-by: Xunlei Pang <xlpang@126.com>
---
If there are no objections, I would be willing to bring sched/deadline
up to date with this change as well.
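For anyone who wants to observe the latency directly, below is a
minimal userspace reproducer sketch (not part of the patch; the CPU
numbers, priorities, and sleep intervals are illustrative assumptions).
It needs root for SCHED_FIFO and at least two CPUs. On an unpatched
kernel, rt2's timestamp should appear only after rt1 is killed; with
this patch it should appear almost immediately after the affinity
change:

	#define _GNU_SOURCE
	#include <sched.h>
	#include <signal.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/wait.h>
	#include <time.h>
	#include <unistd.h>

	/* Pin the calling task to one CPU and make it SCHED_FIFO. */
	static void setup_rt(int cpu, int prio)
	{
		struct sched_param sp = { .sched_priority = prio };
		cpu_set_t set;

		CPU_ZERO(&set);
		CPU_SET(cpu, &set);
		if (sched_setaffinity(0, sizeof(set), &set) ||
		    sched_setscheduler(0, SCHED_FIFO, &sp)) {
			perror("rt setup (run as root?)");
			exit(1);
		}
	}

	static void print_now(const char *what)
	{
		struct timespec t;

		clock_gettime(CLOCK_MONOTONIC, &t);
		printf("%-40s %ld.%09ld\n", what, (long)t.tv_sec, t.tv_nsec);
	}

	int main(void)
	{
		cpu_set_t mask;
		pid_t rt1, rt2;

		/* The observer stays on CPU1, out of rt1's way. */
		CPU_ZERO(&mask);
		CPU_SET(1, &mask);
		sched_setaffinity(0, sizeof(mask), &mask);

		rt2 = fork();
		if (rt2 == 0) {
			setup_rt(0, 50);
			usleep(200 * 1000);	/* wake up after rt1 owns CPU0 */
			print_now("rt2 finally ran at");
			return 0;
		}

		rt1 = fork();
		if (rt1 == 0) {
			usleep(50 * 1000);	/* let rt2 finish its setup first */
			setup_rt(0, 50);
			for (;;)
				;		/* hog CPU0; rt2 queues behind us */
		}

		usleep(500 * 1000);	/* rt2 is now runnable but starved on CPU0 */

		CPU_ZERO(&mask);
		CPU_SET(0, &mask);
		CPU_SET(1, &mask);
		print_now("widening rt2's affinity at");
		sched_setaffinity(rt2, sizeof(mask), &mask);

		sleep(2);		/* patched: rt2 prints right away on CPU1;
					 * unpatched: only after rt1 dies below */
		kill(rt1, SIGKILL);
		waitpid(rt1, NULL, 0);
		waitpid(rt2, NULL, 0);
		return 0;
	}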
 kernel/sched/rt.c | 69 +++++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 59 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index f4d4b07..4dacb6e 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1428,7 +1428,7 @@ static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
 	return next;
 }
 
-static struct task_struct *_pick_next_task_rt(struct rq *rq)
+static struct task_struct *_pick_next_task_rt(struct rq *rq, int peek_only)
 {
 	struct sched_rt_entity *rt_se;
 	struct task_struct *p;
@@ -1441,7 +1441,8 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq)
 	} while (rt_rq);
 
 	p = rt_task_of(rt_se);
-	p->se.exec_start = rq_clock_task(rq);
+	if (!peek_only)
+		p->se.exec_start = rq_clock_task(rq);
 
 	return p;
 }
@@ -1476,7 +1477,7 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev)
 
 	put_prev_task(rq, prev);
 
-	p = _pick_next_task_rt(rq);
+	p = _pick_next_task_rt(rq, 0);
 
 	/* The running task is never eligible for pushing */
 	dequeue_pushable_task(rq, p);
@@ -1886,28 +1887,69 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 				const struct cpumask *new_mask)
 {
 	struct rq *rq;
-	int weight;
+	int old_weight, new_weight;
+	int preempt_push = 0, direct_push = 0;
 
 	BUG_ON(!rt_task(p));
 
 	if (!task_on_rq_queued(p))
 		return;
 
-	weight = cpumask_weight(new_mask);
+	old_weight = p->nr_cpus_allowed;
+	new_weight = cpumask_weight(new_mask);
+
+	rq = task_rq(p);
+
+	if (new_weight > 1 &&
+	    rt_task(rq->curr) &&
+	    !test_tsk_need_resched(rq->curr)) {
+		/*
+		 * Set the new mask information to prepare for pushing.
+		 * It's safe to do this here.
+		 */
+		cpumask_copy(&p->cpus_allowed, new_mask);
+		p->nr_cpus_allowed = new_weight;
+
+		if (task_running(rq, p) &&
+		    cpumask_test_cpu(task_cpu(p), new_mask) &&
+		    cpupri_find(&rq->rd->cpupri, p, NULL)) {
+			/*
+			 * At this point the task has most likely become
+			 * migratable due to its affinity change; let's
+			 * figure out whether we can migrate it.
+			 *
+			 * Is there any queued task with the same priority
+			 * as the current one? If so, we should resched.
+			 * NOTE: the target may be unpushable.
+			 */
+			if (p->prio == rq->rt.highest_prio.next) {
+				/* One target is in the pushable_tasks list. */
+				requeue_task_rt(rq, p, 0);
+				preempt_push = 1;
+			} else if (rq->rt.rt_nr_total > 1) {
+				struct task_struct *next;
+
+				requeue_task_rt(rq, p, 0);
+				/* peek only */
+				next = _pick_next_task_rt(rq, 1);
+				if (next != p && next->prio == p->prio)
+					preempt_push = 1;
+			}
+		} else if (!task_running(rq, p))
+			direct_push = 1;
+	}
 
 	/*
 	 * Only update if the process changes its state from whether it
 	 * can migrate or not.
 	 */
-	if ((p->nr_cpus_allowed > 1) == (weight > 1))
-		return;
-
-	rq = task_rq(p);
+	if ((old_weight > 1) == (new_weight > 1))
+		goto out;
 
 	/*
	 * The process used to be able to migrate OR it can now migrate
	 */
-	if (weight <= 1) {
+	if (new_weight <= 1) {
 		if (!task_current(rq, p))
 			dequeue_pushable_task(rq, p);
 		BUG_ON(!rq->rt.rt_nr_migratory);
@@ -1919,6 +1961,13 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 	}
 
 	update_rt_migration(&rq->rt);
+
+out:
+	if (direct_push)
+		push_rt_tasks(rq);
+
+	if (preempt_push)
+		resched_curr(rq);
 }
 
 /* Assumes rq->lock is held */
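For reviewers who want the calling context: the ->set_cpus_allowed()
class hook is invoked from do_set_cpus_allowed() in kernel/sched/core.c,
which at this point in time looks roughly like the sketch below
(paraphrased, not part of this patch). The generic code copies the new
mask right after the hook returns, which is why the early
cpumask_copy()/nr_cpus_allowed update inside set_cpus_allowed_rt()
above should be harmless: it performs the same assignment slightly
earlier, under the same rq->lock.

	/* kernel/sched/core.c (circa v3.19), roughly: */
	void do_set_cpus_allowed(struct task_struct *p,
				 const struct cpumask *new_mask)
	{
		/* Give the scheduling class first say (this patch hooks in here). */
		if (p->sched_class && p->sched_class->set_cpus_allowed)
			p->sched_class->set_cpus_allowed(p, new_mask);

		/* Generic copy; repeats what the RT hook above already did. */
		cpumask_copy(&p->cpus_allowed, new_mask);
		p->nr_cpus_allowed = cpumask_weight(new_mask);
	}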