From patchwork Mon Feb 16 09:32:24 2015
X-Patchwork-Submitter: Xunlei Pang
X-Patchwork-Id: 44694
From: Xunlei Pang
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Andrew Morton, Dan Streetman, Xunlei Pang
Subject: [PATCH v4 3/3] sched/rt: Check to push the task when changing its affinity
Date: Mon, 16 Feb 2015 17:32:24
 +0800
Message-Id: <1424079144-5194-3-git-send-email-xlpang@126.com>
In-Reply-To: <1424079144-5194-1-git-send-email-xlpang@126.com>
References: <1424079144-5194-1-git-send-email-xlpang@126.com>

From: Xunlei Pang

Changing the affinity of a runnable RT task can leave us with an extra
RT-overloaded runqueue, so whenever the affinity of a runnable RT task
changes we should check whether to trigger load balancing; otherwise the
change can cause unnecessary delays in real-time response. Unfortunately,
the current global RT scheduler triggers nothing in this case.

For example, take a 2-CPU system with two runnable FIFO tasks of the same
rt_priority bound to CPU0; call them rt1 (running) and rt2 (runnable).
CPU1 has no RT tasks. If someone now sets the affinity of rt2 to 0x3
(i.e. CPU0 and CPU1), rt2 still cannot run until rt1 enters schedule(),
which imposes a significant response latency on rt2.
So, when set_cpus_allowed_rt() detects such a case, check whether to
trigger a push.

Signed-off-by: Xunlei Pang
---
 kernel/sched/rt.c | 78 ++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 68 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 65de40e..2637e23 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1433,10 +1433,9 @@ static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
 	return next;
 }
 
-static struct task_struct *_pick_next_task_rt(struct rq *rq)
+static struct task_struct *peek_next_task_rt(struct rq *rq)
 {
 	struct sched_rt_entity *rt_se;
-	struct task_struct *p;
 	struct rt_rq *rt_rq = &rq->rt;
 
 	do {
@@ -1445,7 +1444,14 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq)
 		rt_rq = group_rt_rq(rt_se);
 	} while (rt_rq);
 
-	p = rt_task_of(rt_se);
+	return rt_task_of(rt_se);
+}
+
+static inline struct task_struct *_pick_next_task_rt(struct rq *rq)
+{
+	struct task_struct *p;
+
+	p = peek_next_task_rt(rq);
 	p->se.exec_start = rq_clock_task(rq);
 
 	return p;
@@ -1895,28 +1901,74 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 			    const struct cpumask *new_mask)
 {
 	struct rq *rq;
-	int weight;
+	int old_weight, new_weight;
+	int preempt_push = 0, direct_push = 0;
 
 	BUG_ON(!rt_task(p));
 
 	if (!task_on_rq_queued(p))
 		return;
 
-	weight = cpumask_weight(new_mask);
+	old_weight = p->nr_cpus_allowed;
+	new_weight = cpumask_weight(new_mask);
+
+	rq = task_rq(p);
+
+	if (new_weight > 1 &&
+	    rt_task(rq->curr) &&
+	    !test_tsk_need_resched(rq->curr)) {
+		/*
+		 * We own p->pi_lock and rq->lock. rq->lock might
+		 * get released when doing direct pushing, however
+		 * p->pi_lock is always held, so it's safe to assign
+		 * the new_mask and new_weight to p below.
+		 */
+		if (!task_running(rq, p)) {
+			cpumask_copy(&p->cpus_allowed, new_mask);
+			p->nr_cpus_allowed = new_weight;
+			direct_push = 1;
+		} else if (cpumask_test_cpu(task_cpu(p), new_mask)) {
+			cpumask_copy(&p->cpus_allowed, new_mask);
+			p->nr_cpus_allowed = new_weight;
+			if (!cpupri_find(&rq->rd->cpupri, p, NULL))
+				goto update;
+
+			/*
+			 * At this point, current task gets migratable most
+			 * likely due to the change of its affinity, let's
+			 * figure out if we can migrate it.
+			 *
+			 * Is there any task with the same priority as that
+			 * of current task? If found one, we should resched.
+			 * NOTE: The target may be unpushable.
+			 */
+			if (p->prio == rq->rt.highest_prio.next) {
+				/* One target just in pushable_tasks list. */
+				requeue_task_rt(rq, p, 0);
+				preempt_push = 1;
+			} else if (rq->rt.rt_nr_total > 1) {
+				struct task_struct *next;
+
+				requeue_task_rt(rq, p, 0);
+				next = peek_next_task_rt(rq);
+				if (next != p && next->prio == p->prio)
+					preempt_push = 1;
+			}
+		}
+	}
+
+update:
 	/*
 	 * Only update if the process changes its state from whether it
 	 * can migrate or not.
 	 */
-	if ((p->nr_cpus_allowed > 1) == (weight > 1))
-		return;
-
-	rq = task_rq(p);
+	if ((old_weight > 1) == (new_weight > 1))
+		goto out;
 
 	/*
 	 * The process used to be able to migrate OR it can now migrate
 	 */
-	if (weight <= 1) {
+	if (new_weight <= 1) {
 		if (!task_current(rq, p))
 			dequeue_pushable_task(rq, p);
 		BUG_ON(!rq->rt.rt_nr_migratory);
@@ -1928,6 +1980,12 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 	}
 
 	update_rt_migration(&rq->rt);
+
+out:
+	if (direct_push)
+		push_rt_tasks(rq);
+	else if (preempt_push)
+		resched_curr(rq);
 }
 
 /* Assumes rq->lock is held */