From patchwork Wed Feb 4 01:12:20 2015
X-Patchwork-Submitter: Xunlei Pang
X-Patchwork-Id: 44323
From: Xunlei Pang <xlpang@126.com>
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Xunlei Pang
Subject: [PATCH RESEND 1/2] sched/rt: Check to push the task when changing its affinity
Date: Wed, 4 Feb 2015 09:12:20 +0800
Message-Id: <1423012341-30265-1-git-send-email-xlpang@126.com>

We may end up with an rt-overloaded runqueue purely because of task
affinity, so whenever the affinity of a runnable RT task changes we
should check whether a balancing operation can be triggered; otherwise
the task may suffer unnecessary real-time response latency.
Unfortunately, the current RT global scheduler triggers nothing in
this case.

For example: take a 2-CPU system with two runnable FIFO tasks of the
same rt_priority bound to CPU0; call them rt1 (running) and rt2
(runnable). CPU1 runs no RT tasks. Now someone widens the affinity of
rt2 to 0x3 (i.e. CPU0 and CPU1), yet rt2 still cannot be scheduled
until rt1 enters schedule(), which can add significant response
latency for rt2.

So, in set_cpus_allowed_rt(), detect such cases and trigger a push
operation.
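As an illustration (not part of the original patch), a minimal
user-space sketch of the scenario might look like the following. The
helper fifo_pin_cpu0() and the priority value 50 are made up for the
example; it assumes root privileges and an otherwise idle 2-CPU system:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Become SCHED_FIFO first (while still allowed on any CPU), then pin
 * the caller to CPU0; doing it in this order avoids being starved on
 * CPU0 before the policy change takes effect. */
static void fifo_pin_cpu0(void)
{
        struct sched_param sp = { .sched_priority = 50 };
        cpu_set_t set;

        if (sched_setscheduler(0, SCHED_FIFO, &sp)) {
                perror("sched_setscheduler");
                exit(1);
        }
        CPU_ZERO(&set);
        CPU_SET(0, &set);
        if (sched_setaffinity(0, sizeof(set), &set)) {
                perror("sched_setaffinity");
                exit(1);
        }
}

int main(void)
{
        pid_t rt1, rt2;
        cpu_set_t wide;

        rt1 = fork();
        if (rt1 == 0) {                 /* rt1: hogs CPU0, never blocks */
                fifo_pin_cpu0();
                for (;;)
                        ;
        }
        sleep(1);                       /* let rt1 start running on CPU0 */

        rt2 = fork();
        if (rt2 == 0) {                 /* rt2: same prio, queued behind rt1 */
                fifo_pin_cpu0();
                printf("rt2 finally ran on CPU%d\n", sched_getcpu());
                exit(0);
        }
        sleep(1);                       /* rt2 is now runnable, not running */

        /* Widen rt2's affinity to 0x3 (CPU0 and CPU1).  With this patch,
         * set_cpus_allowed_rt() sees that rt2 is runnable, not running,
         * and now pushable, and pushes it to the idle CPU1 right away. */
        CPU_ZERO(&wide);
        CPU_SET(0, &wide);
        CPU_SET(1, &wide);
        sched_setaffinity(rt2, sizeof(wide), &wide);

        waitpid(rt2, NULL, 0);
        kill(rt1, SIGKILL);
        waitpid(rt1, NULL, 0);
        return 0;
}

On an unpatched kernel the "rt2 finally ran" message should stay stuck
behind rt1 (until rt1 blocks or RT throttling intervenes); with this
patch it should appear almost immediately after the affinity change,
via the direct_push path below.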
Signed-off-by: Xunlei Pang <xlpang@126.com>
---
 kernel/sched/rt.c | 69 +++++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 59 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index f4d4b07..4dacb6e 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1428,7 +1428,7 @@ static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
 	return next;
 }
 
-static struct task_struct *_pick_next_task_rt(struct rq *rq)
+static struct task_struct *_pick_next_task_rt(struct rq *rq, int peek_only)
 {
 	struct sched_rt_entity *rt_se;
 	struct task_struct *p;
@@ -1441,7 +1441,8 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq)
 	} while (rt_rq);
 
 	p = rt_task_of(rt_se);
-	p->se.exec_start = rq_clock_task(rq);
+	if (!peek_only)
+		p->se.exec_start = rq_clock_task(rq);
 
 	return p;
 }
@@ -1476,7 +1477,7 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev)
 
 	put_prev_task(rq, prev);
 
-	p = _pick_next_task_rt(rq);
+	p = _pick_next_task_rt(rq, 0);
 
 	/* The running task is never eligible for pushing */
 	dequeue_pushable_task(rq, p);
@@ -1886,28 +1887,69 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 				const struct cpumask *new_mask)
 {
 	struct rq *rq;
-	int weight;
+	int old_weight, new_weight;
+	int preempt_push = 0, direct_push = 0;
 
 	BUG_ON(!rt_task(p));
 
 	if (!task_on_rq_queued(p))
 		return;
 
-	weight = cpumask_weight(new_mask);
+	old_weight = p->nr_cpus_allowed;
+	new_weight = cpumask_weight(new_mask);
+
+	rq = task_rq(p);
+
+	if (new_weight > 1 &&
+	    rt_task(rq->curr) &&
+	    !test_tsk_need_resched(rq->curr)) {
+		/*
+		 * Set new mask information to prepare pushing.
+		 * It's safe to do this here.
+		 */
+		cpumask_copy(&p->cpus_allowed, new_mask);
+		p->nr_cpus_allowed = new_weight;
+
+		if (task_running(rq, p) &&
+		    cpumask_test_cpu(task_cpu(p), new_mask) &&
+		    cpupri_find(&rq->rd->cpupri, p, NULL)) {
+			/*
+			 * At this point, current task gets migratable most
+			 * likely due to the change of its affinity, let's
+			 * figure out if we can migrate it.
+			 *
+			 * Is there any task with the same priority as that
+			 * of current task? If found one, we should resched.
+			 * NOTE: The target may be unpushable.
+			 */
+			if (p->prio == rq->rt.highest_prio.next) {
+				/* One target just in pushable_tasks list. */
+				requeue_task_rt(rq, p, 0);
+				preempt_push = 1;
+			} else if (rq->rt.rt_nr_total > 1) {
+				struct task_struct *next;
+
+				requeue_task_rt(rq, p, 0);
+				/* peek only */
+				next = _pick_next_task_rt(rq, 1);
+				if (next != p && next->prio == p->prio)
+					preempt_push = 1;
+			}
+		} else if (!task_running(rq, p))
+			direct_push = 1;
+	}
 
 	/*
 	 * Only update if the process changes its state from whether it
 	 * can migrate or not.
 	 */
-	if ((p->nr_cpus_allowed > 1) == (weight > 1))
-		return;
-
-	rq = task_rq(p);
+	if ((old_weight > 1) == (new_weight > 1))
+		goto out;
 
 	/*
	 * The process used to be able to migrate OR it can now migrate
	 */
-	if (weight <= 1) {
 		if (!task_current(rq, p))
 			dequeue_pushable_task(rq, p);
 		BUG_ON(!rq->rt.rt_nr_migratory);
@@ -1919,6 +1961,13 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 	}
 
 	update_rt_migration(&rq->rt);
+
+out:
+	if (direct_push)
+		push_rt_tasks(rq);
+
+	if (preempt_push)
+		resched_curr(rq);
 }
 
 /* Assumes rq->lock is held */