From patchwork Sun Feb 8 15:51:25 2015
X-Patchwork-Submitter: "pang.xunlei"
X-Patchwork-Id: 44501
From: Xunlei Pang
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Xunlei Pang
Subject: [PATCH v3 1/2] sched/rt: Check to push the task when changing its affinity
Date: Sun, 8 Feb 2015 23:51:25 +0800
Message-Id: <1423410686-1928-1-git-send-email-pang.xunlei@linaro.org>
X-Mailer: git-send-email 2.0.4

A changed affinity can leave us with an extra RT-overloaded rq, so when
the affinity of any runnable RT task changes, we should check whether
balancing needs to be triggered; otherwise the task may see an
unnecessarily delayed real-time response. Unfortunately, the current
global RT scheduler triggers nothing in this case.

For example, take a 2-CPU system with two runnable FIFO tasks of the
same rt_priority bound to CPU0; call them rt1 (running) and rt2
(runnable). CPU1 has no RT tasks. Now someone sets the affinity of rt2
to 0x3 (i.e. CPU0 and CPU1), yet rt2 still cannot be scheduled until
rt1 enters schedule(), which can mean significant response latency for
rt2.

So, when set_cpus_allowed_rt() detects such a case, check whether a
push should be triggered.

Signed-off-by: Xunlei Pang
---
v2, v3: Refined according to Steven Rostedt's comments.
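
For reviewers who want to see the latency in practice, here is a
minimal userspace sketch of the scenario above. It is illustrative
only, not part of the patch; it assumes root, at least 2 CPUs, and RT
throttling disabled (sysctl kernel.sched_rt_runtime_us=-1), since
otherwise the throttler eventually lets rt2 run even without this
change:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
	struct sched_param sp = { .sched_priority = 50 };
	cpu_set_t cpu0, both;
	pid_t rt2;

	CPU_ZERO(&cpu0);
	CPU_SET(0, &cpu0);

	/* rt1 (this task): SCHED_FIFO prio 50, bound to CPU0. */
	if (sched_setaffinity(0, sizeof(cpu0), &cpu0) ||
	    sched_setscheduler(0, SCHED_FIFO, &sp)) {
		perror("setup (need root and >= 2 CPUs)");
		return 1;
	}

	/*
	 * rt2 inherits policy, priority and affinity across fork(),
	 * so it sits runnable on CPU0 behind rt1, which never yields.
	 */
	rt2 = fork();
	if (rt2 == 0) {
		write(1, "rt2 finally ran\n", 16);
		_exit(0);
	}

	/*
	 * Widen rt2's affinity to CPU0|CPU1. Without this patch rt2
	 * keeps waiting until rt1 blocks; with it, rt2 should be
	 * pushed to the idle CPU1 almost immediately.
	 */
	CPU_ZERO(&both);
	CPU_SET(0, &both);
	CPU_SET(1, &both);
	sched_setaffinity(rt2, sizeof(both), &both);

	for (;;)			/* rt1 keeps running */
		asm volatile("" ::: "memory");
}
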
 kernel/sched/rt.c | 78 ++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 68 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index f4d4b07..04c58b7 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1428,10 +1428,9 @@ static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
 	return next;
 }
 
-static struct task_struct *_pick_next_task_rt(struct rq *rq)
+static struct task_struct *peek_next_task_rt(struct rq *rq)
 {
 	struct sched_rt_entity *rt_se;
-	struct task_struct *p;
 	struct rt_rq *rt_rq = &rq->rt;
 
 	do {
@@ -1440,7 +1439,14 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq)
 		rt_rq = group_rt_rq(rt_se);
 	} while (rt_rq);
 
-	p = rt_task_of(rt_se);
+	return rt_task_of(rt_se);
+}
+
+static inline struct task_struct *_pick_next_task_rt(struct rq *rq)
+{
+	struct task_struct *p;
+
+	p = peek_next_task_rt(rq);
 	p->se.exec_start = rq_clock_task(rq);
 
 	return p;
@@ -1886,28 +1892,74 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 				const struct cpumask *new_mask)
 {
 	struct rq *rq;
-	int weight;
+	int old_weight, new_weight;
+	int preempt_push = 0, direct_push = 0;
 
 	BUG_ON(!rt_task(p));
 
 	if (!task_on_rq_queued(p))
 		return;
 
-	weight = cpumask_weight(new_mask);
+	old_weight = p->nr_cpus_allowed;
+	new_weight = cpumask_weight(new_mask);
+
+	rq = task_rq(p);
+
+	if (new_weight > 1 &&
+	    rt_task(rq->curr) &&
+	    !test_tsk_need_resched(rq->curr)) {
+		/*
+		 * We own p->pi_lock and rq->lock. rq->lock might
+		 * get released when doing direct pushing, however
+		 * p->pi_lock is always held, so it's safe to assign
+		 * the new_mask and new_weight to p below.
+		 */
+		if (!task_running(rq, p)) {
+			cpumask_copy(&p->cpus_allowed, new_mask);
+			p->nr_cpus_allowed = new_weight;
+			direct_push = 1;
+		} else if (cpumask_test_cpu(task_cpu(p), new_mask)) {
+			cpumask_copy(&p->cpus_allowed, new_mask);
+			p->nr_cpus_allowed = new_weight;
+			if (!cpupri_find(&rq->rd->cpupri, p, NULL))
+				goto update;
+
+			/*
+			 * At this point, current task gets migratable most
+			 * likely due to the change of its affinity, let's
+			 * figure out if we can migrate it.
+			 *
+			 * Is there any task with the same priority as that
+			 * of current task? If found one, we should resched.
+			 * NOTE: The target may be unpushable.
+			 */
+			if (p->prio == rq->rt.highest_prio.next) {
+				/* One target just in pushable_tasks list. */
+				requeue_task_rt(rq, p, 0);
+				preempt_push = 1;
+			} else if (rq->rt.rt_nr_total > 1) {
+				struct task_struct *next;
+
+				requeue_task_rt(rq, p, 0);
+				next = peek_next_task_rt(rq);
+				if (next != p && next->prio == p->prio)
+					preempt_push = 1;
+			}
+		}
+	}
+
+update:
 	/*
 	 * Only update if the process changes its state from whether it
 	 * can migrate or not.
 	 */
-	if ((p->nr_cpus_allowed > 1) == (weight > 1))
-		return;
-
-	rq = task_rq(p);
+	if ((old_weight > 1) == (new_weight > 1))
+		goto out;
 
 	/*
 	 * The process used to be able to migrate OR it can now migrate
 	 */
-	if (weight <= 1) {
+	if (new_weight <= 1) {
 		if (!task_current(rq, p))
 			dequeue_pushable_task(rq, p);
 		BUG_ON(!rq->rt.rt_nr_migratory);
@@ -1919,6 +1971,12 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 	}
 
 	update_rt_migration(&rq->rt);
+
+out:
+	if (direct_push)
+		push_rt_tasks(rq);
+	else if (preempt_push)
+		resched_curr(rq);
 }
 
 /* Assumes rq->lock is held */
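
For readers skimming the diff, the new set_cpus_allowed_rt() flow
boils down to the sketch below. This is a simplified restatement of
the hunk above, not separate code; locking details and the migratory
accounting are elided:

/*
 * Condensed restatement of the logic added above.
 * Called with p->pi_lock and rq->lock held.
 */
if (new_weight > 1 && rt_task(rq->curr) &&
    !test_tsk_need_resched(rq->curr)) {
	if (!task_running(rq, p)) {
		/* p is queued but not running: adopt new_mask now
		 * and push p directly to another CPU (direct_push). */
	} else if (cpumask_test_cpu(task_cpu(p), new_mask)) {
		/* p is the running task and stays eligible on this
		 * CPU: if cpupri_find() sees a CPU that could take p
		 * and an equal-priority task is queued behind it,
		 * requeue p and set preempt_push so the reschedule
		 * lets the push logic migrate p. */
	}
}
/* After the existing migratory-weight bookkeeping: */
if (direct_push)
	push_rt_tasks(rq);
else if (preempt_push)
	resched_curr(rq);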