From patchwork Thu Jan 29 15:53:20 2015
X-Patchwork-Submitter: Xunlei Pang
X-Patchwork-Id: 43950
From: Xunlei Pang
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Xunlei Pang
Subject: [PATCH v2 4/4] sched/rt: Consider deadline tasks in cpupri_find()
Date: Thu, 29 Jan 2015 23:53:20 +0800
Message-Id: <1422546800-2935-4-git-send-email-xlpang@126.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1422546800-2935-1-git-send-email-xlpang@126.com>
References: <1422546800-2935-1-git-send-email-xlpang@126.com>

Currently, global RT scheduling doesn't take deadline tasks into account,
which can cause problems. Consider the following case:

On a 3 CPU system, CPU0 has one running deadline task, CPU1 has one running
low priority RT task (or is idle), and CPU2 has one running high priority RT
task. When another mid priority RT task is woken on CPU2, it gets pushed to
CPU0 (which also disturbs the deadline task on CPU0), although it would be
more reasonable to put it on CPU1.

This patch eliminates the issue by filtering out CPUs that have runnable
deadline tasks, using cpudl->free_cpus in cpupri_find().

NOTE: We want to make the most of the percpu local_cpu_mask to save an extra
mask allocation, so we now always pass a non-NULL lowest_mask to
cpupri_find().
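To make the resulting selection order easier to follow, here is a small
user-space sketch of the logic cpupri_find() ends up with after this patch.
It is illustrative only: plain unsigned long bitmasks stand in for
struct cpumask, the helper name find_lowest_cpus() is made up (not a kernel
API), and main() simply replays the 3 CPU example above.

	#include <stdio.h>

	#define NR_CPUS 3

	typedef unsigned long cpumask_t;	/* bit n set => CPU n is in the mask */

	/*
	 * pri_to_cpu[pri]: CPUs whose current task runs at priority level 'pri'
	 *                  (lower level = less important, better push target).
	 * free_cpus:       CPUs with no queued deadline task (cpudl->free_cpus).
	 * allowed:         the woken task's affinity (p->cpus_allowed).
	 *
	 * Walk the levels below the task's own priority and return the first
	 * non-empty intersection, mirroring the two cpumask_and() calls in the
	 * patched cpupri_find(). Returns 0 if no suitable CPU exists.
	 */
	static cpumask_t find_lowest_cpus(const cpumask_t *pri_to_cpu, int nr_levels,
					  int task_pri, cpumask_t free_cpus,
					  cpumask_t allowed)
	{
		for (int pri = 0; pri < task_pri && pri < nr_levels; pri++) {
			cpumask_t lowest = pri_to_cpu[pri] & allowed & free_cpus;

			if (lowest)
				return lowest;
		}
		return 0;
	}

	int main(void)
	{
		/*
		 * Toy levels for the example: 0 = idle/low prio, 1 = mid prio RT,
		 * 2 = high prio RT. CPU0 runs a deadline task, so it is absent from
		 * free_cpus; CPU1 runs low prio work; CPU2 runs the high prio RT task.
		 */
		cpumask_t pri_to_cpu[3] = { 1UL << 0 | 1UL << 1, 0, 1UL << 2 };
		cpumask_t free_cpus = 1UL << 1 | 1UL << 2;	/* no DL task on CPU1/CPU2 */
		cpumask_t allowed = (1UL << NR_CPUS) - 1;	/* task can run anywhere */

		/* The woken task is mid prio (level 1), so only level 0 is eligible. */
		cpumask_t lowest = find_lowest_cpus(pri_to_cpu, 3, 1, free_cpus, allowed);

		printf("candidate mask: 0x%lx\n", lowest);	/* 0x2 -> CPU1, not CPU0 */
		return 0;
	}

Without the free_cpus intersection, level 0 would also offer CPU0, which is
exactly the bad push target described above.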
Signed-off-by: Xunlei Pang
---
 kernel/sched/core.c   |  3 ++-
 kernel/sched/cpupri.c | 27 +++++++++------------------
 kernel/sched/cpupri.h |  3 ++-
 kernel/sched/rt.c     |  9 +++++----
 4 files changed, 18 insertions(+), 24 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ade2958..d9e1db8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5650,8 +5650,9 @@ static int init_rootdomain(struct root_domain *rd)
 	if (cpudl_init(&rd->cpudl) != 0)
 		goto free_dlo_mask;
 
-	if (cpupri_init(&rd->cpupri) != 0)
+	if (cpupri_init(&rd->cpupri, &rd->cpudl) != 0)
 		goto free_rto_mask;
+
 	return 0;
 
 free_rto_mask:
diff --git a/kernel/sched/cpupri.c b/kernel/sched/cpupri.c
index 981fcd7..34f5514 100644
--- a/kernel/sched/cpupri.c
+++ b/kernel/sched/cpupri.c
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include "cpudeadline.h"
 #include "cpupri.h"
 
 /* Convert between a 140 based task->prio, and our 102 based cpupri */
@@ -54,7 +55,7 @@ static int convert_prio(int prio)
  * cpupri_find - find the best (lowest-pri) CPU in the system
  * @cp: The cpupri context
  * @p: The task
- * @lowest_mask: A mask to fill in with selected CPUs (or NULL)
+ * @lowest_mask: A mask to fill in with selected CPUs (not NULL)
  *
  * Note: This function returns the recommended CPUs as calculated during the
  * current invocation. By the time the call returns, the CPUs may have in
@@ -103,24 +104,11 @@ int cpupri_find(struct cpupri *cp, struct task_struct *p,
 		if (skip)
 			continue;
 
-		if (cpumask_any_and(&p->cpus_allowed, vec->mask) >= nr_cpu_ids)
+		cpumask_and(lowest_mask, &p->cpus_allowed, vec->mask);
+		cpumask_and(lowest_mask, lowest_mask, cp->cpudl->free_cpus);
+		if (cpumask_any(lowest_mask) >= nr_cpu_ids)
 			continue;
 
-		if (lowest_mask) {
-			cpumask_and(lowest_mask, &p->cpus_allowed, vec->mask);
-
-			/*
-			 * We have to ensure that we have at least one bit
-			 * still set in the array, since the map could have
-			 * been concurrently emptied between the first and
-			 * second reads of vec->mask. If we hit this
-			 * condition, simply act as though we never hit this
-			 * priority level and continue on.
-			 */
-			if (cpumask_any(lowest_mask) >= nr_cpu_ids)
-				continue;
-		}
-
 		return 1;
 	}
 
@@ -202,10 +190,11 @@ void cpupri_set(struct cpupri *cp, int cpu, int newpri)
 /**
  * cpupri_init - initialize the cpupri structure
  * @cp: The cpupri context
+ * @cpudl: The cpudl context of the same root domain
  *
 * Return: -ENOMEM on memory allocation failure.
 */
-int cpupri_init(struct cpupri *cp)
+int cpupri_init(struct cpupri *cp, struct cpudl *cpudl)
 {
 	int i;
 
@@ -226,6 +215,8 @@ int cpupri_init(struct cpupri *cp)
 	for_each_possible_cpu(i)
 		cp->cpu_to_pri[i] = CPUPRI_INVALID;
 
+	cp->cpudl = cpudl;
+
 	return 0;
 
 cleanup:
diff --git a/kernel/sched/cpupri.h b/kernel/sched/cpupri.h
index 63cbb9c..6fee80b 100644
--- a/kernel/sched/cpupri.h
+++ b/kernel/sched/cpupri.h
@@ -18,13 +18,14 @@ struct cpupri_vec {
 struct cpupri {
 	struct cpupri_vec pri_to_cpu[CPUPRI_NR_PRIORITIES];
 	int *cpu_to_pri;
+	struct cpudl *cpudl;
 };
 
 #ifdef CONFIG_SMP
 int  cpupri_find(struct cpupri *cp, struct task_struct *p,
 		 struct cpumask *lowest_mask);
 void cpupri_set(struct cpupri *cp, int cpu, int pri);
-int cpupri_init(struct cpupri *cp);
+int cpupri_init(struct cpupri *cp, struct cpudl *cpudl);
 void cpupri_cleanup(struct cpupri *cp);
 #endif
 
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 6725e3c..d28cfa4 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1349,14 +1349,17 @@ out:
 	return cpu;
 }
 
+static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask);
 static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
 {
+	struct cpumask *lowest_mask = this_cpu_cpumask_var_ptr(local_cpu_mask);
+
 	/*
 	 * Current can't be migrated, useless to reschedule,
 	 * let's hope p can move out.
 	 */
 	if (rq->curr->nr_cpus_allowed == 1 ||
-	    !cpupri_find(&rq->rd->cpupri, rq->curr, NULL))
+	    !cpupri_find(&rq->rd->cpupri, rq->curr, lowest_mask))
 		return;
 
 	/*
@@ -1364,7 +1367,7 @@ static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
 	 * see if it is pushed or pulled somewhere else.
 	 */
 	if (p->nr_cpus_allowed != 1
-	    && cpupri_find(&rq->rd->cpupri, p, NULL))
+	    && cpupri_find(&rq->rd->cpupri, p, lowest_mask))
 		return;
 
 	/*
@@ -1526,8 +1529,6 @@ static struct task_struct *pick_highest_pushable_task(struct rq *rq, int cpu)
 	return NULL;
 }
 
-static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask);
-
 static int find_lowest_rq(struct task_struct *task)
 {
 	struct sched_domain *sd;
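
A side note on the NOTE in the changelog: the only reason for always passing a
real lowest_mask is to reuse the preallocated per-CPU scratch mask instead of
allocating one (or special-casing NULL) on every wakeup. A rough user-space
analogue of that pattern, with a thread-local scratch buffer standing in for
the per-CPU local_cpu_mask (names are illustrative, not kernel API):

	#include <stdio.h>

	typedef unsigned long cpumask_t;

	/* One scratch mask per thread, analogous to DEFINE_PER_CPU(local_cpu_mask). */
	static _Thread_local cpumask_t scratch_mask;

	/* Hypothetical helper: always writes its result into *lowest_mask. */
	static int pick_lowest(cpumask_t candidates, cpumask_t *lowest_mask)
	{
		*lowest_mask = candidates & ~(candidates - 1);	/* lowest set bit */
		return *lowest_mask != 0;
	}

	int main(void)
	{
		/* Callers hand in the preallocated scratch mask instead of NULL. */
		if (pick_lowest(0x6, &scratch_mask))
			printf("picked CPU mask 0x%lx\n", scratch_mask);	/* 0x2 */
		return 0;
	}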