From patchwork Tue May 5 11:56:07 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xunlei Pang
X-Patchwork-Id: 48026
From: Xunlei Pang
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Ingo Molnar, Xunlei Pang
Subject: [PATCH v2 1/2] sched/rt: Check to push task away when its affinity is changed
Date: Tue, 5 May
2015 19:56:07 +0800
Message-Id: <1430826968-10251-1-git-send-email-xlpang@126.com>
X-Mailer: git-send-email 1.9.1
Sender: linux-kernel-owner@vger.kernel.org
Precedence: list
X-Mailing-List: linux-kernel@vger.kernel.org

From: Xunlei Pang

We may end up with an extra overloaded RT rq because of task affinities, so when the affinity of any runnable RT task is changed, we should check whether balancing needs to be triggered; otherwise the change causes unnecessary real-time response delays. Unfortunately, the current global RT scheduler does nothing about this.

For example: take a 2-CPU system with two runnable FIFO tasks of the same rt_priority bound to CPU0; call them rt1 (running) and rt2 (runnable). CPU1 has no RT tasks. Now someone widens the affinity of rt2 to 0x3 (i.e. CPU0 and CPU1), yet rt2 still cannot run until rt1 enters schedule(). This causes real response latency for rt2.

This patch introduces a new sched_class::post_set_cpus_allowed() for RT, called after set_cpus_allowed_rt().
In this new function, if the task is runnable but not running, we try to push it away once it becomes migratable.

The patch also solves a problem with move_queued_task() as called from set_cpus_allowed_ptr(): when a lower-priority RT task gets migrated because its current CPU isn't in the new affinity mask, it misses the chance of being pushed after move_queued_task(), because check_preempt_curr() called by move_queued_task() doesn't set the "need resched" flag for lower-priority tasks.

Parts-suggested-by: Steven Rostedt
Signed-off-by: Xunlei Pang
---
v1->v2:
Removed cpupri_find(), as it will probably be executed in push_rt_tasks().

 kernel/sched/core.c  |  3 +++
 kernel/sched/rt.c    | 15 +++++++++++++++
 kernel/sched/sched.h |  1 +
 3 files changed, 19 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d13fc13..64a1603 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4773,6 +4773,9 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 	cpumask_copy(&p->cpus_allowed, new_mask);
 	p->nr_cpus_allowed = cpumask_weight(new_mask);
+
+	if (p->sched_class->post_set_cpus_allowed)
+		p->sched_class->post_set_cpus_allowed(p);
 }
 
 /*
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 8885b65..4176f33 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2280,6 +2280,20 @@ static void set_cpus_allowed_rt(struct task_struct *p,
 	update_rt_migration(&rq->rt);
 }
 
+static void post_set_cpus_allowed_rt(struct task_struct *p)
+{
+	struct rq *rq;
+
+	if (!task_on_rq_queued(p))
+		return;
+
+	rq = task_rq(p);
+	if (!task_running(rq, p) &&
+	    p->nr_cpus_allowed > 1 &&
+	    !test_tsk_need_resched(rq->curr))
+		push_rt_tasks(rq);
+}
+
 /* Assumes rq->lock is held */
 static void rq_online_rt(struct rq *rq)
 {
@@ -2494,6 +2508,7 @@ const struct sched_class rt_sched_class = {
 	.select_task_rq		= select_task_rq_rt,
 
 	.set_cpus_allowed	= set_cpus_allowed_rt,
+	.post_set_cpus_allowed	= post_set_cpus_allowed_rt,
 	.rq_online		= rq_online_rt,
 	.rq_offline		= rq_offline_rt,
 	.post_schedule		= post_schedule_rt,
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e0e1299..6f90645 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1191,6 +1191,7 @@ struct sched_class {
 	void (*set_cpus_allowed)(struct task_struct *p,
 				 const struct cpumask *newmask);
+	void (*post_set_cpus_allowed)(struct task_struct *p);
 
 	void (*rq_online)(struct rq *rq);
 	void (*rq_offline)(struct rq *rq);