From patchwork Mon Apr 27 06:48:35 2015
X-Patchwork-Submitter: Xunlei Pang
X-Patchwork-Id: 47596
From: Xunlei Pang <xlpang@126.com>
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Ingo Molnar, Xunlei Pang
Subject: [RFC PATCH RESEND 1/4] sched/rt: Modify check_preempt_equal_prio() for multiple tasks queued
 at the same priority
Date: Mon, 27 Apr 2015 14:48:35 +0800
Message-Id: <1430117318-2080-2-git-send-email-xlpang@126.com>
In-Reply-To: <1430117318-2080-1-git-send-email-xlpang@126.com>
References: <1430117318-2080-1-git-send-email-xlpang@126.com>

From: Xunlei Pang <xlpang@126.com>

In check_preempt_equal_prio(), when p is queued there may be other
tasks already queued at the same priority in the run queue, so we
should peek the front-most of them for the preemption decision, not
p itself.

This patch modifies the check accordingly and moves the preemption
work into a new function, check_preempt_equal_prio_common(), to make
the logic clearer.
Signed-off-by: Xunlei Pang <xlpang@126.com>
---
 kernel/sched/rt.c | 70 ++++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 54 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 575da76..0c0f4df 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1366,33 +1366,66 @@ out:
 	return cpu;
 }
 
-static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
+static struct task_struct *peek_next_task_rt(struct rq *rq);
+
+static void check_preempt_equal_prio_common(struct rq *rq)
 {
+	struct task_struct *curr = rq->curr;
+	struct task_struct *next;
+
+	/* Current can't be migrated, useless to reschedule */
+	if (curr->nr_cpus_allowed == 1 ||
+	    !cpupri_find(&rq->rd->cpupri, curr, NULL))
+		return;
+
 	/*
-	 * Current can't be migrated, useless to reschedule,
-	 * let's hope p can move out.
+	 * Can we find any task with the same priority as
+	 * curr? To accomplish this, firstly requeue curr
+	 * to the tail, then peek next, finally put curr
+	 * back to the head if a different task was peeked.
 	 */
-	if (rq->curr->nr_cpus_allowed == 1 ||
-	    !cpupri_find(&rq->rd->cpupri, rq->curr, NULL))
+	requeue_task_rt(rq, curr, 0);
+	next = peek_next_task_rt(rq);
+	if (next == curr)
+		return;
+
+	requeue_task_rt(rq, curr, 1);
+
+	if (next->prio != curr->prio)
 		return;
 
 	/*
-	 * p is migratable, so let's not schedule it and
-	 * see if it is pushed or pulled somewhere else.
+	 * Got the right "next" queued with the same priority
+	 * as current. If next is migratable, don't schedule
+	 * it as it will be pushed or pulled somewhere else.
 	 */
-	if (p->nr_cpus_allowed != 1
-	    && cpupri_find(&rq->rd->cpupri, p, NULL))
+	if (next->nr_cpus_allowed != 1 &&
+	    cpupri_find(&rq->rd->cpupri, next, NULL))
 		return;
 
 	/*
 	 * There appears to be other cpus that can accept
-	 * current and none to run 'p', so lets reschedule
-	 * to try and push current away:
+	 * current and none to run next, so lets reschedule
+	 * to try and push current away.
 	 */
-	requeue_task_rt(rq, p, 1);
+	requeue_task_rt(rq, next, 1);
 	resched_curr(rq);
 }
 
+static inline
+void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
+{
+	/*
+	 * p is migratable, so let's not schedule it and
+	 * see if it is pushed or pulled somewhere else.
+	 */
+	if (p->nr_cpus_allowed != 1 &&
+	    cpupri_find(&rq->rd->cpupri, p, NULL))
+		return;
+
+	check_preempt_equal_prio_common(rq);
+}
+
 #endif /* CONFIG_SMP */
 
 /*
@@ -1440,10 +1473,9 @@ static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
 	return next;
 }
 
-static struct task_struct *_pick_next_task_rt(struct rq *rq)
+static struct task_struct *peek_next_task_rt(struct rq *rq)
 {
 	struct sched_rt_entity *rt_se;
-	struct task_struct *p;
 	struct rt_rq *rt_rq = &rq->rt;
 
 	do {
@@ -1452,9 +1484,15 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq)
 		rt_rq = group_rt_rq(rt_se);
 	} while (rt_rq);
 
-	p = rt_task_of(rt_se);
-	p->se.exec_start = rq_clock_task(rq);
+	return rt_task_of(rt_se);
+}
 
+static inline struct task_struct *_pick_next_task_rt(struct rq *rq)
+{
+	struct task_struct *p;
+
+	p = peek_next_task_rt(rq);
+	p->se.exec_start = rq_clock_task(rq);
 	return p;
 }