From patchwork Mon Apr 27 06:48:38 2015
X-Patchwork-Submitter: Xunlei Pang
X-Patchwork-Id: 47598
From: Xunlei Pang <xlpang@126.com>
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Ingo Molnar, Xunlei Pang
Subject: [RFC PATCH RESEND 4/4] sched/rt: Requeue p back if the preemption
 initiated by check_preempt_equal_prio_common() failed
Date: Mon, 27 Apr 2015 14:48:38 +0800
Message-Id: <1430117318-2080-5-git-send-email-xlpang@126.com>
In-Reply-To: <1430117318-2080-1-git-send-email-xlpang@126.com>
References: <1430117318-2080-1-git-send-email-xlpang@126.com>
X-Mailing-List: linux-kernel@vger.kernel.org

In check_preempt_equal_prio_common(), "next" is requeued ahead in the
run queue so that it can try to push current away. But if the system
state changes before the actual push, the push may fail. In that case
p ends up becoming the new current and starts running, while the
previous current is queued back waiting in the same run queue. This
breaks FIFO ordering.

This patch adds a flag named RT_PREEMPT_PUSHAWAY to
task_struct::rt_preempt. It is set in check_preempt_equal_prio_common()
and cleared once current has actually been moved away (it will be
dequeued).
So we can test this flag in p's post_schedule_rt() to judge whether the
push actually happened. If the push failed, requeue the previous current
back to the head of its run queue and trigger a reschedule.

Signed-off-by: Xunlei Pang
---
 kernel/sched/rt.c | 87 ++++++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 79 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 7439121..d1cecd6 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -258,6 +258,8 @@ int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
 #ifdef CONFIG_SMP
 
 #define RT_PREEMPT_QUEUEAHEAD	1UL
+#define RT_PREEMPT_PUSHAWAY	2UL
+#define RT_PREEMPT_MASK		3UL
 
 /*
  * p(current) was preempted, and to be put ahead of
@@ -273,6 +275,30 @@ static inline void clear_rt_preempted(struct task_struct *p)
 	p->rt_preempt = 0;
 }
 
+static inline struct task_struct *rt_preempting_target(struct task_struct *p)
+{
+	return (struct task_struct *)(p->rt_preempt & ~RT_PREEMPT_MASK);
+}
+
+/*
+ * p(new current) is preempting and pushing previous current away.
+ */
+static inline bool rt_preempting(struct task_struct *p)
+{
+	if ((p->rt_preempt & RT_PREEMPT_PUSHAWAY) && rt_preempting_target(p))
+		return true;
+
+	return false;
+}
+
+static inline void clear_rt_preempting(struct task_struct *p)
+{
+	if (rt_preempting(p))
+		put_task_struct(rt_preempting_target(p));
+
+	p->rt_preempt = 0;
+}
+
 void resched_curr_preempted_rt(struct rq *rq)
 {
 	if (rt_task(rq->curr))
@@ -375,13 +401,17 @@ static inline int has_pushable_tasks(struct rq *rq)
 	return !plist_head_empty(&rq->rt.pushable_tasks);
 }
 
-static inline void set_post_schedule(struct rq *rq)
+static inline void set_post_schedule(struct rq *rq, struct task_struct *p)
 {
-	/*
-	 * We detect this state here so that we can avoid taking the RQ
-	 * lock again later if there is no need to push
-	 */
-	rq->post_schedule = has_pushable_tasks(rq);
+	if (rt_preempting(p))
+		/* Forced post schedule */
+		rq->post_schedule = 1;
+	else
+		/*
+		 * We detect this state here so that we can avoid taking
+		 * the RQ lock again later if there is no need to push
+		 */
+		rq->post_schedule = has_pushable_tasks(rq);
 }
 
 static void
@@ -434,6 +464,15 @@ static inline void clear_rt_preempted(struct task_struct *p)
 {
 }
 
+static inline bool rt_preempting(struct task_struct *p)
+{
+	return false;
+}
+
+static inline void clear_rt_preempting(struct task_struct *p)
+{
+}
+
 static inline void resched_curr_preempted_rt(struct rq *rq)
 {
 	resched_curr(rq);
@@ -472,7 +511,7 @@ static inline int pull_rt_task(struct rq *this_rq)
 	return 0;
 }
 
-static inline void set_post_schedule(struct rq *rq)
+static inline void set_post_schedule(struct rq *rq, struct task_struct *p)
 {
 }
 #endif /* CONFIG_SMP */
@@ -1330,6 +1369,7 @@ static void dequeue_task_rt(struct rq *rq, struct task_struct *p, int flags)
 	dequeue_rt_entity(rt_se);
 
 	dequeue_pushable_task(rq, p);
+	clear_rt_preempting(p);
 }
 
 /*
@@ -1468,6 +1508,11 @@ static void check_preempt_equal_prio_common(struct rq *rq)
 	 * to try and push current away.
 	 */
 	requeue_task_rt(rq, next, 1);
+
+	get_task_struct(curr);
+	curr->rt_preempt |= RT_PREEMPT_PUSHAWAY;
+	next->rt_preempt = (unsigned long)curr;
+	next->rt_preempt |= RT_PREEMPT_PUSHAWAY;
 	resched_curr_preempted_rt(rq);
 }
@@ -1590,7 +1635,7 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev)
 	/* The running task is never eligible for pushing */
 	dequeue_pushable_task(rq, p);
 
-	set_post_schedule(rq);
+	set_post_schedule(rq, p);
 
 	return p;
 }
@@ -2151,6 +2196,32 @@ skip:
 static void post_schedule_rt(struct rq *rq)
 {
 	push_rt_tasks(rq);
+
+	if (rt_preempting(current)) {
+		struct task_struct *target;
+
+		target = rt_preempting_target(current);
+		current->rt_preempt = 0;
+		if (!(target->rt_preempt & RT_PREEMPT_PUSHAWAY))
+			goto out;
+
+		/*
+		 * target still has RT_PREEMPT_PUSHAWAY set, which
+		 * means it wasn't pushed away successfully if it
+		 * is still on this rq; restore the former status
+		 * of current and target in that case.
+		 */
+		if (!task_on_rq_queued(target) ||
+		    task_cpu(target) != rq->cpu)
+			goto out;
+
+		/* target is previous current, requeue it back ahead. */
+		requeue_task_rt(rq, target, 1);
+		/* Let's preempt current, loop back to __schedule(). */
+		resched_curr_preempted_rt(rq);
+out:
+		put_task_struct(target);
+	}
 }
 
 /*