From patchwork Sun Apr 26 17:10:53 2015
X-Patchwork-Submitter: Xunlei Pang
X-Patchwork-Id: 47587
From: Xunlei Pang <xlpang@126.com>
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Steven Rostedt, Juri Lelli, Ingo Molnar, Xunlei Pang
Subject: [RFC PATCH 1/6] sched/rt: Provide new check_preempt_equal_prio_common()
Date: Mon, 27 Apr 2015 01:10:53 +0800
Message-Id: <1430068258-1960-1-git-send-email-xlpang@126.com>
X-Mailer: git-send-email 2.1.0

When p is queued, there may be other tasks already queued at the same
priority in the run queue, so we should peek the frontmost of them when
doing equal-priority preemption.

This patch modifies check_preempt_equal_prio() and introduces a new
check_preempt_equal_prio_common() to carry the common preemption logic.
Follow-up patches in this series add further callers of the new
interface.
Signed-off-by: Xunlei Pang <xlpang@126.com>
---
 kernel/sched/rt.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 54 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 575da76..6b40555 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1366,33 +1366,66 @@ out:
 	return cpu;
 }
 
-static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
+static struct task_struct *peek_next_task_rt(struct rq *rq);
+
+static void check_preempt_equal_prio_common(struct rq *rq)
 {
+	struct task_struct *curr = rq->curr;
+	struct task_struct *next;
+
+	/* Current can't be migrated, useless to reschedule */
+	if (curr->nr_cpus_allowed == 1 ||
+	    !cpupri_find(&rq->rd->cpupri, curr, NULL))
+		return;
+
 	/*
-	 * Current can't be migrated, useless to reschedule,
-	 * let's hope p can move out.
+	 * Can we find any task with the same priority as
+	 * curr? To accomplish this, firstly requeue curr
+	 * to the tail, then peek next, finally put curr
+	 * back to the head if a different task was peeked.
 	 */
-	if (rq->curr->nr_cpus_allowed == 1 ||
-	    !cpupri_find(&rq->rd->cpupri, rq->curr, NULL))
+	requeue_task_rt(rq, curr, 0);
+	next = peek_next_task_rt(rq);
+	if (next == curr)
+		return;
+
+	requeue_task_rt(rq, curr, 1);
+
+	if (next->prio != curr->prio)
 		return;
 
 	/*
-	 * p is migratable, so let's not schedule it and
-	 * see if it is pushed or pulled somewhere else.
+	 * Got the right next queued with the same priority
+	 * as current. If next is migratable, don't schedule
+	 * it as it will be pushed or pulled somewhere else.
 	 */
-	if (p->nr_cpus_allowed != 1
-	    && cpupri_find(&rq->rd->cpupri, p, NULL))
+	if (next->nr_cpus_allowed != 1 &&
+	    cpupri_find(&rq->rd->cpupri, next, NULL))
 		return;
 
 	/*
 	 * There appears to be other cpus that can accept
-	 * current and none to run 'p', so lets reschedule
-	 * to try and push current away:
+	 * current and none to run next, so lets reschedule
+	 * to try and push current away.
 	 */
-	requeue_task_rt(rq, p, 1);
+	requeue_task_rt(rq, next, 1);
 	resched_curr(rq);
 }
 
+static inline
+void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
+{
+	/*
+	 * p is migratable, so let's not schedule it and
+	 * see if it is pushed or pulled somewhere else.
+	 */
+	if (p->nr_cpus_allowed != 1 &&
+	    cpupri_find(&rq->rd->cpupri, p, NULL))
+		return;
+
+	check_preempt_equal_prio_common(rq);
+}
+
 #endif /* CONFIG_SMP */
 
 /*
@@ -1440,10 +1473,9 @@ static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
 	return next;
 }
 
-static struct task_struct *_pick_next_task_rt(struct rq *rq)
+static struct task_struct *peek_next_task_rt(struct rq *rq)
 {
 	struct sched_rt_entity *rt_se;
-	struct task_struct *p;
 	struct rt_rq *rt_rq = &rq->rt;
 
 	do {
@@ -1452,9 +1484,15 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq)
 		rt_rq = group_rt_rq(rt_se);
 	} while (rt_rq);
 
-	p = rt_task_of(rt_se);
-	p->se.exec_start = rq_clock_task(rq);
+	return rt_task_of(rt_se);
+}
 
+static inline struct task_struct *_pick_next_task_rt(struct rq *rq)
+{
+	struct task_struct *p;
+
+	p = peek_next_task_rt(rq);
+	p->se.exec_start = rq_clock_task(rq);
 	return p;
 }
 