Message ID: 1415010620-6176-1-git-send-email-pang.xunlei@linaro.org
State: New
On Mon, Nov 03, 2014 at 06:30:18PM +0800, pang.xunlei wrote:
>  kernel/sched/idle_task.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
>
> diff --git a/kernel/sched/idle_task.c b/kernel/sched/idle_task.c
> index 67ad4e7..3dc372e 100644
> --- a/kernel/sched/idle_task.c
> +++ b/kernel/sched/idle_task.c
> @@ -26,6 +26,15 @@ static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int fl
>  static struct task_struct *
>  pick_next_task_idle(struct rq *rq, struct task_struct *prev)
>  {
> +#ifdef CONFIG_SMP
> +	struct cpupri *cp = &rq->rd->cpupri;
> +	int currpri = cp->cpu_to_pri[rq->cpu];
> +
> +	BUG_ON(currpri != CPUPRI_NORMAL);
> +	/* Set CPUPRI_IDLE bitmap for this cpu */
> +	cpupri_set(cp, rq->cpu, MAX_PRIO);
> +#endif
> +

This should really be idle_enter_rt() and implemented in kernel/sched/rt.c.

>  	put_prev_task(rq, prev);
>
>  	schedstat_inc(rq, sched_goidle);
> @@ -47,6 +56,18 @@ dequeue_task_idle(struct rq *rq, struct task_struct *p, int flags)
>
>  static void put_prev_task_idle(struct rq *rq, struct task_struct *prev)
>  {
> +#ifdef CONFIG_SMP
> +	struct cpupri *cp = &rq->rd->cpupri;
> +	int currpri = cp->cpu_to_pri[rq->cpu];
> +
> +	/*
> +	 * Set CPUPRI_NORMAL bitmap for this cpu when exiting from idle.
> +	 * RT tasks may be queued beforehand, so the judgement is needed.
> +	 */
> +	if (currpri == CPUPRI_IDLE)
> +		cpupri_set(cp, rq->cpu, MAX_RT_PRIO);
> +#endif

idle_exit_rt() and the same.

>  	idle_exit_fair(rq);
>  	rq_last_tick_reset(rq);
>  }

Also, try and keep the deadline bits in sync with the rt semantics.
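For reference, the helpers asked for in the review could look roughly like the following if placed in kernel/sched/rt.c. The names come from the review comment and the bodies simply mirror the two hunks of the patch below, so this is only an illustrative sketch, not the merged implementation:

#ifdef CONFIG_SMP
void idle_enter_rt(struct rq *this_rq)
{
	struct cpupri *cp = &this_rq->rd->cpupri;
	int currpri = cp->cpu_to_pri[this_rq->cpu];

	BUG_ON(currpri != CPUPRI_NORMAL);
	/* Register this cpu in the CPUPRI_IDLE vector */
	cpupri_set(cp, this_rq->cpu, MAX_PRIO);
}

void idle_exit_rt(struct rq *this_rq)
{
	struct cpupri *cp = &this_rq->rd->cpupri;
	int currpri = cp->cpu_to_pri[this_rq->cpu];

	/*
	 * RT tasks may have been queued in the meantime, raising the
	 * cpu's priority past CPUPRI_IDLE; only drop back to NORMAL
	 * if the cpu is still marked idle.
	 */
	if (currpri == CPUPRI_IDLE)
		cpupri_set(cp, this_rq->cpu, MAX_RT_PRIO);
}
#endif /* CONFIG_SMP */

pick_next_task_idle() and put_prev_task_idle() would then only call idle_enter_rt(rq) and idle_exit_rt(rq), keeping the cpupri details out of kernel/sched/idle_task.c.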
diff --git a/kernel/sched/idle_task.c b/kernel/sched/idle_task.c
index 67ad4e7..3dc372e 100644
--- a/kernel/sched/idle_task.c
+++ b/kernel/sched/idle_task.c
@@ -26,6 +26,15 @@ static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int fl
 static struct task_struct *
 pick_next_task_idle(struct rq *rq, struct task_struct *prev)
 {
+#ifdef CONFIG_SMP
+	struct cpupri *cp = &rq->rd->cpupri;
+	int currpri = cp->cpu_to_pri[rq->cpu];
+
+	BUG_ON(currpri != CPUPRI_NORMAL);
+	/* Set CPUPRI_IDLE bitmap for this cpu */
+	cpupri_set(cp, rq->cpu, MAX_PRIO);
+#endif
+
 	put_prev_task(rq, prev);
 
 	schedstat_inc(rq, sched_goidle);
@@ -47,6 +56,18 @@ dequeue_task_idle(struct rq *rq, struct task_struct *p, int flags)
 
 static void put_prev_task_idle(struct rq *rq, struct task_struct *prev)
 {
+#ifdef CONFIG_SMP
+	struct cpupri *cp = &rq->rd->cpupri;
+	int currpri = cp->cpu_to_pri[rq->cpu];
+
+	/*
+	 * Set CPUPRI_NORMAL bitmap for this cpu when exiting from idle.
+	 * RT tasks may be queued beforehand, so the judgement is needed.
+	 */
+	if (currpri == CPUPRI_IDLE)
+		cpupri_set(cp, rq->cpu, MAX_RT_PRIO);
+#endif
+
 	idle_exit_fair(rq);
 	rq_last_tick_reset(rq);
 }
When a runqueue runs out of RT tasks, it may be left with non-RT tasks or with no tasks at all (idle). Currently, RT balancing treats the two cases the same and only manipulates cpupri.pri_to_cpu[CPUPRI_NORMAL], which can cause problems.

For instance, on a 4-CPU system: non-RT task1 is running on cpu0, RT task2 is running on cpu3, and cpu1/cpu2 are both idle. When RT task3 (usually CPU-intensive) is woken up or created on cpu3, it will be placed on cpu0 (see find_lowest_rq()), starving task1 until the CFS load balancer moves task1 to another cpu, or indefinitely if task1 is bound to cpu0. So it would be preferable to place task3 on the idle cpu1 or cpu2, even though doing so may break their energy-saving idle state.

This patch tackles the problem by updating pri_to_cpu[CPUPRI_IDLE] of cpupri as the idle task is picked and put back, so that when pushing or selecting RT tasks through find_lowest_rq(), an idle cpu is tried first.

Signed-off-by: pang.xunlei <pang.xunlei@linaro.org>
---
 kernel/sched/idle_task.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
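Updating pri_to_cpu[CPUPRI_IDLE] is enough because cpupri_find(), which find_lowest_rq() uses to build its candidate mask, scans the priority vectors from the lowest index upwards, so the CPUPRI_IDLE vector is consulted before CPUPRI_NORMAL. A simplified sketch of that scan, abridged from kernel/sched/cpupri.c (memory barriers and sanity checks omitted):

/*
 * Simplified sketch of the scan done by cpupri_find(): walk the
 * priority vectors from the lowest level upwards, so CPUs registered
 * under CPUPRI_IDLE (index 0) are reported before CPUs that are merely
 * running non-RT work (CPUPRI_NORMAL, index 1).
 */
int cpupri_find(struct cpupri *cp, struct task_struct *p,
		struct cpumask *lowest_mask)
{
	int idx, task_pri = convert_prio(p->prio);

	for (idx = 0; idx < task_pri; idx++) {
		struct cpupri_vec *vec = &cp->pri_to_cpu[idx];

		/* Skip levels with no CPUs registered */
		if (!atomic_read(&vec->count))
			continue;

		if (lowest_mask) {
			cpumask_and(lowest_mask, &p->cpus_allowed, vec->mask);
			if (cpumask_empty(lowest_mask))
				continue;
		}
		return 1;
	}

	return 0;
}

In the example above, cpu1 and cpu2 would then be found at the CPUPRI_IDLE level and task3 would be placed on one of them, leaving task1 undisturbed on cpu0.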