Message ID: 1389949444-14821-2-git-send-email-daniel.lezcano@linaro.org
State:      Superseded
On 01/17/2014 02:33 PM, Peter Zijlstra wrote:
> On Fri, Jan 17, 2014 at 10:04:02AM +0100, Daniel Lezcano wrote:
>> The scheduler's main function 'schedule()' checks whether there are no
>> more tasks on the runqueue. It then checks in idle_balance() whether a
>> task should be pulled into the current runqueue, assuming the cpu will
>> otherwise go idle.
>>
>> But idle_balance() releases rq->lock in order to look up the sched
>> domains and takes the lock again right after. That opens a window during
>> which another cpu may put a task on our runqueue, so we won't go idle
>> even though we have set idle_stamp, thinking we would.
>>
>> This patch closes the window by checking, after taking the lock again,
>> whether the runqueue has been modified without a task having been
>> pulled, so we won't go idle right after in the __schedule() function.
>
> Did you actually observe this or was it found by reading the code?

When I tried to achieve what patch 4/4 does, I was hitting the BUG() (see
the comment in patch 4/4). So I ran some tests and verified that we enter
idle_balance() with nr_running == 0 but exit it with nr_running > 0 and
pulled_task == 0.
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d601df3..502c51c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6417,6 +6417,13 @@ void idle_balance(struct rq *this_rq)
 
 	raw_spin_lock(&this_rq->lock);
 
+	/*
+	 * While browsing the domains, we released the rq lock.
+	 * A task could have been enqueued in the meantime.
+	 */
+	if (this_rq->nr_running && !pulled_task)
+		return;
+
 	if (pulled_task || time_after(jiffies, this_rq->next_balance)) {
 		/*
 		 * We are going idle. next_balance may be set based on
The scheduler's main function 'schedule()' checks whether there are no more
tasks on the runqueue. It then checks in idle_balance() whether a task should
be pulled into the current runqueue, assuming the cpu will otherwise go idle.

But idle_balance() releases rq->lock in order to look up the sched domains
and takes the lock again right after. That opens a window during which
another cpu may put a task on our runqueue, so we won't go idle even though
we have set idle_stamp, thinking we would.

This patch closes the window by checking, after taking the lock again,
whether the runqueue has been modified without a task having been pulled, so
we won't go idle right after in the __schedule() function.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
---
 kernel/sched/fair.c | 7 +++++++
 1 file changed, 7 insertions(+)