Message ID: 7a61e955abdcbb1dfa9fe493f11a5ec53a11ddd3.1370948150.git.viresh.kumar@linaro.org
State: Accepted
On Tue, Jun 11, 2013 at 04:32:44PM +0530, Viresh Kumar wrote:
> In build_sched_groups() we don't need to call get_group() for cpus which are
> already covered in previous iterations. So, call get_group() after checking if
> cpu is covered or not.

Aside from not needing it; doing it would mark the group used and eventually
leak it since we wouldn't connect it and not find it again to free it, right?

So effectively this fixes a memory leak?

> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
>  kernel/sched/core.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 4abc743..27842f5 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5344,12 +5344,12 @@ build_sched_groups(struct sched_domain *sd, int cpu)
>
>  	for_each_cpu(i, span) {
>  		struct sched_group *sg;
> -		int group = get_group(i, sdd, &sg);
> -		int j;
> +		int group, j;
>
>  		if (cpumask_test_cpu(i, covered))
>  			continue;
>
> +		group = get_group(i, sdd, &sg);
>  		cpumask_clear(sched_group_cpus(sg));
>  		sg->sgp->power = 0;
>  		cpumask_setall(sched_group_mask(sg));
> --
> 1.7.12.rc2.18.g61b472e
On 18 June 2013 15:33, Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, Jun 11, 2013 at 04:32:44PM +0530, Viresh Kumar wrote:
>> In build_sched_groups() we don't need to call get_group() for cpus which are
>> already covered in previous iterations. So, call get_group() after checking if
>> cpu is covered or not.
>
> Aside from not needing it; doing it would mark the group used and
> eventually leak it since we wouldn't connect it and not find it again to
> free it, right?
>
> So effectively this fixes a memory leak?

Exactly. To be precise: In cases where sg->cpumask contained more
than one cpu (For any topology level), this patch helps freeing sg's
memory for all cpus leaving the group leader.

May I update the logs again?
On Tue, Jun 18, 2013 at 03:45:15PM +0530, Viresh Kumar wrote:
> On 18 June 2013 15:33, Peter Zijlstra <peterz@infradead.org> wrote:
> > On Tue, Jun 11, 2013 at 04:32:44PM +0530, Viresh Kumar wrote:
> >> In build_sched_groups() we don't need to call get_group() for cpus which are
> >> already covered in previous iterations. So, call get_group() after checking if
> >> cpu is covered or not.
> >
> > Aside from not needing it; doing it would mark the group used and
> > eventually leak it since we wouldn't connect it and not find it again to
> > free it, right?
> >
> > So effectively this fixes a memory leak?
>
> Exactly. To be precise: In cases where sg->cpumask contained more
> than one cpu (For any topology level), this patch helps freeing sg's
> memory for all cpus leaving the group leader.
>
> May I update the logs again?

Sure. Just send a new patch and I'll substitute.
On 18 June 2013 20:49, Peter Zijlstra <peterz@infradead.org> wrote:
> Sure. Just send a new patch and I'll substitute.

This is the new log (patch is attached):

sched: Fix memory leakage in build_sched_groups()

In build_sched_groups() we don't need to call get_group() for cpus which are
already covered in previous iterations. Calling get_group() would mark the
group used and eventually leak it since we wouldn't connect it and not find
it again to free it.

This will happen only in cases where sg->cpumask contained more than one cpu
(For any topology level). This patch would free sg's memory for all cpus
leaving the group leader as the group isn't marked used now.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 kernel/sched/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
On Tue, Jun 18, 2013 at 08:59:39PM +0530, Viresh Kumar wrote:
> On 18 June 2013 20:49, Peter Zijlstra <peterz@infradead.org> wrote:
> > Sure. Just send a new patch and I'll substitute.
>
> This is the new log (patch is attached):
>
> sched: Fix memory leakage in build_sched_groups()
>
> In build_sched_groups() we don't need to call get_group() for cpus which are
> already covered in previous iterations. Calling get_group() would mark the group
> used and eventually leak it since we wouldn't connect it and not find it again
> to free it.
>
> This will happen only in cases where sg->cpumask contained more than one cpu
> (For any topology level). This patch would free sg's memory for all cpus leaving
> the group leader as the group isn't marked used now.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>

Thanks!
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4abc743..27842f5 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5344,12 +5344,12 @@ build_sched_groups(struct sched_domain *sd, int cpu)

 	for_each_cpu(i, span) {
 		struct sched_group *sg;
-		int group = get_group(i, sdd, &sg);
-		int j;
+		int group, j;

 		if (cpumask_test_cpu(i, covered))
 			continue;

+		group = get_group(i, sdd, &sg);
 		cpumask_clear(sched_group_cpus(sg));
 		sg->sgp->power = 0;
 		cpumask_setall(sched_group_mask(sg));
In build_sched_groups() we don't need to call get_group() for cpus which are
already covered in previous iterations. So, call get_group() after checking if
cpu is covered or not.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 kernel/sched/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)