
[V2,3/3] cpufreq: initialize governor for a new policy under policy->rwsem

Message ID 2f6efe0c9058f64212ee220a1b386e04ba415686.1393904428.git.viresh.kumar@linaro.org
State New

Commit Message

Viresh Kumar March 4, 2014, 3:44 a.m. UTC
policy->rwsem is used to lock access to all parts of code modifying struct
cpufreq_policy, but it was not taken for a new policy created from
__cpufreq_add_dev().

Consequently, if cpufreq_update_policy() is called repeatedly on one CPU while
another CPU is taken offline and back online, crashes like the following can occur:

Unable to handle kernel NULL pointer dereference at virtual address 00000020
pgd = c0003000
[00000020] *pgd=80000000004003, *pmd=00000000
Internal error: Oops: 206 [#1] PREEMPT SMP ARM

PC is at __cpufreq_governor+0x10/0x1ac
LR is at cpufreq_update_policy+0x114/0x150

---[ end trace f23a8defea6cd706 ]---
Kernel panic - not syncing: Fatal exception
CPU0: stopping
CPU: 0 PID: 7136 Comm: mpdecision Tainted: G      D W    3.10.0-gd727407-00074-g979ede8 #396

[<c0afe180>] (notifier_call_chain+0x40/0x68) from [<c02a23ac>] (__blocking_notifier_call_chain+0x40/0x58)
[<c02a23ac>] (__blocking_notifier_call_chain+0x40/0x58) from [<c02a23d8>] (blocking_notifier_call_chain+0x14/0x1c)
[<c02a23d8>] (blocking_notifier_call_chain+0x14/0x1c) from [<c0803c68>] (cpufreq_set_policy+0xd4/0x2b8)
[<c0803c68>] (cpufreq_set_policy+0xd4/0x2b8) from [<c0803e7c>] (cpufreq_init_policy+0x30/0x98)
[<c0803e7c>] (cpufreq_init_policy+0x30/0x98) from [<c0805a18>] (__cpufreq_add_dev.isra.17+0x4dc/0x7a4)
[<c0805a18>] (__cpufreq_add_dev.isra.17+0x4dc/0x7a4) from [<c0805d38>] (cpufreq_cpu_callback+0x58/0x84)
[<c0805d38>] (cpufreq_cpu_callback+0x58/0x84) from [<c0afe180>] (notifier_call_chain+0x40/0x68)
[<c0afe180>] (notifier_call_chain+0x40/0x68) from [<c02812dc>] (__cpu_notify+0x28/0x44)
[<c02812dc>] (__cpu_notify+0x28/0x44) from [<c0aeed90>] (_cpu_up+0xf4/0x1dc)
[<c0aeed90>] (_cpu_up+0xf4/0x1dc) from [<c0aeeed4>] (cpu_up+0x5c/0x78)
[<c0aeeed4>] (cpu_up+0x5c/0x78) from [<c0aec808>] (store_online+0x44/0x74)
[<c0aec808>] (store_online+0x44/0x74) from [<c03a40f4>] (sysfs_write_file+0x108/0x14c)
[<c03a40f4>] (sysfs_write_file+0x108/0x14c) from [<c03517d4>] (vfs_write+0xd0/0x180)
[<c03517d4>] (vfs_write+0xd0/0x180) from [<c0351ca8>] (SyS_write+0x38/0x68)
[<c0351ca8>] (SyS_write+0x38/0x68) from [<c0205de0>] (ret_fast_syscall+0x0/0x30)

Fix this by taking the lock at the appropriate places in __cpufreq_add_dev() as well.

Reported-by: Saravana Kannan <skannan@codeaurora.org>
Suggested-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
V1->V2: No change

 drivers/cpufreq/cpufreq.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

Rafael J. Wysocki March 6, 2014, 1:04 a.m. UTC | #1
On Tuesday, March 04, 2014 11:44:01 AM Viresh Kumar wrote:
> policy->rwsem is used to lock access to all parts of code modifying struct
> cpufreq_policy but wasn't used on a new policy created from __cpufreq_add_dev().
> [...]

I've rebased this one on top of 3.14-rc5 and queued it up for 3.14-rc6.

Please check the bleeding-edge branch for the result.

Rafael J. Wysocki March 6, 2014, 1:06 a.m. UTC | #2
On Thursday, March 06, 2014 02:04:39 AM Rafael J. Wysocki wrote:
> On Tuesday, March 04, 2014 11:44:01 AM Viresh Kumar wrote:
> > [...]
> 
> I've rebased this one on top of 3.14-rc5 and queued it up for 3.14-rc6.
> 
> Please check the bleeding-edge branch for the result.

Actually, I think I'll queue up [2-3/3] for 3.14-rc6 instead.

Saravana Kannan March 6, 2014, 1:10 a.m. UTC | #3
On 03/05/2014 05:06 PM, Rafael J. Wysocki wrote:
> On Thursday, March 06, 2014 02:04:39 AM Rafael J. Wysocki wrote:
>> On Tuesday, March 04, 2014 11:44:01 AM Viresh Kumar wrote:
>>> [...]
>>
>> I've rebased this one on top of 3.14-rc5 and queued it up for 3.14-rc6.
>>
>> Please check the bleeding-edge branch for the result.
>
> Actually, I think I'll queue up [2-3/3] for 3.14-rc6 instead.
>

Pretty close to having this tested and reported back. So, if you can 
wait, that would be better. Should probably see an email by Fri evening PST.

-Saravana
Rafael J. Wysocki March 6, 2014, 1:27 a.m. UTC | #4
On Wednesday, March 05, 2014 05:10:01 PM Saravana Kannan wrote:
> On 03/05/2014 05:06 PM, Rafael J. Wysocki wrote:
> > On Thursday, March 06, 2014 02:04:39 AM Rafael J. Wysocki wrote:
> >> On Tuesday, March 04, 2014 11:44:01 AM Viresh Kumar wrote:
> >>> [...]
> >>
> >> I've rebased this one on top of 3.14-rc5 and queued it up for 3.14-rc6.
> >>
> >> Please check the bleeding-edge branch for the result.
> >
> > Actually, I think I'll queue up [2-3/3] for 3.14-rc6 instead.
> >
> 
> Pretty close to having this tested and reported back. So, if you can 
> wait, that would be better. Should probably see an email by Fri evening PST.

OK

It won't hurt if they stay in bleeding-edge/linux-next till then, though.

Thanks!
Viresh Kumar March 6, 2014, 2:24 a.m. UTC | #5
On 6 March 2014 09:04, Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> I've rebased this one on top of 3.14-rc5 and queued it up for 3.14-rc6.
>
> Please check the bleeding-edge branch for the result.

Yeah, it looks fine. And I assume that you are planning to take 1/3 in 3.15?
Or going to drop it?
Rafael J. Wysocki March 6, 2014, 12:34 p.m. UTC | #6
On Thursday, March 06, 2014 10:24:38 AM Viresh Kumar wrote:
> On 6 March 2014 09:04, Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> > I've rebased this one on top of 3.14-rc5 and queued it up for 3.14-rc6.
> >
> > Please check the bleeding-edge branch for the result.
> 
> Yeah, it looks fine. And I assume that you are planning to take 1/3 in 3.15?
> Or going to drop it?

I'm going to queue it up for 3.15.

Thanks!

Patch

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 3c6f9a5..e2a1e67 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -1128,6 +1128,7 @@  static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
 		policy->user_policy.max = policy->max;
 	}
 
+	down_write(&policy->rwsem);
 	write_lock_irqsave(&cpufreq_driver_lock, flags);
 	for_each_cpu(j, policy->cpus)
 		per_cpu(cpufreq_cpu_data, j) = policy;
@@ -1202,6 +1203,7 @@  static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
 		policy->user_policy.policy = policy->policy;
 		policy->user_policy.governor = policy->governor;
 	}
+	up_write(&policy->rwsem);
 
 	kobject_uevent(&policy->kobj, KOBJ_ADD);
 	up_read(&cpufreq_rwsem);