[v4,2/3] sched/fair: Take thermal pressure into account while estimating energy

Message ID 20210614191128.22735-1-lukasz.luba@arm.com
State Accepted
Commit 489f16459e0008c7a5c4c5af34bd80898aa82c2d
Series: Add allowed CPU capacity knowledge to EAS

Commit Message

Lukasz Luba June 14, 2021, 7:11 p.m. UTC
Energy Aware Scheduling (EAS) needs to be able to predict the frequency
requests made by the SchedUtil governor to properly estimate energy used
in the future. It has to take into account CPU utilization and forecast
the Performance Domain (PD) frequency. There is a corner case when the max
allowed frequency might be reduced due to thermal constraints. SchedUtil is
aware of that reduced frequency, so it should also be taken into account in
EAS estimations.

SchedUtil, as a CPUFreq governor, knows the maximum allowed frequency of
a CPU, thanks to cpufreq_driver_resolve_freq() and internal clamping
to 'policy::max'. SchedUtil is responsible to respect that upper limit
while setting the frequency through CPUFreq drivers. This effective
frequency is stored internally in 'sugov_policy::next_freq' and EAS has
to predict that value.
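
For readers less familiar with that SchedUtil path, here is a minimal sketch of
the idea (not the actual kernel/sched/cpufreq_schedutil.c code; only
map_util_freq() and cpufreq_driver_resolve_freq() are real kernel helpers, the
wrapper name is made up and the utilization headroom is omitted):

	/*
	 * Illustration only: how SchedUtil's frequency request ends up
	 * clamped to policy->max (which thermal may have lowered).
	 */
	static unsigned int sketch_next_freq(struct cpufreq_policy *policy,
					     unsigned long util, unsigned long max)
	{
		/* Raw request derived from utilization (headroom omitted). */
		unsigned long freq = map_util_freq(util, policy->cpuinfo.max_freq, max);

		/*
		 * cpufreq_driver_resolve_freq() clamps the request into
		 * [policy->min, policy->max] and maps it to a real OPP, so the
		 * value stored in 'sugov_policy::next_freq' already respects
		 * the thermal cap.
		 */
		return cpufreq_driver_resolve_freq(policy, freq);
	}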

In the existing code the raw value of arch_scale_cpu_capacity() is used
for clamping the CPU utilization returned by effective_cpu_util().
This patch fixes the issue of an overly large single-CPU utilization value
by clamping it to the allowed CPU capacity. The allowed CPU capacity is the
CPU capacity reduced by the raw thermal pressure value.

Thanks to the knowledge of the allowed CPU capacity, we no longer get an
overly large value for a single CPU's utilization, which is then added to
the utilization sum. The utilization sum is used as a source of information
for estimating the whole PD energy. To avoid wrong energy estimation in EAS
(due to a capped frequency), make sure that the calculation of the
utilization sum is aware of the allowed CPU capacity.
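
A hypothetical numeric example of the clamping (the values below are made up;
only the helpers are the ones actually used by the patch):

	unsigned long cpu_cap  = 1024;	/* arch_scale_cpu_capacity(cpu) */
	unsigned long th_press = 300;	/* arch_scale_thermal_pressure(cpu), freq capped */
	unsigned long _cpu_cap = cpu_cap - th_press;	/* allowed capacity: 724 */
	unsigned long cpu_util = 850;	/* effective_cpu_util(..., ENERGY_UTIL, NULL) */

	/* Without the patch 850 is added; with it only min(850, 724) = 724. */
	sum_util += min(cpu_util, _cpu_cap);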

This thermal pressure might be visible in scenarios where the CPUs are not
heavily loaded, but some other component (like a GPU) has drastically
reduced the available power budget and increased the SoC temperature. Thus,
EAS is still used for task placement and the CPUs are not over-utilized.

Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 kernel/sched/fair.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

Comments

Dietmar Eggemann June 15, 2021, 3:31 p.m. UTC | #1
On 14/06/2021 21:11, Lukasz Luba wrote:
> Energy Aware Scheduling (EAS) needs to be able to predict the frequency
> requests made by the SchedUtil governor to properly estimate energy used
> in the future. It has to take into account CPUs utilization and forecast
> Performance Domain (PD) frequency. There is a corner case when the max
> allowed frequency might be reduced due to thermal. SchedUtil is aware of
> that reduced frequency, so it should be taken into account also in EAS
> estimations.

It's important to highlight that this will only fix this issue between
schedutil and EAS when it's due to `thermal pressure` (today only via
CPU cooling). There are other places which could restrict policy->max
via freq_qos_update_request() and EAS will be unaware of it.

> SchedUtil, as a CPUFreq governor, knows the maximum allowed frequency of
> a CPU, thanks to cpufreq_driver_resolve_freq() and internal clamping
> to 'policy::max'. SchedUtil is responsible to respect that upper limit
> while setting the frequency through CPUFreq drivers. This effective
> frequency is stored internally in 'sugov_policy::next_freq' and EAS has
> to predict that value.
> 
> In the existing code the raw value of arch_scale_cpu_capacity() is used
> for clamping the returned CPU utilization from effective_cpu_util().
> This patch fixes issue with too big single CPU utilization, by introducing
> clamping to the allowed CPU capacity. The allowed CPU capacity is a CPU
> capacity reduced by thermal pressure raw value.
> 
> Thanks to knowledge about allowed CPU capacity, we don't get too big value
> for a single CPU utilization, which is then added to the util sum. The
> util sum is used as a source of information for estimating whole PD energy.
> To avoid wrong energy estimation in EAS (due to capped frequency), make
> sure that the calculation of util sum is aware of allowed CPU capacity.
> 
> This thermal pressure might be visible in scenarios where the CPUs are not
> heavily loaded, but some other component (like GPU) drastically reduced
> available power budget and increased the SoC temperature. Thus, we still
> use EAS for task placement and CPUs are not over-utilized.

IMHO, this means that this is catered for the IPA governor then. I'm not
sure if this would be beneficial when another thermal governor is used?

The mechanical side of the code would allow for such benefits, I just
don't know if their CPU cooling device + thermal zone setups would cater
for this?

> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
> ---
>  kernel/sched/fair.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 161b92aa1c79..3634e077051d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6527,8 +6527,11 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
>  	struct cpumask *pd_mask = perf_domain_span(pd);
>  	unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
>  	unsigned long max_util = 0, sum_util = 0;
> +	unsigned long _cpu_cap = cpu_cap;
>  	int cpu;
>  
> +	_cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));
> +

Maybe shorter?

        struct cpumask *pd_mask = perf_domain_span(pd);
-       unsigned long cpu_cap =
arch_scale_cpu_capacity(cpumask_first(pd_mask));
+       int cpu = cpumask_first(pd_mask);
+       unsigned long cpu_cap = arch_scale_cpu_capacity(cpu);
+       unsigned long _cpu_cap = cpu_cap - arch_scale_thermal_pressure(cpu);
        unsigned long max_util = 0, sum_util = 0;
-       unsigned long _cpu_cap = cpu_cap;
-       int cpu;
-
-       _cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));

>  	/*
>  	 * The capacity state of CPUs of the current rd can be driven by CPUs
>  	 * of another rd if they belong to the same pd. So, account for the
> @@ -6564,8 +6567,10 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
>  		 * is already enough to scale the EM reported power
>  		 * consumption at the (eventually clamped) cpu_capacity.
>  		 */
> -		sum_util += effective_cpu_util(cpu, util_running, cpu_cap,
> -					       ENERGY_UTIL, NULL);
> +		cpu_util = effective_cpu_util(cpu, util_running, cpu_cap,
> +					      ENERGY_UTIL, NULL);
> +
> +		sum_util += min(cpu_util, _cpu_cap);
>  
>  		/*
>  		 * Performance domain frequency: utilization clamping
> @@ -6576,7 +6581,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
>  		 */
>  		cpu_util = effective_cpu_util(cpu, util_freq, cpu_cap,
>  					      FREQUENCY_UTIL, tsk);
> -		max_util = max(max_util, cpu_util);
> +		max_util = max(max_util, min(cpu_util, _cpu_cap));
>  	}
>  
>  	return em_cpu_energy(pd->em_pd, max_util, sum_util);

There is IPA specific code in cpufreq_set_cur_state() ->
get_state_freq() which accesses the EM:

    ...
    return cpufreq_cdev->em->table[idx].frequency;
    ...

Has it been discussed that the `per-PD max (allowed) CPU capacity` (1)
could be stored in the EM from there so that code like the EAS wakeup
code (compute_energy()) could retrieve this information from the EM?
And there wouldn't be any need to pass (1) into the EM (like now via
em_cpu_energy()).
This would be signalling within the EM compared to external signalling
via `CPU cooling -> thermal pressure <- EAS wakeup -> EM`.
Lukasz Luba June 15, 2021, 4:09 p.m. UTC | #2
On 6/15/21 4:31 PM, Dietmar Eggemann wrote:
> On 14/06/2021 21:11, Lukasz Luba wrote:
>> Energy Aware Scheduling (EAS) needs to be able to predict the frequency
>> requests made by the SchedUtil governor to properly estimate energy used
>> in the future. It has to take into account CPUs utilization and forecast
>> Performance Domain (PD) frequency. There is a corner case when the max
>> allowed frequency might be reduced due to thermal. SchedUtil is aware of
>> that reduced frequency, so it should be taken into account also in EAS
>> estimations.
> 
> It's important to highlight that this will only fix this issue between
> schedutil and EAS when it's due to `thermal pressure` (today only via
> CPU cooling). There are other places which could restrict policy->max
> via freq_qos_update_request() and EAS will be unaware of it.

True, but for this I have some other plans.

> 
>> SchedUtil, as a CPUFreq governor, knows the maximum allowed frequency of
>> a CPU, thanks to cpufreq_driver_resolve_freq() and internal clamping
>> to 'policy::max'. SchedUtil is responsible to respect that upper limit
>> while setting the frequency through CPUFreq drivers. This effective
>> frequency is stored internally in 'sugov_policy::next_freq' and EAS has
>> to predict that value.
>>
>> In the existing code the raw value of arch_scale_cpu_capacity() is used
>> for clamping the returned CPU utilization from effective_cpu_util().
>> This patch fixes issue with too big single CPU utilization, by introducing
>> clamping to the allowed CPU capacity. The allowed CPU capacity is a CPU
>> capacity reduced by thermal pressure raw value.
>>
>> Thanks to knowledge about allowed CPU capacity, we don't get too big value
>> for a single CPU utilization, which is then added to the util sum. The
>> util sum is used as a source of information for estimating whole PD energy.
>> To avoid wrong energy estimation in EAS (due to capped frequency), make
>> sure that the calculation of util sum is aware of allowed CPU capacity.
>>
>> This thermal pressure might be visible in scenarios where the CPUs are not
>> heavily loaded, but some other component (like GPU) drastically reduced
>> available power budget and increased the SoC temperature. Thus, we still
>> use EAS for task placement and CPUs are not over-utilized.
> 
> IMHO, this means that this is catered for the IPA governor then. I'm not
> sure if this would be beneficial when another thermal governor is used?

Yes, it will be: cpufreq_set_cur_state() is called by the exported
thermal function:
thermal_cdev_update()
   __thermal_cdev_update()
     thermal_cdev_set_cur_state()
       cdev->ops->set_cur_state(cdev, target)

So it can be called not only by IPA. All governors call it, because
that's the default mechanism.
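
(For reference, a simplified sketch of what that default mechanism does on the
CPU-cooling side, based on the v5.13-era drivers/thermal/cpufreq_cooling.c;
abbreviated and reconstructed from memory, so check the source for the exact
code and error handling:)

	static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
					 unsigned long state)
	{
		struct cpufreq_cooling_device *cpufreq_cdev = cdev->devdata;
		struct cpumask *cpus = cpufreq_cdev->policy->related_cpus;
		unsigned long max_capacity, capacity;
		unsigned int frequency;

		frequency = get_state_freq(cpufreq_cdev, state);

		/* Cap policy->max at the cooling-state frequency via freq QoS. */
		if (freq_qos_update_request(&cpufreq_cdev->qos_req, frequency) >= 0) {
			/* Express the cap as lost capacity and publish it. */
			max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));
			capacity = frequency * max_capacity;
			capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;

			arch_set_thermal_pressure(cpus, max_capacity - capacity);
		}

		return 0;
	}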

> 
> The mechanical side of the code would allow for such benefits, I just
> don't know if their CPU cooling device + thermal zone setups would cater
> for this?

Yes, it's possible. Even for custom vendor governors (modified clones
of IPA)

> 
>> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
>> Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
>> ---
>>   kernel/sched/fair.c | 11 ++++++++---
>>   1 file changed, 8 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 161b92aa1c79..3634e077051d 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -6527,8 +6527,11 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
>>   	struct cpumask *pd_mask = perf_domain_span(pd);
>>   	unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
>>   	unsigned long max_util = 0, sum_util = 0;
>> +	unsigned long _cpu_cap = cpu_cap;
>>   	int cpu;
>>   
>> +	_cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));
>> +
> 
> Maybe shorter?
> 
>          struct cpumask *pd_mask = perf_domain_span(pd);
> -       unsigned long cpu_cap =
> arch_scale_cpu_capacity(cpumask_first(pd_mask));
> +       int cpu = cpumask_first(pd_mask);
> +       unsigned long cpu_cap = arch_scale_cpu_capacity(cpu);
> +       unsigned long _cpu_cap = cpu_cap - arch_scale_thermal_pressure(cpu);
>          unsigned long max_util = 0, sum_util = 0;
> -       unsigned long _cpu_cap = cpu_cap;
> -       int cpu;
> -
> -       _cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));

Could be, but still, the definitions should be sorted from longest on
top, to shortest at the bottom. I wanted to avoid modifying too many
lines with this simple patch.

> 
>>   	/*
>>   	 * The capacity state of CPUs of the current rd can be driven by CPUs
>>   	 * of another rd if they belong to the same pd. So, account for the
>> @@ -6564,8 +6567,10 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
>>   		 * is already enough to scale the EM reported power
>>   		 * consumption at the (eventually clamped) cpu_capacity.
>>   		 */
>> -		sum_util += effective_cpu_util(cpu, util_running, cpu_cap,
>> -					       ENERGY_UTIL, NULL);
>> +		cpu_util = effective_cpu_util(cpu, util_running, cpu_cap,
>> +					      ENERGY_UTIL, NULL);
>> +
>> +		sum_util += min(cpu_util, _cpu_cap);
>>   
>>   		/*
>>   		 * Performance domain frequency: utilization clamping
>> @@ -6576,7 +6581,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
>>   		 */
>>   		cpu_util = effective_cpu_util(cpu, util_freq, cpu_cap,
>>   					      FREQUENCY_UTIL, tsk);
>> -		max_util = max(max_util, cpu_util);
>> +		max_util = max(max_util, min(cpu_util, _cpu_cap));
>>   	}
>>   
>>   	return em_cpu_energy(pd->em_pd, max_util, sum_util);
> 
> There is IPA specific code in cpufreq_set_cur_state() ->
> get_state_freq() which accesses the EM:
> 
>      ...
>      return cpufreq_cdev->em->table[idx].frequency;
>      ...
> 
> Has it been discussed that the `per-PD max (allowed) CPU capacity` (1)
> could be stored in the EM from there so that code like the EAS wakeup
> code (compute_energy()) could retrieve this information from the EM?

No, we haven't thought about this approach in these patch sets.
The EM structure given to the cpufreq_cooling device and stored in:
cpufreq_cdev->em should not be modified. There are a few places which
receive the EM, but they all should not touch it. For those clients
it's a read-only data structure.

> And there wouldn't be any need to pass (1) into the EM (like now via
> em_cpu_energy()).
> This would be signalling within the EM compared to external signalling
> via `CPU cooling -> thermal pressure <- EAS wakeup -> EM`.
> 

I see what you mean, but this might cause some issues in the design
(per-cpu scmi cpu perf control). Let's use this EM pointer gently ;)
Dietmar Eggemann June 16, 2021, 5:24 p.m. UTC | #3
On 15/06/2021 18:09, Lukasz Luba wrote:
>
> On 6/15/21 4:31 PM, Dietmar Eggemann wrote:
>> On 14/06/2021 21:11, Lukasz Luba wrote:

[...]

>> It's important to highlight that this will only fix this issue between
>> schedutil and EAS when it's due to `thermal pressure` (today only via
>> CPU cooling). There are other places which could restrict policy->max
>> via freq_qos_update_request() and EAS will be unaware of it.
>
> True, but for this I have some other plans.

As long as people are aware of the fact that this was developed to be
beneficial for `EAS - IPA` integration, I'm fine with this.

[...]

>> IMHO, this means that this is catered for the IPA governor then. I'm not
>> sure if this would be beneficial when another thermal governor is used?
>
> Yes, it will be, the cpufreq_set_cur_state() is called by
> thermal exported function:
> thermal_cdev_update()
>   __thermal_cdev_update()
>     thermal_cdev_set_cur_state()
>       cdev->ops->set_cur_state(cdev, target)
>
> So it can be called not only by IPA. All governors call it, because
> that's the default mechanism.

True, but I'm still not convinced that it is useful outside `EAS - IPA`.

>> The mechanical side of the code would allow for such benefits, I just
>> don't know if their CPU cooling device + thermal zone setups would cater
>> for this?
>
> Yes, it's possible. Even for custom vendor governors (modified clones
> of IPA)

Let's stick to mainline here ;-) It's complicated enough ...

[...]

>> Maybe shorter?
>>
>>          struct cpumask *pd_mask = perf_domain_span(pd);
>> -       unsigned long cpu_cap =
>> arch_scale_cpu_capacity(cpumask_first(pd_mask));
>> +       int cpu = cpumask_first(pd_mask);
>> +       unsigned long cpu_cap = arch_scale_cpu_capacity(cpu);
>> +       unsigned long _cpu_cap = cpu_cap -
>> arch_scale_thermal_pressure(cpu);
>>          unsigned long max_util = 0, sum_util = 0;
>> -       unsigned long _cpu_cap = cpu_cap;
>> -       int cpu;
>> -
>> -       _cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));
>
> Could be, but still, the definitions should be sorted from longest on
> top, to shortest at the bottom. I wanted to avoid modifying too many
> lines with this simple patch.

Only if there are no dependencies, but here we have already `cpu_cap ->
pd_mask`. OK, not a big deal.

[...]

>> There is IPA specific code in cpufreq_set_cur_state() ->
>> get_state_freq() which accesses the EM:
>>
>>      ...
>>      return cpufreq_cdev->em->table[idx].frequency;
>>      ...
>>
>> Has it been discussed that the `per-PD max (allowed) CPU capacity` (1)
>> could be stored in the EM from there so that code like the EAS wakeup
>> code (compute_energy()) could retrieve this information from the EM?
>
> No, we haven't think about this approach in these patch sets.
> The EM structure given to the cpufreq_cooling device and stored in:
> cpufreq_cdev->em should not be modified. There are a few places which
> receive the EM, but they all should not touch it. For those clients
> it's a read-only data structure.
>
>> And there wouldn't be any need to pass (1) into the EM (like now via
>> em_cpu_energy()).
>> This would be signalling within the EM compared to external signalling
>> via `CPU cooling -> thermal pressure <- EAS wakeup -> EM`.
>
> I see what you mean, but this might cause some issues in the design
> (per-cpu scmi cpu perf control). Let's use this EM pointer gently ;)

OK, with the requirement that clients see the EM as ro:

Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Lukasz Luba June 16, 2021, 6:31 p.m. UTC | #4
On 6/16/21 6:24 PM, Dietmar Eggemann wrote:
> On 15/06/2021 18:09, Lukasz Luba wrote:
>>
>> On 6/15/21 4:31 PM, Dietmar Eggemann wrote:
>>> On 14/06/2021 21:11, Lukasz Luba wrote:
>
> [...]
>
>>> It's important to highlight that this will only fix this issue between
>>> schedutil and EAS when it's due to `thermal pressure` (today only via
>>> CPU cooling). There are other places which could restrict policy->max
>>> via freq_qos_update_request() and EAS will be unaware of it.
>>
>> True, but for this I have some other plans.
>
> As long as people are aware of the fact that this was developed to be
> beneficial for `EAS - IPA` integration, I'm fine with this.

Good. I had in mind that I will have to do some re-work on this
thermal pressure code in the cpufreq cooling, to satisfy our roadmap
goals...

> [...]
>
>>> IMHO, this means that this is catered for the IPA governor then. I'm not
>>> sure if this would be beneficial when another thermal governor is used?
>>
>> Yes, it will be, the cpufreq_set_cur_state() is called by
>> thermal exported function:
>> thermal_cdev_update()
>>    __thermal_cdev_update()
>>      thermal_cdev_set_cur_state()
>>        cdev->ops->set_cur_state(cdev, target)
>>
>> So it can be called not only by IPA. All governors call it, because
>> that's the default mechanism.
>
> True, but I'm still not convinced that it is useful outside `EAS - IPA`.

It is. So in mainline thermal there is another governor: fair_share [1],
which uses 'weights' to split the cooling effort across cooling devices
in the thermal zone. That governor would manage CPUs and GPU and
set throttling like IPA.

>>> The mechanical side of the code would allow for such benefits, I just
>>> don't know if their CPU cooling device + thermal zone setups would cater
>>> for this?
>>
>> Yes, it's possible. Even for custom vendor governors (modified clones
>> of IPA)
>
> Let's stick to mainline here ;-) It's complicated enough ...

I agree, so there isn't only IPA in mainline.

> [...]
>
>>> Maybe shorter?
>>>
>>>           struct cpumask *pd_mask = perf_domain_span(pd);
>>> -       unsigned long cpu_cap =
>>> arch_scale_cpu_capacity(cpumask_first(pd_mask));
>>> +       int cpu = cpumask_first(pd_mask);
>>> +       unsigned long cpu_cap = arch_scale_cpu_capacity(cpu);
>>> +       unsigned long _cpu_cap = cpu_cap -
>>> arch_scale_thermal_pressure(cpu);
>>>           unsigned long max_util = 0, sum_util = 0;
>>> -       unsigned long _cpu_cap = cpu_cap;
>>> -       int cpu;
>>> -
>>> -       _cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));
>>
>> Could be, but still, the definitions should be sorted from longest on
>> top, to shortest at the bottom. I wanted to avoid modifying too many
>> lines with this simple patch.
>
> Only if there are no dependencies, but here we have already `cpu_cap ->
> pd_mask`. OK, not a big deal.

True, those dependencies are tricky to sort them properly, so I coded
it this way.

[snip]

>> I see what you mean, but this might cause some issues in the design
>> (per-cpu scmi cpu perf control). Let's use this EM pointer gently ;)
>
> OK, with the requirement that clients see the EM as ro:
>
> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
>

Thank you Dietmar for the review!

Regards,
Lukasz

[1] https://elixir.bootlin.com/linux/v5.13-rc6/source/drivers/thermal/gov_fair_share.c#L111
Vincent Guittot June 16, 2021, 7:25 p.m. UTC | #5
On Wed, 16 Jun 2021 at 19:24, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
>
> On 15/06/2021 18:09, Lukasz Luba wrote:
> >
> > On 6/15/21 4:31 PM, Dietmar Eggemann wrote:
> >> On 14/06/2021 21:11, Lukasz Luba wrote:
>
> [...]
>
> >> It's important to highlight that this will only fix this issue between
> >> schedutil and EAS when it's due to `thermal pressure` (today only via
> >> CPU cooling). There are other places which could restrict policy->max
> >> via freq_qos_update_request() and EAS will be unaware of it.
> >
> > True, but for this I have some other plans.
>
> As long as people are aware of the fact that this was developed to be
> beneficial for `EAS - IPA` integration, I'm fine with this.

I don't think it's only for EAS - IPA. Thermal_pressure can be used by
HW throttling like here:
https://lkml.org/lkml/2021/6/8/1791

EAS is involved but not IPA

>
> [...]
>
> >> IMHO, this means that this is catered for the IPA governor then. I'm not
> >> sure if this would be beneficial when another thermal governor is used?
> >
> > Yes, it will be, the cpufreq_set_cur_state() is called by
> > thermal exported function:
> > thermal_cdev_update()
> >   __thermal_cdev_update()
> >     thermal_cdev_set_cur_state()
> >       cdev->ops->set_cur_state(cdev, target)
> >
> > So it can be called not only by IPA. All governors call it, because
> > that's the default mechanism.
>
> True, but I'm still not convinced that it is useful outside `EAS - IPA`.
>
> >> The mechanical side of the code would allow for such benefits, I just
> >> don't know if their CPU cooling device + thermal zone setups would cater
> >> for this?
> >
> > Yes, it's possible. Even for custom vendor governors (modified clones
> > of IPA)
>
> Let's stick to mainline here ;-) It's complicated enough ...
>
> [...]
>
> >> Maybe shorter?
> >>
> >>          struct cpumask *pd_mask = perf_domain_span(pd);
> >> -       unsigned long cpu_cap =
> >> arch_scale_cpu_capacity(cpumask_first(pd_mask));
> >> +       int cpu = cpumask_first(pd_mask);
> >> +       unsigned long cpu_cap = arch_scale_cpu_capacity(cpu);
> >> +       unsigned long _cpu_cap = cpu_cap -
> >> arch_scale_thermal_pressure(cpu);
> >>          unsigned long max_util = 0, sum_util = 0;
> >> -       unsigned long _cpu_cap = cpu_cap;
> >> -       int cpu;
> >> -
> >> -       _cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));
> >
> > Could be, but still, the definitions should be sorted from longest on
> > top, to shortest at the bottom. I wanted to avoid modifying too many
> > lines with this simple patch.
>
> Only if there are no dependencies, but here we have already `cpu_cap ->
> pd_mask`. OK, not a big deal.
>
> [...]
>
> >> There is IPA specific code in cpufreq_set_cur_state() ->
> >> get_state_freq() which accesses the EM:
> >>
> >>      ...
> >>      return cpufreq_cdev->em->table[idx].frequency;
> >>      ...
> >>
> >> Has it been discussed that the `per-PD max (allowed) CPU capacity` (1)
> >> could be stored in the EM from there so that code like the EAS wakeup
> >> code (compute_energy()) could retrieve this information from the EM?
> >
> > No, we haven't think about this approach in these patch sets.
> > The EM structure given to the cpufreq_cooling device and stored in:
> > cpufreq_cdev->em should not be modified. There are a few places which
> > receive the EM, but they all should not touch it. For those clients
> > it's a read-only data structure.
> >
> >> And there wouldn't be any need to pass (1) into the EM (like now via
> >> em_cpu_energy()).
> >> This would be signalling within the EM compared to external signalling
> >> via `CPU cooling -> thermal pressure <- EAS wakeup -> EM`.
> >
> > I see what you mean, but this might cause some issues in the design
> > (per-cpu scmi cpu perf control). Let's use this EM pointer gently ;)
>
> OK, with the requirement that clients see the EM as ro:
>
> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Lukasz Luba June 16, 2021, 8:22 p.m. UTC | #6
On 6/16/21 8:25 PM, Vincent Guittot wrote:
> On Wed, 16 Jun 2021 at 19:24, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
>>
>> On 15/06/2021 18:09, Lukasz Luba wrote:
>>>
>>> On 6/15/21 4:31 PM, Dietmar Eggemann wrote:
>>>> On 14/06/2021 21:11, Lukasz Luba wrote:
>>
>> [...]
>>
>>>> It's important to highlight that this will only fix this issue between
>>>> schedutil and EAS when it's due to `thermal pressure` (today only via
>>>> CPU cooling). There are other places which could restrict policy->max
>>>> via freq_qos_update_request() and EAS will be unaware of it.
>>>
>>> True, but for this I have some other plans.
>>
>> As long as people are aware of the fact that this was developed to be
>> beneficial for `EAS - IPA` integration, I'm fine with this.
>
> I don't think it's only for EAS - IPA. Thermal_pressure can be used by
> HW throttling like here:
> https://lkml.org/lkml/2021/6/8/1791
>
> EAS is involved but not IPA

Thank you Vincent for pointing to Thara's patches. Indeed, this is a
good example. We will have to provide something similar for our SCMI perf
notifications - these are the plans that I've mentioned. In both
new examples, the IPA (or other governors) won't even be involved.

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 161b92aa1c79..3634e077051d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6527,8 +6527,11 @@  compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 	struct cpumask *pd_mask = perf_domain_span(pd);
 	unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
 	unsigned long max_util = 0, sum_util = 0;
+	unsigned long _cpu_cap = cpu_cap;
 	int cpu;
 
+	_cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));
+
 	/*
 	 * The capacity state of CPUs of the current rd can be driven by CPUs
 	 * of another rd if they belong to the same pd. So, account for the
@@ -6564,8 +6567,10 @@  compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 		 * is already enough to scale the EM reported power
 		 * consumption at the (eventually clamped) cpu_capacity.
 		 */
-		sum_util += effective_cpu_util(cpu, util_running, cpu_cap,
-					       ENERGY_UTIL, NULL);
+		cpu_util = effective_cpu_util(cpu, util_running, cpu_cap,
+					      ENERGY_UTIL, NULL);
+
+		sum_util += min(cpu_util, _cpu_cap);
 
 		/*
 		 * Performance domain frequency: utilization clamping
@@ -6576,7 +6581,7 @@  compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 		 */
 		cpu_util = effective_cpu_util(cpu, util_freq, cpu_cap,
 					      FREQUENCY_UTIL, tsk);
-		max_util = max(max_util, cpu_util);
+		max_util = max(max_util, min(cpu_util, _cpu_cap));
 	}
 
 	return em_cpu_energy(pd->em_pd, max_util, sum_util);
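
For context on where max_util and sum_util end up: below is a rough sketch of
the v5.13-era em_cpu_energy() from include/linux/energy_model.h, simplified and
reconstructed from memory (see the header for the exact code). max_util selects
the performance state, sum_util scales its cost:

static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
				unsigned long max_util, unsigned long sum_util)
{
	unsigned long freq, scale_cpu;
	struct em_perf_state *ps;
	int i, cpu;

	/* Predict the PD frequency from the highest (clamped) CPU utilization. */
	cpu = cpumask_first(to_cpumask(pd->cpus));
	scale_cpu = arch_scale_cpu_capacity(cpu);
	ps = &pd->table[pd->nr_perf_states - 1];
	freq = map_util_freq(max_util, ps->frequency, scale_cpu);

	/* Find the lowest performance state that can serve that frequency. */
	for (i = 0; i < pd->nr_perf_states; i++) {
		ps = &pd->table[i];
		if (ps->frequency >= freq)
			break;
	}

	/* Energy ~ cost of the chosen state scaled by the summed utilization. */
	return ps->cost * sum_util / scale_cpu;
}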