
[v2,2/2] iopoll: Do not use timekeeping in read_poll_timeout_atomic()

Message ID 8db63020d18fc22e137e4a8f0aa15e6b9949a6f6.1683722688.git.geert+renesas@glider.be
State New
Series iopoll: Busy loop and timeout improvements

Commit Message

Geert Uytterhoeven May 10, 2023, 1:23 p.m. UTC
read_poll_timeout_atomic() uses ktime_get() to implement the timeout
feature, just like its non-atomic counterpart.  However, there are
several issues with this, due to its use in atomic contexts:

  1. When called in the s2ram path (as typically done by clock or PM
     domain drivers), timekeeping may be suspended, triggering the
     WARN_ON(timekeeping_suspended) in ktime_get():

	WARNING: CPU: 0 PID: 654 at kernel/time/timekeeping.c:843 ktime_get+0x28/0x78

     Calling ktime_get_mono_fast_ns() instead of ktime_get() would get
     rid of that warning.  However, that would break timeout handling,
     as (at least on systems with an ARM architected timer), the time
     returned by ktime_get_mono_fast_ns() does not advance while
     timekeeping is suspended.
     Interestingly, (on the same ARM systems) the time returned by
     ktime_get() does advance while timekeeping is suspended, despite
     the warning.

  2. Depending on the actual clock source, and especially before a
     high-resolution clocksource (e.g. the ARM architected timer)
     becomes available, time may not advance in atomic contexts, thus
     breaking timeout handling.

Fix this by abandoning the idea that one can rely on timekeeping to
implement timeout handling in all atomic contexts, and switch from a
global time-based to a locally-estimated timeout handling.  In most
(all?) cases the timeout condition is exceptional and an error
condition, hence any additional delays due to underestimating wall clock
time are irrelevant.
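In effect, the patched macro counts down a nanoseconds budget instead of
reading the clock.  A minimal user-space sketch of that countdown logic
(with hypothetical stand-ins for op() and udelay(); this is not the
kernel macro itself):

```c
#include <assert.h>

/*
 * User-space sketch of the patched timeout logic: instead of comparing
 * against ktime_get(), keep a signed nanoseconds budget and subtract
 * the requested delay after every (simulated) udelay(), plus 1 ns per
 * iteration as a lower bound for loop overhead.  All names are
 * illustrative.
 */
static unsigned long polls_until_ready;	/* stand-in for the hardware */

static int op(void)			/* stand-in for a register read */
{
	return polls_until_ready-- == 0;
}

static int poll_local_timeout(unsigned long timeout_us,
			      unsigned long delay_us)
{
	long long left_ns = (long long)timeout_us * 1000;
	long long delay_ns = (long long)delay_us * 1000;
	int val;

	for (;;) {
		val = op();
		if (val)
			break;
		if (timeout_us && left_ns < 0) {
			val = op();	/* one final read after timing out */
			break;
		}
		if (delay_us)		/* udelay(delay_us) would go here */
			left_ns -= delay_ns;
		left_ns--;		/* cpu_relax() / loop overhead */
	}
	return val ? 0 : -1;		/* -ETIMEDOUT in the kernel */
}
```

Because the budget ignores time spent in op() and any udelay()
overshoot, the estimate of elapsed time can only be too low, so the
loop may poll somewhat longer than the nominal timeout, never shorter.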

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
---
Alternatively, one could use a mixed approach (use both
ktime_get_mono_fast_ns() and a local (under)estimate, and timeout on the
earliest occasion), but I think that would complicate things without
much gain.

v2:
  - New.
---
 include/linux/iopoll.h | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

Comments

Arnd Bergmann May 10, 2023, 1:35 p.m. UTC | #1
On Wed, May 10, 2023, at 15:23, Geert Uytterhoeven wrote:
> read_poll_timeout_atomic() uses ktime_get() to implement the timeout
> feature, just like its non-atomic counterpart.  However, there are
> several issues with this, due to its use in atomic contexts:
>
>   1. When called in the s2ram path (as typically done by clock or PM
>      domain drivers), timekeeping may be suspended, triggering the
>      WARN_ON(timekeeping_suspended) in ktime_get():
>
> 	WARNING: CPU: 0 PID: 654 at kernel/time/timekeeping.c:843 ktime_get+0x28/0x78
>
>      Calling ktime_get_mono_fast_ns() instead of ktime_get() would get
>      rid of that warning.  However, that would break timeout handling,
>      as (at least on systems with an ARM architectured timer), the time
>      returned by ktime_get_mono_fast_ns() does not advance while
>      timekeeping is suspended.
>      Interestingly, (on the same ARM systems) the time returned by
>      ktime_get() does advance while timekeeping is suspended, despite
>      the warning.
>
>   2. Depending on the actual clock source, and especially before a
>      high-resolution clocksource (e.g. the ARM architectured timer)
>      becomes available, time may not advance in atomic contexts, thus
>      breaking timeout handling.
>
> Fix this by abandoning the idea that one can rely on timekeeping to
> implement timeout handling in all atomic contexts, and switch from a
> global time-based to a locally-estimated timeout handling.  In most
> (all?) cases the timeout condition is exceptional and an error
> condition, hence any additional delays due to underestimating wall clock
> time are irrelevant.
>
> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>

This looks reasonable to me,

Acked-by: Arnd Bergmann <arnd@arndb.de>

I assume you sent this because you ran into the bug on a
particular driver. It might help to be more specific about
how this can be reproduced.

> ---
> Alternatively, one could use a mixed approach (use both
> ktime_get_mono_fast_ns() and a local (under)estimate, and timeout on the
> earliest occasion), but I think that would complicate things without
> much gain.

Agreed.

     Arnd
Geert Uytterhoeven May 10, 2023, 1:46 p.m. UTC | #2
Hi Arnd,

On Wed, May 10, 2023 at 3:36 PM Arnd Bergmann <arnd@arndb.de> wrote:
> On Wed, May 10, 2023, at 15:23, Geert Uytterhoeven wrote:
> > read_poll_timeout_atomic() uses ktime_get() to implement the timeout
> > feature, just like its non-atomic counterpart.  However, there are
> > several issues with this, due to its use in atomic contexts:
> >
> >   1. When called in the s2ram path (as typically done by clock or PM
> >      domain drivers), timekeeping may be suspended, triggering the
> >      WARN_ON(timekeeping_suspended) in ktime_get():
> >
> >       WARNING: CPU: 0 PID: 654 at kernel/time/timekeeping.c:843 ktime_get+0x28/0x78
> >
> >      Calling ktime_get_mono_fast_ns() instead of ktime_get() would get
> >      rid of that warning.  However, that would break timeout handling,
> >      as (at least on systems with an ARM architectured timer), the time
> >      returned by ktime_get_mono_fast_ns() does not advance while
> >      timekeeping is suspended.
> >      Interestingly, (on the same ARM systems) the time returned by
> >      ktime_get() does advance while timekeeping is suspended, despite
> >      the warning.
> >
> >   2. Depending on the actual clock source, and especially before a
> >      high-resolution clocksource (e.g. the ARM architectured timer)
> >      becomes available, time may not advance in atomic contexts, thus
> >      breaking timeout handling.
> >
> > Fix this by abandoning the idea that one can rely on timekeeping to
> > implement timeout handling in all atomic contexts, and switch from a
> > global time-based to a locally-estimated timeout handling.  In most
> > (all?) cases the timeout condition is exceptional and an error
> > condition, hence any additional delays due to underestimating wall clock
> > time are irrelevant.
> >
> > Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
>
> This looks reasonable to me,
>
> Acked-by: Arnd Bergmann <arnd@arndb.de>

Thanks!

> I assume you sent this because you ran into the bug on a
> particular driver. It might help to be more specific about
> how this can be reproduced.

I first ran into it when converting open-coded loops to
read*_poll_timeout_atomic().
Later, I also saw the issue with the existing
read*_poll_timeout_atomic() calls in the R-Car SYSC driver, but only
after applying additional patches from the BSP that impact the moment
PM Domains are powered during s2ram.
The various pointers to existing mitigations in the cover letter should
give you other suggestions for how to reproduce...

Gr{oetje,eeting}s,

                        Geert
Tony Lindgren May 11, 2023, 6:48 a.m. UTC | #3
Hi,

* Geert Uytterhoeven <geert+renesas@glider.be> [230510 13:23]:
> read_poll_timeout_atomic() uses ktime_get() to implement the timeout
> feature, just like its non-atomic counterpart.  However, there are
> several issues with this, due to its use in atomic contexts:
> 
>   1. When called in the s2ram path (as typically done by clock or PM
>      domain drivers), timekeeping may be suspended, triggering the
>      WARN_ON(timekeeping_suspended) in ktime_get():

Maybe add a comment to read_poll_timeout_atomic() saying it can be
used also with timekeeping_suspended?

Otherwise a few years later it might get broken when somebody goes
to patch it without testing it with timekeeping_suspended :)
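The annotation Tony suggests might read something like this
(hypothetical wording, not part of the patch):

```c
/*
 * Note: read_poll_timeout_atomic() intentionally avoids timekeeping
 * (no ktime_get()), so it may be called while timekeeping is
 * suspended, e.g. from the s2ram path.  The timeout is a local
 * under-estimate based on udelay() accounting; do not reintroduce
 * ktime here without testing with timekeeping_suspended.
 */
```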

Other than that looks good to me:

Reviewed-by: Tony Lindgren <tony@atomide.com>
Ulf Hansson May 11, 2023, 10:26 a.m. UTC | #4
On Wed, 10 May 2023 at 15:23, Geert Uytterhoeven
<geert+renesas@glider.be> wrote:
>
> read_poll_timeout_atomic() uses ktime_get() to implement the timeout
> feature, just like its non-atomic counterpart.  However, there are
> several issues with this, due to its use in atomic contexts:
>
>   1. When called in the s2ram path (as typically done by clock or PM
>      domain drivers), timekeeping may be suspended, triggering the
>      WARN_ON(timekeeping_suspended) in ktime_get():
>
>         WARNING: CPU: 0 PID: 654 at kernel/time/timekeeping.c:843 ktime_get+0x28/0x78
>
>      Calling ktime_get_mono_fast_ns() instead of ktime_get() would get
>      rid of that warning.  However, that would break timeout handling,
>      as (at least on systems with an ARM architectured timer), the time
>      returned by ktime_get_mono_fast_ns() does not advance while
>      timekeeping is suspended.
>      Interestingly, (on the same ARM systems) the time returned by
>      ktime_get() does advance while timekeeping is suspended, despite
>      the warning.

Interesting, looks like we should spend some time to further
investigate this behaviour.

>
>   2. Depending on the actual clock source, and especially before a
>      high-resolution clocksource (e.g. the ARM architectured timer)
>      becomes available, time may not advance in atomic contexts, thus
>      breaking timeout handling.
>
> Fix this by abandoning the idea that one can rely on timekeeping to
> implement timeout handling in all atomic contexts, and switch from a
> global time-based to a locally-estimated timeout handling.  In most
> (all?) cases the timeout condition is exceptional and an error
> condition, hence any additional delays due to underestimating wall clock
> time are irrelevant.

I wonder if this isn't an oversimplification of the situation. Don't
we have timeout-error-conditions that we expected to happen quite
frequently?

If so, in these cases, we really don't want to continue looping longer
than actually needed, as then we will remain in the atomic context
longer than necessary.

I guess some information about how big these additional delays could
be, would help to understand better. Of course, it's not entirely easy
to get that data, but did you run some tests to see how this changes?

>
> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
> ---
> Alternatively, one could use a mixed approach (use both
> ktime_get_mono_fast_ns() and a local (under)estimate, and timeout on the
> earliest occasion), but I think that would complicate things without
> much gain.

Another option could be to provide two different polling APIs for the
atomic use-case.

One that keeps using ktime, which is more accurate and generally
favourable - and another, along the lines of what you propose, that
should be used by those that can't rely on timekeeping.

>
> v2:
>   - New.
> ---
>  include/linux/iopoll.h | 18 +++++++++++++-----
>  1 file changed, 13 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/iopoll.h b/include/linux/iopoll.h
> index 0417360a6db9b0d6..bb2e1d9117e96679 100644
> --- a/include/linux/iopoll.h
> +++ b/include/linux/iopoll.h
> @@ -81,22 +81,30 @@
>                                         delay_before_read, args...) \
>  ({ \
>         u64 __timeout_us = (timeout_us); \
> +       s64 __left_ns = __timeout_us * NSEC_PER_USEC; \
>         unsigned long __delay_us = (delay_us); \
> -       ktime_t __timeout = ktime_add_us(ktime_get(), __timeout_us); \
> -       if (delay_before_read && __delay_us) \
> +       u64 __delay_ns = __delay_us * NSEC_PER_USEC; \
> +       if (delay_before_read && __delay_us) { \
>                 udelay(__delay_us); \
> +               if (__timeout_us) \
> +                       __left_ns -= __delay_ns; \
> +       } \
>         for (;;) { \
>                 (val) = op(args); \
>                 if (cond) \
>                         break; \
> -               if (__timeout_us && \
> -                   ktime_compare(ktime_get(), __timeout) > 0) { \
> +               if (__timeout_us && __left_ns < 0) { \
>                         (val) = op(args); \
>                         break; \
>                 } \
> -               if (__delay_us) \
> +               if (__delay_us) { \
>                         udelay(__delay_us); \
> +                       if (__timeout_us) \
> +                               __left_ns -= __delay_ns; \
> +               } \
>                 cpu_relax(); \
> +               if (__timeout_us) \
> +                       __left_ns--; \
>         } \
>         (cond) ? 0 : -ETIMEDOUT; \
>  })
> --
> 2.34.1
>

Kind regards
Uffe
Geert Uytterhoeven May 11, 2023, 12:44 p.m. UTC | #5
Hi Ulf,

On Thu, May 11, 2023 at 12:27 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> On Wed, 10 May 2023 at 15:23, Geert Uytterhoeven
> <geert+renesas@glider.be> wrote:
> > read_poll_timeout_atomic() uses ktime_get() to implement the timeout
> > feature, just like its non-atomic counterpart.  However, there are
> > several issues with this, due to its use in atomic contexts:
> >
> >   1. When called in the s2ram path (as typically done by clock or PM
> >      domain drivers), timekeeping may be suspended, triggering the
> >      WARN_ON(timekeeping_suspended) in ktime_get():
> >
> >         WARNING: CPU: 0 PID: 654 at kernel/time/timekeeping.c:843 ktime_get+0x28/0x78
> >
> >      Calling ktime_get_mono_fast_ns() instead of ktime_get() would get
> >      rid of that warning.  However, that would break timeout handling,
> >      as (at least on systems with an ARM architectured timer), the time
> >      returned by ktime_get_mono_fast_ns() does not advance while
> >      timekeeping is suspended.
> >      Interestingly, (on the same ARM systems) the time returned by
> >      ktime_get() does advance while timekeeping is suspended, despite
> >      the warning.
>
> Interesting, looks like we should spend some time to further
> investigate this behaviour.

Probably, I was a bit surprised by this behavior, too.

> >   2. Depending on the actual clock source, and especially before a
> >      high-resolution clocksource (e.g. the ARM architectured timer)
> >      becomes available, time may not advance in atomic contexts, thus
> >      breaking timeout handling.
> >
> > Fix this by abandoning the idea that one can rely on timekeeping to
> > implement timeout handling in all atomic contexts, and switch from a
> > global time-based to a locally-estimated timeout handling.  In most
> > (all?) cases the timeout condition is exceptional and an error
> > condition, hence any additional delays due to underestimating wall clock
> > time are irrelevant.
>
> I wonder if this isn't an oversimplification of the situation. Don't
> we have timeout-error-conditions that we expected to happen quite
> frequently?

We may have some.  But they definitely do not happen when time
does not advance, or they would have been mitigated long ago
(the loop would never terminate).

> If so, in these cases, we really don't want to continue looping longer
> than actually needed, as then we will remain in the atomic context
> longer than necessary.
>
> I guess some information about how big these additional delays could
> be, would help to understand better. Of course, it's not entirely easy
> to get that data, but did you run some tests to see how this changes?

I did some timings (when timekeeping is available), and the differences
are rather minor.  The delay and timeout parameters are in µs, and
1 µs is already a few orders of magnitude larger than the cycle time
of a contemporary CPU.

Under-estimates are due to the time spent in op() (depends on the
user, typical use is a hardware device register read), udelay()
(architecture/platform-dependent accuracy), and general loop overhead.

> > Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
> > ---
> > Alternatively, one could use a mixed approach (use both
> > ktime_get_mono_fast_ns() and a local (under)estimate, and timeout on the
> > earliest occasion), but I think that would complicate things without
> > much gain.
>
> Another option could be to provide two different polling APIs for the
> atomic use-case.
>
> One that keeps using ktime, which is more accurate and generally
> favourable - and another, along the lines of what you propose, that
> should be used by those that can't rely on timekeeping.

At the risk of people picking the wrong one, leading to hard to
find bugs?

Gr{oetje,eeting}s,

                        Geert
Ulf Hansson May 12, 2023, 7:53 a.m. UTC | #6
On Thu, 11 May 2023 at 14:44, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>
> Hi Ulf,
>
> On Thu, May 11, 2023 at 12:27 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > On Wed, 10 May 2023 at 15:23, Geert Uytterhoeven
> > <geert+renesas@glider.be> wrote:
> > > read_poll_timeout_atomic() uses ktime_get() to implement the timeout
> > > feature, just like its non-atomic counterpart.  However, there are
> > > several issues with this, due to its use in atomic contexts:
> > >
> > >   1. When called in the s2ram path (as typically done by clock or PM
> > >      domain drivers), timekeeping may be suspended, triggering the
> > >      WARN_ON(timekeeping_suspended) in ktime_get():
> > >
> > >         WARNING: CPU: 0 PID: 654 at kernel/time/timekeeping.c:843 ktime_get+0x28/0x78
> > >
> > >      Calling ktime_get_mono_fast_ns() instead of ktime_get() would get
> > >      rid of that warning.  However, that would break timeout handling,
> > >      as (at least on systems with an ARM architectured timer), the time
> > >      returned by ktime_get_mono_fast_ns() does not advance while
> > >      timekeeping is suspended.
> > >      Interestingly, (on the same ARM systems) the time returned by
> > >      ktime_get() does advance while timekeeping is suspended, despite
> > >      the warning.
> >
> > Interesting, looks like we should spend some time to further
> > investigate this behaviour.
>
> Probably, I was a bit surprised by this behavior, too.
>
> > >   2. Depending on the actual clock source, and especially before a
> > >      high-resolution clocksource (e.g. the ARM architectured timer)
> > >      becomes available, time may not advance in atomic contexts, thus
> > >      breaking timeout handling.
> > >
> > > Fix this by abandoning the idea that one can rely on timekeeping to
> > > implement timeout handling in all atomic contexts, and switch from a
> > > global time-based to a locally-estimated timeout handling.  In most
> > > (all?) cases the timeout condition is exceptional and an error
> > > condition, hence any additional delays due to underestimating wall clock
> > > time are irrelevant.
> >
> > I wonder if this isn't an oversimplification of the situation. Don't
> > we have timeout-error-conditions that we expected to happen quite
> > frequently?
>
> We may have some.  But they definitely do not happen when time
> does not advance, or they would have been mitigated long ago
> (the loop would never terminate).

Right, I was merely thinking of the case when ktime isn't suspended,
which of course is the most common case.

>
> > If so, in these cases, we really don't want to continue looping longer
> > than actually needed, as then we will remain in the atomic context
> > longer than necessary.
> >
> > I guess some information about how big these additional delays could
> > be, would help to understand better. Of course, it's not entirely easy
> > to get that data, but did you run some tests to see how this changes?
>
> I did some timings (when timekeeping is available), and the differences
> are rather minor.  The delay and timeout parameters are in µs, and
> 1 µs is already a few orders of magnitude larger than the cycle time
> of a contemporary CPU.

Ohh, I was certainly expecting a bigger spread. If it's in that
ballpark we should certainly be fine.

I will run some tests at my side too, as I am curious to see the
behaviour. I will let you know, whatever the result is, of course.

>
> Under-estimates are due to the time spent in op() (depends on the
> user, typical use is a hardware device register read), udelay()
> (architecture/platform-dependent accuracy), and general loop overhead.

Yes, you are right. My main concern is the accuracy of the udelay, but
I may be totally wrong here.

>
> > > Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
> > > ---
> > > Alternatively, one could use a mixed approach (use both
> > > ktime_get_mono_fast_ns() and a local (under)estimate, and timeout on the
> > > earliest occasion), but I think that would complicate things without
> > > much gain.
> >
> > Another option could be to provide two different polling APIs for the
> > atomic use-case.
> >
> > One that keeps using ktime, which is more accurate and generally
> > favourable - and another, along the lines of what you propose, that
> > should be used by those that can't rely on timekeeping.
>
> At the risk of people picking the wrong one, leading to hard to
> find bugs?

I agree. If we don't need two APIs, it's certainly better to stick with one.

My main point is that we should not sacrifice "performance" for the
most common case, just to keep things simple, right?

Kind regards
Uffe
Geert Uytterhoeven May 12, 2023, 8:03 a.m. UTC | #7
Hi Ulf,

On Fri, May 12, 2023 at 9:54 AM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> On Thu, 11 May 2023 at 14:44, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> > On Thu, May 11, 2023 at 12:27 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > On Wed, 10 May 2023 at 15:23, Geert Uytterhoeven
> > > <geert+renesas@glider.be> wrote:
> > > > read_poll_timeout_atomic() uses ktime_get() to implement the timeout
> > > > feature, just like its non-atomic counterpart.  However, there are
> > > > several issues with this, due to its use in atomic contexts:
> > > >
> > > >   1. When called in the s2ram path (as typically done by clock or PM
> > > >      domain drivers), timekeeping may be suspended, triggering the
> > > >      WARN_ON(timekeeping_suspended) in ktime_get():
> > > >
> > > >         WARNING: CPU: 0 PID: 654 at kernel/time/timekeeping.c:843 ktime_get+0x28/0x78
> > > >
> > > >      Calling ktime_get_mono_fast_ns() instead of ktime_get() would get
> > > >      rid of that warning.  However, that would break timeout handling,
> > > >      as (at least on systems with an ARM architectured timer), the time
> > > >      returned by ktime_get_mono_fast_ns() does not advance while
> > > >      timekeeping is suspended.
> > > >      Interestingly, (on the same ARM systems) the time returned by
> > > >      ktime_get() does advance while timekeeping is suspended, despite
> > > >      the warning.
> > >
> > > Interesting, looks like we should spend some time to further
> > > investigate this behaviour.
> >
> > Probably, I was a bit surprised by this behavior, too.
> >
> > > >   2. Depending on the actual clock source, and especially before a
> > > >      high-resolution clocksource (e.g. the ARM architectured timer)
> > > >      becomes available, time may not advance in atomic contexts, thus
> > > >      breaking timeout handling.
> > > >
> > > > Fix this by abandoning the idea that one can rely on timekeeping to
> > > > implement timeout handling in all atomic contexts, and switch from a
> > > > global time-based to a locally-estimated timeout handling.  In most
> > > > (all?) cases the timeout condition is exceptional and an error
> > > > condition, hence any additional delays due to underestimating wall clock
> > > > time are irrelevant.
> > >
> > > I wonder if this isn't an oversimplification of the situation. Don't
> > > we have timeout-error-conditions that we expected to happen quite
> > > frequently?
> >
> > We may have some.  But they definitely do not happen when time
> > does not advance, or they would have been mitigated long ago
> > (the loop would never terminate).
>
> Right, I was merely thinking of the case when ktime isn't suspended,
> which of course is the most common case.
>
> >
> > > If so, in these cases, we really don't want to continue looping longer
> > > than actually needed, as then we will remain in the atomic context
> > > longer than necessary.
> > >
> > > I guess some information about how big these additional delays could
> > > be, would help to understand better. Of course, it's not entirely easy
> > > to get that data, but did you run some tests to see how this changes?
> >
> > I did some timings (when timekeeping is available), and the differences
> > are rather minor.  The delay and timeout parameters are in µs, and
> > 1 µs is already a few orders of magnitude larger than the cycle time
> > of a contemporary CPU.
>
> Ohh, I was certainly expecting a bigger spread. If it's in that
> ballpark we should certainly be fine.
>
> I will run some tests at my side too, as I am curious to see the
> behaviour. I will let you know, whatever the result is, of course.
>
> >
> > Under-estimates are due to the time spent in op() (depends on the
> > user, typical use is a hardware device register read), udelay()
> > (architecture/platform-dependent accuracy), and general loop overhead.
>
> Yes, you are right. My main concern is the accuracy of the udelay, but
> I may be totally wrong here.
>
> >
> > > > Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
> > > > ---
> > > > Alternatively, one could use a mixed approach (use both
> > > > ktime_get_mono_fast_ns() and a local (under)estimate, and timeout on the
> > > > earliest occasion), but I think that would complicate things without
> > > > much gain.
> > >
> > > Another option could be to provide two different polling APIs for the
> > > atomic use-case.
> > >
> > > One that keeps using ktime, which is more accurate and generally
> > > favourable - and another, along the lines of what you propose, that
> > > should be used by those that can't rely on timekeeping.
> >
> > At the risk of people picking the wrong one, leading to hard to
> > find bugs?
>
> I agree, If we don't need two APIs, it's certainly better to stick with one.
>
> My main point is that we should not sacrifice "performance" for the
> most common case, just to keep things simple, right?

Most of these loops run just 1 or 2 cycles.
Performance mostly kicks in when timing out, but note that not
calling ktime_get() also reduces loop overhead...

Gr{oetje,eeting}s,

                        Geert
Ulf Hansson May 15, 2023, 9:26 a.m. UTC | #8
On Fri, 12 May 2023 at 10:03, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>
> Hi Ulf,
>
> On Fri, May 12, 2023 at 9:54 AM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > On Thu, 11 May 2023 at 14:44, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> > > On Thu, May 11, 2023 at 12:27 PM Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > > On Wed, 10 May 2023 at 15:23, Geert Uytterhoeven
> > > > <geert+renesas@glider.be> wrote:
> > > > > read_poll_timeout_atomic() uses ktime_get() to implement the timeout
> > > > > feature, just like its non-atomic counterpart.  However, there are
> > > > > several issues with this, due to its use in atomic contexts:
> > > > >
> > > > >   1. When called in the s2ram path (as typically done by clock or PM
> > > > >      domain drivers), timekeeping may be suspended, triggering the
> > > > >      WARN_ON(timekeeping_suspended) in ktime_get():
> > > > >
> > > > >         WARNING: CPU: 0 PID: 654 at kernel/time/timekeeping.c:843 ktime_get+0x28/0x78
> > > > >
> > > > >      Calling ktime_get_mono_fast_ns() instead of ktime_get() would get
> > > > >      rid of that warning.  However, that would break timeout handling,
> > > > >      as (at least on systems with an ARM architectured timer), the time
> > > > >      returned by ktime_get_mono_fast_ns() does not advance while
> > > > >      timekeeping is suspended.
> > > > >      Interestingly, (on the same ARM systems) the time returned by
> > > > >      ktime_get() does advance while timekeeping is suspended, despite
> > > > >      the warning.
> > > >
> > > > Interesting, looks like we should spend some time to further
> > > > investigate this behaviour.
> > >
> > > Probably, I was a bit surprised by this behavior, too.
> > >
> > > > >   2. Depending on the actual clock source, and especially before a
> > > > >      high-resolution clocksource (e.g. the ARM architectured timer)
> > > > >      becomes available, time may not advance in atomic contexts, thus
> > > > >      breaking timeout handling.
> > > > >
> > > > > Fix this by abandoning the idea that one can rely on timekeeping to
> > > > > implement timeout handling in all atomic contexts, and switch from a
> > > > > global time-based to a locally-estimated timeout handling.  In most
> > > > > (all?) cases the timeout condition is exceptional and an error
> > > > > condition, hence any additional delays due to underestimating wall clock
> > > > > time are irrelevant.
> > > >
> > > > I wonder if this isn't an oversimplification of the situation. Don't
> > > > we have timeout-error-conditions that we expected to happen quite
> > > > frequently?
> > >
> > > We may have some.  But they definitely do not happen when time
> > > does not advance, or they would have been mitigated long ago
> > > (the loop would never terminate).
> >
> > Right, I was merely thinking of the case when ktime isn't suspended,
> > which of course is the most common case.
> >
> > >
> > > > If so, in these cases, we really don't want to continue looping longer
> > > > than actually needed, as then we will remain in the atomic context
> > > > longer than necessary.
> > > >
> > > > I guess some information about how big these additional delays could
> > > > be, would help to understand better. Of course, it's not entirely easy
> > > > to get that data, but did you run some tests to see how this changes?
> > >
> > > I did some timings (when timekeeping is available), and the differences
> > > are rather minor.  The delay and timeout parameters are in µs, and
> > > 1 µs is already a few orders of magnitude larger than the cycle time
> > > of a contemporary CPU.
> >
> > Ohh, I was certainly expecting a bigger spread. If it's in that
> > ballpark we should certainly be fine.
> >
> > I will run some tests at my side too, as I am curious to see the
> > behaviour. I will let you know, whatever the result is, of course.
> >
> > >
> > > Under-estimates are due to the time spent in op() (depends on the
> > > user, typical use is a hardware device register read), udelay()
> > > (architecture/platform-dependent accuracy), and general loop overhead.
> >
> > Yes, you are right. My main concern is the accuracy of the udelay, but
> > I may be totally wrong here.
> >
> > >
> > > > > Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
> > > > > ---
> > > > > Alternatively, one could use a mixed approach (use both
> > > > > ktime_get_mono_fast_ns() and a local (under)estimate, and timeout on the
> > > > > earliest occasion), but I think that would complicate things without
> > > > > much gain.
> > > >
> > > > Another option could be to provide two different polling APIs for the
> > > > atomic use-case.
> > > >
> > > > One that keeps using ktime, which is more accurate and generally
> > > > favourable - and another, along the lines of what you propose, that
> > > > should be used by those that can't rely on timekeeping.
> > >
> > > At the risk of people picking the wrong one, leading to hard to
> > > find bugs?
> >
> > I agree, If we don't need two APIs, it's certainly better to stick with one.
> >
> > My main point is that we should not sacrifice "performance" for the
> > most common case, just to keep things simple, right?
>
> Most of these loops run just 1 or 2 cycles.
> Performance mostly kicks in when timing out, but note that not
> calling ktime_get() also reduces loop overhead...

That's a good point too!

It sure sounds like the benefits are superior to the potential
downside. Let me not stand in the way of getting this applied.
Instead, feel free to add my:

Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>

Kind regards
Uffe

Patch

diff --git a/include/linux/iopoll.h b/include/linux/iopoll.h
index 0417360a6db9b0d6..bb2e1d9117e96679 100644
--- a/include/linux/iopoll.h
+++ b/include/linux/iopoll.h
@@ -81,22 +81,30 @@ 
 					delay_before_read, args...) \
 ({ \
 	u64 __timeout_us = (timeout_us); \
+	s64 __left_ns = __timeout_us * NSEC_PER_USEC; \
 	unsigned long __delay_us = (delay_us); \
-	ktime_t __timeout = ktime_add_us(ktime_get(), __timeout_us); \
-	if (delay_before_read && __delay_us) \
+	u64 __delay_ns = __delay_us * NSEC_PER_USEC; \
+	if (delay_before_read && __delay_us) { \
 		udelay(__delay_us); \
+		if (__timeout_us) \
+			__left_ns -= __delay_ns; \
+	} \
 	for (;;) { \
 		(val) = op(args); \
 		if (cond) \
 			break; \
-		if (__timeout_us && \
-		    ktime_compare(ktime_get(), __timeout) > 0) { \
+		if (__timeout_us && __left_ns < 0) { \
 			(val) = op(args); \
 			break; \
 		} \
-		if (__delay_us) \
+		if (__delay_us) { \
 			udelay(__delay_us); \
+			if (__timeout_us) \
+				__left_ns -= __delay_ns; \
+		} \
 		cpu_relax(); \
+		if (__timeout_us) \
+			__left_ns--; \
 	} \
 	(cond) ? 0 : -ETIMEDOUT; \
 })
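For context, a typical caller of this macro family (via the readl
wrapper; the register and bit names here are hypothetical) polls a
device register like so — the patch changes only how the 1 ms budget
below is tracked internally, not the calling convention or the
-ETIMEDOUT return:

```c
	u32 val;
	int ret;

	/* Poll STATUS until READY, waiting 10 us between reads, 1 ms max. */
	ret = readl_poll_timeout_atomic(base + STATUS_REG, val,
					val & STATUS_READY, 10, 1000);
	if (ret)		/* -ETIMEDOUT if READY never appeared */
		return ret;
```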