[v5,01/14] riscv: only use IPIs to handle cache-flushes on remote cpus

Message ID 20220121163618.351934-2-heiko@sntech.de
State New
Series [v5,01/14] riscv: only use IPIs to handle cache-flushes on remote cpus

Commit Message

Heiko Stübner Jan. 21, 2022, 4:36 p.m. UTC
Right now, the flush_icache functions always use the SBI remote fence
when SBI is available, leaving IPIs only as a fallback mechanism.

IPIs on the other hand are more flexible, as the ipi_ops are initially
set to go through SBI but later will be overwritten to go through the
ACLINT/CLINT.
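
(For illustration only: a minimal standalone C model of that flexibility,
with made-up names rather than the kernel's actual ipi_ops code. Callers go
through a single ops hook that starts out backed by SBI and can later be
overwritten by a CLINT/ACLINT driver without the callers changing.)

#include <stdio.h>

/* Hypothetical ops struct; callers only ever go through ipi_ops. */
struct ipi_ops {
	void (*send_ipi)(int cpu);
};

static void sbi_send_ipi(int cpu)
{
	printf("IPI to cpu %d via SBI (M-mode)\n", cpu);
}

static void clint_send_ipi(int cpu)
{
	printf("IPI to cpu %d via CLINT (S-mode)\n", cpu);
}

/* Initially backed by SBI ... */
static struct ipi_ops ipi_ops = { .send_ipi = sbi_send_ipi };

/* ... and overwritten once an ACLINT/CLINT driver registers. */
static void register_clint_driver(void)
{
	ipi_ops.send_ipi = clint_send_ipi;
}

/* A caller such as flush_icache_all() never knows which backend is active. */
static void remote_fence_i(int cpu)
{
	ipi_ops.send_ipi(cpu);
}

int main(void)
{
	remote_fence_i(1);		/* early: goes through SBI */
	register_clint_driver();
	remote_fence_i(1);		/* later: goes through the CLINT */
	return 0;
}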

In a discussion we had, Nick was of the opinion that "In general we
should prefer doing IPIs on S-mode through CLINT instead of going
through SBI/M-mode, so IMHO we should only be using
on_each_cpu_mask(ipi_remote_fence_i) on flush_icache_all()/
flush_icache_mm() and remove any explicit calls to sbi_remote_fence_i(),
because this way we continue using SBI for doing remote fences even after
CLINT/ACLINT driver is registered, instead of using direct IPIs through
CLINT/ACLINT."

So follow this suggestion and just issue IPI calls, letting the proper
kernel parts handle them.

This also fixes the null-ptr dereference happening when flush_icache_all()
is called before sbi_init().

Suggested-by: Nick Kossifidis <mick@ics.forth.gr>
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
---
 arch/riscv/mm/cacheflush.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

Comments

Anup Patel Jan. 22, 2022, 4:10 a.m. UTC | #1
On Fri, Jan 21, 2022 at 10:07 PM Heiko Stuebner <heiko@sntech.de> wrote:
>
> Right now, the flush_icache functions always use the SBI remote-fence
> when SBI is available, leaving using IPIs as a fallback mechanism.
>
> IPIs on the other hand are more flexible, as the ipi_ops are initially
> set to go through SBI but later will be overwritten to go through the
> ACLINT/CLINT.
>
> In a discussion we had, Nick was of the opinion that "In general we
> should prefer doing IPIs on S-mode through CLINT instead of going
> through SBI/M-mode, so IMHO we should only be using
> on_each_cpu_mask(ipi_remote_fence_i) on flush_icache_all()/
> flush_icache_mm() and remove any explicit calls to sbi_remote_fence_i(),
> because this way we continue using SBI for doing remote fences even after
> CLINT/ACLINT driver is registered, instead of using direct IPIs through
> CLINT/ACLINT."
>
> So follow this suggestion and just do ipi calls to have the proper kernel
> parts do them,
>
> This also fixes the null-ptr dereference happening when flush_icache_all()
> is called before sbi_init().
>
> Suggested-by: Nick Kossifidis <mick@ics.forth.gr>
> Signed-off-by: Heiko Stuebner <heiko@sntech.de>

For Guest/VM, only a virtual IMSIC provides faster IPIs, so in the absence
of a virtual IMSIC the SBI-based IPIs are the best approach.

Like Atish mentioned, please base this work on the ACLINT series, because
the ACLINT series adds the required IPI infrastructure which helps us
select SBI IPIs versus direct S-mode IPIs based on hardware capability.

Regards,
Anup

> ---
>  arch/riscv/mm/cacheflush.c | 8 +-------
>  1 file changed, 1 insertion(+), 7 deletions(-)
>
> diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
> index 6cb7d96ad9c7..c35375cd52ec 100644
> --- a/arch/riscv/mm/cacheflush.c
> +++ b/arch/riscv/mm/cacheflush.c
> @@ -17,11 +17,7 @@ static void ipi_remote_fence_i(void *info)
>  void flush_icache_all(void)
>  {
>         local_flush_icache_all();
> -
> -       if (IS_ENABLED(CONFIG_RISCV_SBI))
> -               sbi_remote_fence_i(NULL);
> -       else
> -               on_each_cpu(ipi_remote_fence_i, NULL, 1);
> +       on_each_cpu(ipi_remote_fence_i, NULL, 1);
>  }
>  EXPORT_SYMBOL(flush_icache_all);
>
> @@ -66,8 +62,6 @@ void flush_icache_mm(struct mm_struct *mm, bool local)
>                  * with flush_icache_deferred().
>                  */
>                 smp_mb();
> -       } else if (IS_ENABLED(CONFIG_RISCV_SBI)) {
> -               sbi_remote_fence_i(&others);
>         } else {
>                 on_each_cpu_mask(&others, ipi_remote_fence_i, NULL, 1);
>         }
> --
> 2.30.2
>
Heiko Stübner Jan. 24, 2022, 12:30 p.m. UTC | #2
Hi Atish,

On Saturday, 22 January 2022, 04:45:52 CET, Atish Patra wrote:
> On Fri, Jan 21, 2022 at 8:37 AM Heiko Stuebner <heiko@sntech.de> wrote:
> >
> > Right now, the flush_icache functions always use the SBI remote-fence
> > when SBI is available, leaving using IPIs as a fallback mechanism.
> >
> > IPIs on the other hand are more flexible, as the ipi_ops are initially
> > set to go through SBI but later will be overwritten to go through the
> > ACLINT/CLINT.
> >
> > In a discussion we had, Nick was of the opinion that "In general we
> > should prefer doing IPIs on S-mode through CLINT instead of going
> > through SBI/M-mode,
> 
> Yes. Once Anup's ACLINT drivers are merged, that should be the
> preferred approach.
> 
> https://github.com/avpatel/linux/commit/416c667fd77d6f1fc310cbf727ec127aaf96cae2
> 
> >so IMHO we should only be using
> > on_each_cpu_mask(ipi_remote_fence_i) on flush_icache_all()/
> > flush_icache_mm() and remove any explicit calls to sbi_remote_fence_i(),
> 
> That's a bit confusing because we will be using SBI calls for all other fences
> while using IPIs for fence.i
> 
> > because this way we continue using SBI for doing remote fences even after
> > CLINT/ACLINT driver is registered, instead of using direct IPIs through
> > CLINT/ACLINT."
> >
> > So follow this suggestion and just do ipi calls to have the proper kernel
> > parts do them,
> >
> > This also fixes the null-ptr dereference happening when flush_icache_all()
> > is called before sbi_init().
> >
> 
> IMHO, this series should only fix the null-ptr dereference issue.
> The IPI based fence (for all) should only be enabled along with the
> ACLINT driver that actually enables S-mode IPIs.

ok, I'll roll this back to simply fixing the null-ptr issue.

Meanwhile I even found a nicer solution without actually touching
the cacheflush code.

Before sbi_init() has run we can assume that we're still before SMP bringup,
so the local_flush_icache_all() in flush_icache_all() will do the trick
just fine, and sbi_remote_fence_i() simply needs to be an empty
function until SBI is initialized.
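
(A rough standalone C sketch of that idea, with made-up names rather than
the actual kernel code: the remote-fence hook starts out pointing at a
harmless no-op instead of being NULL, and the init code later installs the
real backend, so an early caller hits the stub rather than a null-ptr
dereference.)

#include <stdio.h>

static void rfence_noop(void)
{
	/* Before init there are no secondary CPUs to fence; nothing to do. */
}

static void rfence_sbi(void)
{
	printf("remote fence.i via SBI\n");
}

/* Never NULL: starts out as the no-op stub. */
static void (*remote_fence_i)(void) = rfence_noop;

static void fake_sbi_init(void)
{
	remote_fence_i = rfence_sbi;	/* install the real backend */
}

int main(void)
{
	remote_fence_i();	/* early call: safe no-op instead of a NULL deref */
	fake_sbi_init();
	remote_fence_i();	/* after init: real SBI remote fence */
	return 0;
}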


Heiko

> 
> > Suggested-by: Nick Kossifidis <mick@ics.forth.gr>
> > Signed-off-by: Heiko Stuebner <heiko@sntech.de>
> > ---
> >  arch/riscv/mm/cacheflush.c | 8 +-------
> >  1 file changed, 1 insertion(+), 7 deletions(-)
> >
> > diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
> > index 6cb7d96ad9c7..c35375cd52ec 100644
> > --- a/arch/riscv/mm/cacheflush.c
> > +++ b/arch/riscv/mm/cacheflush.c
> > @@ -17,11 +17,7 @@ static void ipi_remote_fence_i(void *info)
> >  void flush_icache_all(void)
> >  {
> >         local_flush_icache_all();
> > -
> > -       if (IS_ENABLED(CONFIG_RISCV_SBI))
> > -               sbi_remote_fence_i(NULL);
> > -       else
> > -               on_each_cpu(ipi_remote_fence_i, NULL, 1);
> > +       on_each_cpu(ipi_remote_fence_i, NULL, 1);
> >  }
> >  EXPORT_SYMBOL(flush_icache_all);
> >
> > @@ -66,8 +62,6 @@ void flush_icache_mm(struct mm_struct *mm, bool local)
> >                  * with flush_icache_deferred().
> >                  */
> >                 smp_mb();
> > -       } else if (IS_ENABLED(CONFIG_RISCV_SBI)) {
> > -               sbi_remote_fence_i(&others);
> >         } else {
> >                 on_each_cpu_mask(&others, ipi_remote_fence_i, NULL, 1);
> >         }
> > --
> > 2.30.2
> >
> >

Patch

diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
index 6cb7d96ad9c7..c35375cd52ec 100644
--- a/arch/riscv/mm/cacheflush.c
+++ b/arch/riscv/mm/cacheflush.c
@@ -17,11 +17,7 @@  static void ipi_remote_fence_i(void *info)
 void flush_icache_all(void)
 {
 	local_flush_icache_all();
-
-	if (IS_ENABLED(CONFIG_RISCV_SBI))
-		sbi_remote_fence_i(NULL);
-	else
-		on_each_cpu(ipi_remote_fence_i, NULL, 1);
+	on_each_cpu(ipi_remote_fence_i, NULL, 1);
 }
 EXPORT_SYMBOL(flush_icache_all);
 
@@ -66,8 +62,6 @@  void flush_icache_mm(struct mm_struct *mm, bool local)
 		 * with flush_icache_deferred().
 		 */
 		smp_mb();
-	} else if (IS_ENABLED(CONFIG_RISCV_SBI)) {
-		sbi_remote_fence_i(&others);
 	} else {
 		on_each_cpu_mask(&others, ipi_remote_fence_i, NULL, 1);
 	}