[RFC,02/11] arm64: tlb: Add DSB ISHST prior to TLBI in __flush_tlb_[kernel_]pgtable()

Message ID 1535125966-7666-3-git-send-email-will.deacon@arm.com
State Superseded
Series Avoid synchronous TLB invalidation for intermediate page-table entries on arm64

Commit Message

Will Deacon Aug. 24, 2018, 3:52 p.m. UTC
__flush_tlb_[kernel_]pgtable() rely on set_pXd() having a DSB after
writing the new table entry and therefore avoid the barrier prior to the
TLBI instruction.

In preparation for delaying our walk-cache invalidation on the unmap()
path, move the DSB into the TLB invalidation routines.

Signed-off-by: Will Deacon <will.deacon@arm.com>

---
 arch/arm64/include/asm/tlbflush.h | 2 ++
 1 file changed, 2 insertions(+)

-- 
2.1.4
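
For context, the barrier the commit message refers to lives in the set_pXd() helpers, which at the time ended in a DSB. A rough sketch, based on arch/arm64/include/asm/pgtable.h of that era and trimmed for illustration (not part of this patch):

	static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
	{
		WRITE_ONCE(*pmdp, pmd);	/* store the new table entry */
		dsb(ishst);		/* make the store visible to the table walker */
		isb();
	}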

Comments

Peter Zijlstra Aug. 24, 2018, 5:56 p.m. UTC | #1
On Fri, Aug 24, 2018 at 04:52:37PM +0100, Will Deacon wrote:
> __flush_tlb_[kernel_]pgtable() rely on set_pXd() having a DSB after
> writing the new table entry and therefore avoid the barrier prior to the
> TLBI instruction.
> 
> In preparation for delaying our walk-cache invalidation on the unmap()
> path, move the DSB into the TLB invalidation routines.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/tlbflush.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 7e2a35424ca4..e257f8655b84 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -213,6 +213,7 @@ static inline void __flush_tlb_pgtable(struct mm_struct *mm,
>  {
>  	unsigned long addr = __TLBI_VADDR(uaddr, ASID(mm));
>  
> +	dsb(ishst);
>  	__tlbi(vae1is, addr);
>  	__tlbi_user(vae1is, addr);
>  	dsb(ish);
> @@ -222,6 +223,7 @@ static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
>  {
>  	unsigned long addr = __TLBI_VADDR(kaddr, 0);
>  
> +	dsb(ishst);
>  	__tlbi(vaae1is, addr);
>  	dsb(ish);
>  }

I would suggest these barriers -- like any other barriers -- carry a
comment that explains the required ordering.

I think this here reads like:

	STORE: unhook page

	DSB-ishst: wait for all stores to complete
	TLBI: invalidate broadcast
	DSB-ish: wait for TLBI to complete

And the 'newly' placed DSB-ishst ensures the page is observed unlinked
before we issue the invalidate.
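
To make the suggestion concrete, the patched routine with such comments might read as follows; the comment wording here is illustrative, not taken from the series:

	static inline void __flush_tlb_pgtable(struct mm_struct *mm,
					       unsigned long uaddr)
	{
		unsigned long addr = __TLBI_VADDR(uaddr, ASID(mm));

		/* Ensure the page-table store is visible before invalidating */
		dsb(ishst);
		__tlbi(vae1is, addr);		/* broadcast invalidate by VA+ASID */
		__tlbi_user(vae1is, addr);	/* likewise for the user ASID under kpti */
		/* Wait for the broadcast invalidation to complete on all CPUs */
		dsb(ish);
	}
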
Will Deacon Aug. 28, 2018, 1:03 p.m. UTC | #2
On Fri, Aug 24, 2018 at 07:56:09PM +0200, Peter Zijlstra wrote:
> On Fri, Aug 24, 2018 at 04:52:37PM +0100, Will Deacon wrote:
> > __flush_tlb_[kernel_]pgtable() rely on set_pXd() having a DSB after
> > writing the new table entry and therefore avoid the barrier prior to the
> > TLBI instruction.
> > 
> > In preparation for delaying our walk-cache invalidation on the unmap()
> > path, move the DSB into the TLB invalidation routines.
> > 
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> > ---
> >  arch/arm64/include/asm/tlbflush.h | 2 ++
> >  1 file changed, 2 insertions(+)
> > 
> > diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> > index 7e2a35424ca4..e257f8655b84 100644
> > --- a/arch/arm64/include/asm/tlbflush.h
> > +++ b/arch/arm64/include/asm/tlbflush.h
> > @@ -213,6 +213,7 @@ static inline void __flush_tlb_pgtable(struct mm_struct *mm,
> >  {
> >  	unsigned long addr = __TLBI_VADDR(uaddr, ASID(mm));
> >  
> > +	dsb(ishst);
> >  	__tlbi(vae1is, addr);
> >  	__tlbi_user(vae1is, addr);
> >  	dsb(ish);
> > @@ -222,6 +223,7 @@ static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
> >  {
> >  	unsigned long addr = __TLBI_VADDR(kaddr, 0);
> >  
> > +	dsb(ishst);
> >  	__tlbi(vaae1is, addr);
> >  	dsb(ish);
> >  }
> 
> I would suggest these barriers -- like any other barriers -- carry a
> comment that explains the required ordering.
> 
> I think this here reads like:
> 
> 	STORE: unhook page
> 
> 	DSB-ishst: wait for all stores to complete
> 	TLBI: invalidate broadcast
> 	DSB-ish: wait for TLBI to complete
> 
> And the 'newly' placed DSB-ishst ensures the page is observed unlinked
> before we issue the invalidate.

Yeah, not a bad idea. We already have a big block comment in this file but
it's desperately out of date, so lemme rewrite that and justify the barriers
at the same time.

Will
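
As a sketch of how a barrier-justifying comment might read on the kernel-side variant (hypothetical wording; the actual comment was written later in the series):

	static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
	{
		unsigned long addr = __TLBI_VADDR(kaddr, 0);

		/*
		 * The caller has just unhooked a page-table page: order that
		 * store before the TLBI so no walker can cache the stale
		 * entry, then wait for the invalidation to complete.
		 */
		dsb(ishst);
		__tlbi(vaae1is, addr);	/* invalidate by VA, all ASIDs, inner shareable */
		dsb(ish);
	}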

Patch

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 7e2a35424ca4..e257f8655b84 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -213,6 +213,7 @@ static inline void __flush_tlb_pgtable(struct mm_struct *mm,
 {
 	unsigned long addr = __TLBI_VADDR(uaddr, ASID(mm));
 
+	dsb(ishst);
 	__tlbi(vae1is, addr);
 	__tlbi_user(vae1is, addr);
 	dsb(ish);
@@ -222,6 +223,7 @@ static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
 {
 	unsigned long addr = __TLBI_VADDR(kaddr, 0);
 
+	dsb(ishst);
 	__tlbi(vaae1is, addr);
 	dsb(ish);
 }
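
For reference, the unmap-side pattern these barriers protect looks roughly like the following simplified sketch of a pmd_free_pte_page()-style caller (names and error handling elided):

	pmd_clear(pmdp);			/* STORE: unhook the pte page */
	__flush_tlb_kernel_pgtable(addr);	/* dsb(ishst); TLBI; dsb(ish) */
	pte_free_kernel(NULL, table);		/* safe to free once stale walks are gone */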