arm64: mm: Ensure writes to swapper are ordered wrt subsequent cache maintenance

Message ID 1529681025-20850-1-git-send-email-will.deacon@arm.com
State Accepted
Commit 71c8fc0c96abf8e53e74ed4d891d671e585f9076
Series arm64: mm: Ensure writes to swapper are ordered wrt subsequent cache maintenance

Commit Message

Will Deacon June 22, 2018, 3:23 p.m. UTC
When rewriting swapper using nG mappings, we must perform cache
maintenance around each page table access in order to avoid coherency
problems with the host's cacheable alias under KVM. To ensure correct
ordering of the maintenance with respect to Device memory accesses made
with the Stage-1 MMU disabled, DMBs need to be added between the
maintenance and the corresponding memory access.

This patch adds a missing DMB between writing a new page table entry and
performing a clean+invalidate on the same line.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

---
 arch/arm64/mm/proc.S | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

-- 
2.1.4
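
For reference, here is a minimal standalone sketch of the ordering the patch
enforces. The registers x0 and x1 are hypothetical stand-ins for the macro's
\type value and cur_\()\type\()p pointer; this illustrates the required
sequence and is not the patch itself:

	orr	x0, x0, #PTE_NG		// mark the new entry not-global
	str	x0, [x1]		// write the updated entry to the page table
	dmb	sy			// order the store before the maintenance below
	dc	civac, x1		// clean+invalidate the line holding the entry so
					// that the update is visible via the host's
					// cacheable alias and to all CPUs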

Comments

Mark Rutland June 22, 2018, 3:25 p.m. UTC | #1
On Fri, Jun 22, 2018 at 04:23:45PM +0100, Will Deacon wrote:
> When rewriting swapper using nG mappings, we must perform cache
> maintenance around each page table access in order to avoid coherency
> problems with the host's cacheable alias under KVM. To ensure correct
> ordering of the maintenance with respect to Device memory accesses made
> with the Stage-1 MMU disabled, DMBs need to be added between the
> maintenance and the corresponding memory access.
> 
> This patch adds a missing DMB between writing a new page table entry and
> performing a clean+invalidate on the same line.
> 
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/mm/proc.S | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 5f9a73a4452c..03646e6a2ef4 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -217,8 +217,9 @@ ENDPROC(idmap_cpu_replace_ttbr1)
>  
>  	.macro __idmap_kpti_put_pgtable_ent_ng, type
>  	orr	\type, \type, #PTE_NG		// Same bit for blocks and pages
> -	str	\type, [cur_\()\type\()p]	// Update the entry and ensure it
> -	dc	civac, cur_\()\type\()p		// is visible to all CPUs.
> +	str	\type, [cur_\()\type\()p]	// Update the entry and ensure
> +	dmb	sy				// that it is visible to all
> +	dc	civac, cur_\()\type\()p		// CPUs.
>  	.endm
>  
>  /*
> -- 
> 2.1.4
> 
Catalin Marinas June 22, 2018, 4:24 p.m. UTC | #2
On Fri, Jun 22, 2018 at 04:23:45PM +0100, Will Deacon wrote:
> When rewriting swapper using nG mappings, we must perform cache
> maintenance around each page table access in order to avoid coherency
> problems with the host's cacheable alias under KVM. To ensure correct
> ordering of the maintenance with respect to Device memory accesses made
> with the Stage-1 MMU disabled, DMBs need to be added between the
> maintenance and the corresponding memory access.
> 
> This patch adds a missing DMB between writing a new page table entry and
> performing a clean+invalidate on the same line.
> 
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Applied. Thanks.

-- 
Catalin

Patch

diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 5f9a73a4452c..03646e6a2ef4 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -217,8 +217,9 @@ ENDPROC(idmap_cpu_replace_ttbr1)
 
 	.macro __idmap_kpti_put_pgtable_ent_ng, type
 	orr	\type, \type, #PTE_NG		// Same bit for blocks and pages
-	str	\type, [cur_\()\type\()p]	// Update the entry and ensure it
-	dc	civac, cur_\()\type\()p		// is visible to all CPUs.
+	str	\type, [cur_\()\type\()p]	// Update the entry and ensure
+	dmb	sy				// that it is visible to all
+	dc	civac, cur_\()\type\()p		// CPUs.
 	.endm
 
 /*
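
For clarity, the \() separators above simply delimit the macro argument, so a
hypothetical invocation such as "__idmap_kpti_put_pgtable_ent_ng pte" (where
pte and cur_ptep would be register aliases set up by the caller) expands to:

	orr	pte, pte, #PTE_NG		// Same bit for blocks and pages
	str	pte, [cur_ptep]			// Update the entry and ensure
	dmb	sy				// that it is visible to all
	dc	civac, cur_ptep			// CPUs.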