
arm64: KVM: Take S1 walks into account when determining S2 write faults

Message ID 1475149021-13288-1-git-send-email-will.deacon@arm.com
State Accepted
Commit 60e21a0ef54cd836b9eb22c7cb396989b5b11648

Commit Message

Will Deacon Sept. 29, 2016, 11:37 a.m. UTC
The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
generated by a read or a write instruction. For stage 2 data aborts
generated by a stage 1 translation table walk (i.e. the actual page
table access faults at EL2), the WnR bit therefore reports whether the
instruction generating the walk was a load or a store, *not* whether the
page table walker was reading or writing the entry.
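
For reference, the helper being replaced (removed by the diff at the end of
this mail) derives that distinction from WnR alone:

	static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
	{
		/* Pre-patch: a data abort is a write iff ESR_ELx.WnR is set */
		return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR);
	}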

For page tables marked as read-only at stage 2 (e.g. due to KSM merging
them with the tables from another guest), this could result in livelock,
where a page table walk generated by a load instruction attempts to
set the access flag in the stage 1 descriptor, but fails to trigger
CoW in the host since only a read fault is reported.

This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to
take into account stage 2 faults in stage 1 walks. Since DBM cannot be
disabled at EL2 for CPUs that implement it, we assume that these faults
are always caused by writes, avoiding the livelock situation at the
expense of occasional, spurious CoWs.
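
With this patch applied, the helper (reproduced from the diff below) reads:

	static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
	{
		/* WnR set, or a stage 2 fault on a stage 1 walk (AF/DBM update) */
		return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR) ||
			kvm_vcpu_dabt_iss1tw(vcpu);
	}

The definition also moves after kvm_vcpu_dabt_iss1tw() in kvm_emulate.h so
that the new call resolves.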

We could, in theory, do a bit better by checking the guest TCR
configuration and inspecting the page table to see why the PTE faulted.
However, I doubt this is measurable in practice, and the threat of
livelock is real.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Julien Grall <julien.grall@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

---
 arch/arm64/include/asm/kvm_emulate.h | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

-- 
2.1.4


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

Comments

Mark Rutland Sept. 29, 2016, 5:16 p.m. UTC | #1
[Adding Julien, who seemed to be missing from the real Cc list]

Mark.

On Thu, Sep 29, 2016 at 12:37:01PM +0100, Will Deacon wrote:
> The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
> generated by a read or a write instruction. For stage 2 data aborts
> generated by a stage 1 translation table walk (i.e. the actual page
> table access faults at EL2), the WnR bit therefore reports whether the
> instruction generating the walk was a load or a store, *not* whether the
> page table walker was reading or writing the entry.
> 
> For page tables marked as read-only at stage 2 (e.g. due to KSM merging
> them with the tables from another guest), this could result in livelock,
> where a page table walk generated by a load instruction attempts to
> set the access flag in the stage 1 descriptor, but fails to trigger
> CoW in the host since only a read fault is reported.
> 
> This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to
> take into account stage 2 faults in stage 1 walks. Since DBM cannot be
> disabled at EL2 for CPUs that implement it, we assume that these faults
> are always caused by writes, avoiding the livelock situation at the
> expense of occasional, spurious CoWs.
> 
> We could, in theory, do a bit better by checking the guest TCR
> configuration and inspecting the page table to see why the PTE faulted.
> However, I doubt this is measurable in practice, and the threat of
> livelock is real.
> 
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/kvm_emulate.h | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 4cdeae3b17c6..948a9a8a9297 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -167,11 +167,6 @@ static inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_ISV);
>  }
>  
> -static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
> -{
> -	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR);
> -}
> -
>  static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
>  {
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SSE);
> @@ -192,6 +187,12 @@ static inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
>  }
>  
> +static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
> +{
> +	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR) ||
> +		kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
> +}
> +
>  static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
>  {
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_CM);
> -- 
> 2.1.4


Christoffer Dall Sept. 29, 2016, 7:14 p.m. UTC | #2
On Thu, Sep 29, 2016 at 12:37:01PM +0100, Will Deacon wrote:
> The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
> generated by a read or a write instruction. For stage 2 data aborts
> generated by a stage 1 translation table walk (i.e. the actual page
> table access faults at EL2), the WnR bit therefore reports whether the
> instruction generating the walk was a load or a store, *not* whether the
> page table walker was reading or writing the entry.
> 
> For page tables marked as read-only at stage 2 (e.g. due to KSM merging
> them with the tables from another guest), this could result in livelock,
> where a page table walk generated by a load instruction attempts to
> set the access flag in the stage 1 descriptor, but fails to trigger
> CoW in the host since only a read fault is reported.
> 
> This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to
> take into account stage 2 faults in stage 1 walks. Since DBM cannot be
> disabled at EL2 for CPUs that implement it, we assume that these faults
> are always caused by writes, avoiding the livelock situation at the
> expense of occasional, spurious CoWs.
> 
> We could, in theory, do a bit better by checking the guest TCR
> configuration and inspecting the page table to see why the PTE faulted.
> However, I doubt this is measurable in practice, and the threat of
> livelock is real.
> 
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>


Applied,
-Christoffer

Will Deacon Oct. 17, 2016, 10:20 a.m. UTC | #3
On Thu, Sep 29, 2016 at 09:14:32PM +0200, Christoffer Dall wrote:
> On Thu, Sep 29, 2016 at 12:37:01PM +0100, Will Deacon wrote:
> > The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
> > generated by a read or a write instruction. For stage 2 data aborts
> > generated by a stage 1 translation table walk (i.e. the actual page
> > table access faults at EL2), the WnR bit therefore reports whether the
> > instruction generating the walk was a load or a store, *not* whether the
> > page table walker was reading or writing the entry.
> > 
> > For page tables marked as read-only at stage 2 (e.g. due to KSM merging
> > them with the tables from another guest), this could result in livelock,
> > where a page table walk generated by a load instruction attempts to
> > set the access flag in the stage 1 descriptor, but fails to trigger
> > CoW in the host since only a read fault is reported.
> > 
> > This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to
> > take into account stage 2 faults in stage 1 walks. Since DBM cannot be
> > disabled at EL2 for CPUs that implement it, we assume that these faults
> > are always caused by writes, avoiding the livelock situation at the
> > expense of occasional, spurious CoWs.
> > 
> > We could, in theory, do a bit better by checking the guest TCR
> > configuration and inspecting the page table to see why the PTE faulted.
> > However, I doubt this is measurable in practice, and the threat of
> > livelock is real.
> > 
> > Cc: Marc Zyngier <marc.zyngier@arm.com>
> > Cc: Christoffer Dall <christoffer.dall@linaro.org>
> > Cc: Julien Grall <julien.grall@arm.com>
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> 
> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
> 
> Applied,

This doesn't seem to be in 4.9-rc1. Could you please dig it up?

Ta,

Will


Patch

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 4cdeae3b17c6..948a9a8a9297 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -167,11 +167,6 @@  static inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_ISV);
 }
 
-static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
-{
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR);
-}
-
 static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
 {
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SSE);
@@ -192,6 +187,12 @@  static inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
 }
 
+static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
+{
+	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR) ||
+		kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
+}
+
 static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
 {
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_CM);