target/arm: Correct MTE tag checking for reverse-copy MOPS

Message ID 20231110162546.2192512-1-peter.maydell@linaro.org
State Superseded
Series target/arm: Correct MTE tag checking for reverse-copy MOPS

Commit Message

Peter Maydell Nov. 10, 2023, 4:25 p.m. UTC
When we are doing a FEAT_MOPS copy that must be performed backwards,
we call mte_mops_probe_rev(), passing it the address of the last byte
in the region we are probing.  However, allocation_tag_mem_probe()
wants the address of the first byte to get the tag memory for.
Because we passed it (ptr, size), we could incorrectly trip the
allocation_tag_mem_probe() check for "does this access run across to
the following page", and if that following page happened not to be
valid then we would assert.

We know we will only ever be dealing with a single page because the
code that calls mte_mops_probe_rev() ensures that.  We could make
mte_mops_probe_rev() pass 'ptr - (size - 1)' to
allocation_tag_mem_probe(), but then we would have to adjust the
returned 'mem' pointer to get back to the tag RAM for the last byte
of the region.  It's simpler to just pass in a size of 1 byte,
because we know that allocation_tag_mem_probe() in pure-probe
single-page mode doesn't care about the size.

Fixes: 69c51dc3723b ("target/arm: Implement MTE tag-checking functions for FEAT_MOPS copies")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/mte_helper.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
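
As an aside for readers unfamiliar with the check in question: the snippet
below is a simplified, self-contained illustration of the failure mode
described above, not the actual allocation_tag_mem_probe() logic, and the
fixed 4K PAGE_SIZE is an assumption made only for the example.  It shows why
passing the last-byte address together with the full region size can falsely
look like a page-crossing access, while a size of 1 cannot.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u   /* assumed 4K page, for illustration only */

/*
 * Simplified stand-in for the "does this access run across to the
 * following page?" test: an access of 'size' bytes starting at 'ptr'
 * crosses into the next page if its last byte lands on a later page.
 */
static bool crosses_page(uint64_t ptr, uint64_t size)
{
    return (ptr & (PAGE_SIZE - 1)) + size - 1 >= PAGE_SIZE;
}

int main(void)
{
    uint64_t size = 256;
    uint64_t first_byte = 0x10000f00;            /* start of the region */
    uint64_t last_byte = first_byte + size - 1;  /* what mte_mops_probe_rev() gets */

    /* The region itself fits in one page ... */
    printf("%d\n", crosses_page(first_byte, size));  /* 0 */
    /* ... but treating the last byte as the start falsely trips the check ... */
    printf("%d\n", crosses_page(last_byte, size));   /* 1 */
    /* ... whereas probing a single byte at the last address does not. */
    printf("%d\n", crosses_page(last_byte, 1));      /* 0 */
    return 0;
}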

Comments

Philippe Mathieu-Daudé Nov. 10, 2023, 5:39 p.m. UTC | #1

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Richard Henderson Nov. 12, 2023, 4:22 p.m. UTC | #2

Acked-by: Richard Henderson <richard.henderson@linaro.org>


r~

Patch

diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index 70ac876105f..ffb8ea1c349 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -1101,10 +1101,18 @@  uint64_t mte_mops_probe_rev(CPUARMState *env, uint64_t ptr, uint64_t size,
     uint32_t n;
 
     mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
-    /* True probe; this will never fault */
+    /*
+     * True probe; this will never fault. Note that our caller passes
+     * us a pointer to the end of the region, but allocation_tag_mem_probe()
+     * wants a pointer to the start. Because we know we don't span a page
+     * boundary and that allocation_tag_mem_probe() doesn't otherwise care
+     * about the size, pass in a size of 1 byte. This is simpler than
+     * adjusting the ptr to point to the start of the region and then having
+     * to adjust the returned 'mem' to get the end of the tag memory.
+     */
     mem = allocation_tag_mem_probe(env, mmu_idx, ptr,
                                    w ? MMU_DATA_STORE : MMU_DATA_LOAD,
-                                   size, MMU_DATA_LOAD, true, 0);
+                                   1, MMU_DATA_LOAD, true, 0);
     if (!mem) {
         return size;
     }
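
For context on the "we know we will only ever be dealing with a single page"
statement in the commit message: the callers of mte_mops_probe_rev() limit
the length so that a descending copy never crosses a page boundary.  The
sketch below illustrates that clamping arithmetic only; the function name and
the fixed 4K PAGE_SIZE are assumptions for the example, not the real caller
code.

#include <stdint.h>

#define PAGE_SIZE 4096u   /* assumed 4K page, for illustration only */

/*
 * For a reverse (descending) copy, 'ptr' is the address of the last
 * byte of the region.  Counting backwards from and including 'ptr',
 * only the bytes from the start of ptr's page are on the same page,
 * so a caller can cap the region it asks about at that many bytes.
 */
static uint64_t clamp_rev_region_to_page(uint64_t ptr, uint64_t size)
{
    uint64_t avail = (ptr & (PAGE_SIZE - 1)) + 1;  /* bytes on ptr's page, ending at ptr */
    return size < avail ? size : avail;
}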