
linux-user/aarch64: Do not clear PROT_MTE on mprotect

Message ID 20220711031420.17820-1-richard.henderson@linaro.org
State New
Series linux-user/aarch64: Do not clear PROT_MTE on mprotect

Commit Message

Richard Henderson July 11, 2022, 3:14 a.m. UTC
The documentation for PROT_MTE says that it cannot be cleared
by mprotect.  Further, the kernel's implementation of VM_ARCH_CLEAR
contains PROT_BTI, confirming that bit should be cleared.

Introduce PAGE_TARGET_STICKY to allow target/arch/cpu.h to control
which bits may be reset during page_set_flags.  This is sort of the
opposite of VM_ARCH_CLEAR, but works better with qemu's PAGE_* bits
that are separate from PROT_* bits.

Reported-by: Vitaly Buka <vitalybuka@google.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---

My initial reaction to the bug report was that we weren't treating
the other PAGE_* bits properly during the update.  But auditing the
code more thoroughly shows we are -- it's just PROT_MTE that's not
up to scratch.
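
For illustration only (not part of the patch), here is a rough
standalone model of the sticky-bit filtering.  The PAGE_* values and
the model_set_flags() helper are made up for the sketch and do not
match qemu's actual bit assignments; it only mirrors the non-reset
branch of page_set_flags().

/*
 * Sticky-bit model: a later flag update keeps PAGE_ANON and the
 * target-sticky bit (PAGE_MTE) but drops PAGE_BTI, like an mprotect
 * that omits PROT_MTE and PROT_BTI.
 */
#include <assert.h>
#include <stdio.h>

#define PAGE_READ           0x0001
#define PAGE_WRITE          0x0002
#define PAGE_ANON           0x0010
#define PAGE_TARGET_1       0x0100   /* stands in for PAGE_BTI */
#define PAGE_TARGET_2       0x0200   /* stands in for PAGE_MTE */

#define PAGE_TARGET_STICKY  PAGE_TARGET_2
#define PAGE_STICKY         (PAGE_ANON | PAGE_TARGET_STICKY)

static int model_set_flags(int old_flags, int new_flags)
{
    /* Mirrors: p->flags = (p->flags & PAGE_STICKY) | flags; */
    return (old_flags & PAGE_STICKY) | new_flags;
}

int main(void)
{
    int flags = PAGE_READ | PAGE_WRITE | PAGE_ANON
              | PAGE_TARGET_1 | PAGE_TARGET_2;

    /* Simulate mprotect(PROT_READ): only PAGE_READ is requested. */
    flags = model_set_flags(flags, PAGE_READ);

    assert(flags & PAGE_ANON);          /* MAP_ANON preserved */
    assert(flags & PAGE_TARGET_2);      /* "PROT_MTE" preserved */
    assert(!(flags & PAGE_TARGET_1));   /* "PROT_BTI" cleared */
    assert(!(flags & PAGE_WRITE));      /* write permission dropped */

    printf("sticky-bit model OK\n");
    return 0;
}

Running the model trips none of the asserts: the anonymous and MTE
bits survive the simulated mprotect while the BTI bit is dropped.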


r~

---
 target/arm/cpu.h          |  7 +++++--
 accel/tcg/translate-all.c | 13 +++++++++++--
 2 files changed, 16 insertions(+), 4 deletions(-)

Comments

Peter Maydell July 14, 2022, 2:54 p.m. UTC | #1
On Mon, 11 Jul 2022 at 04:14, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> The documentation for PROT_MTE says that it cannot be cleared
> by mprotect.  Further, the kernel's implementation of VM_ARCH_CLEAR
> contains PROT_BTI, confirming that bit should be cleared.
>
> Introduce PAGE_TARGET_STICKY to allow target/arch/cpu.h to control
> which bits may be reset during page_set_flags.  This is sort of the
> opposite of VM_ARCH_CLEAR, but works better with qemu's PAGE_* bits
> that are separate from PROT_* bits.
>
> Reported-by: Vitaly Buka <vitalybuka@google.com>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>
> My initial reaction to the bug report was that we weren't treating
> the other PAGE_* bits properly during the update.  But auditing the
> code more thoroughly shows we are -- it's just PROT_MTE that's not
> up to scratch.


Applied to target-arm.next, thanks.

-- PMM

Patch

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 1f4f3e0485..35c279e1f1 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3385,9 +3385,12 @@  static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
 
 /*
  * AArch64 usage of the PAGE_TARGET_* bits for linux-user.
+ * Note that with the Linux kernel, PROT_MTE may not be cleared by
+ * mprotect, but PROT_BTI may be cleared.  Cf. the kernel's VM_ARCH_CLEAR.
  */
-#define PAGE_BTI  PAGE_TARGET_1
-#define PAGE_MTE  PAGE_TARGET_2
+#define PAGE_BTI            PAGE_TARGET_1
+#define PAGE_MTE            PAGE_TARGET_2
+#define PAGE_TARGET_STICKY  PAGE_MTE
 
 #ifdef TARGET_TAGGED_ADDRESSES
 /**
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 8fd23a9d05..ef62a199c7 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -2256,6 +2256,15 @@  int page_get_flags(target_ulong address)
     return p->flags;
 }
 
+/*
+ * Allow the target to decide if PAGE_TARGET_[12] may be reset.
+ * By default, they are not kept.
+ */
+#ifndef PAGE_TARGET_STICKY
+#define PAGE_TARGET_STICKY  0
+#endif
+#define PAGE_STICKY  (PAGE_ANON | PAGE_TARGET_STICKY)
+
 /* Modify the flags of a page and invalidate the code if necessary.
    The flag PAGE_WRITE_ORG is positioned automatically depending
    on PAGE_WRITE.  The mmap_lock should already be held.  */
@@ -2299,8 +2308,8 @@  void page_set_flags(target_ulong start, target_ulong end, int flags)
             p->target_data = NULL;
             p->flags = flags;
         } else {
-            /* Using mprotect on a page does not change MAP_ANON. */
-            p->flags = (p->flags & PAGE_ANON) | flags;
+            /* Using mprotect on a page does not change sticky bits. */
+            p->flags = (p->flags & PAGE_STICKY) | flags;
         }
     }
 }