
softmmu: Always initialize xlat in address_space_translate_for_iotlb

Message ID 20220615163846.313229-1-richard.henderson@linaro.org
State New
Series softmmu: Always initialize xlat in address_space_translate_for_iotlb

Commit Message

Richard Henderson June 15, 2022, 4:38 p.m. UTC
The bug is an uninitialized memory read, along the translate_fail
path, which results in garbage being read from iotlb_to_section,
which can lead to a crash in io_readx/io_writex.

The bug may be fixed by writing any value with zero
in ~TARGET_PAGE_MASK, so that the call to iotlb_to_section using
the xlat'ed address returns io_mem_unassigned, as desired by the
translate_fail path.

It is most useful to record the original physical page address,
which will eventually be logged by memory_region_access_valid
when the access is rejected by unassigned_mem_accepts.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1065
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 softmmu/physmem.c | 3 +++
 1 file changed, 3 insertions(+)
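
As an illustration of the mechanism the commit message describes (a standalone sketch, not QEMU code): iotlb_to_section() reuses the bits below TARGET_PAGE_MASK of the recorded value as an index into a section table, and by convention the lowest index is the unassigned section. The page size, table contents and names below are assumptions made up for the example.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_MASK (~(((uint64_t)1 << TARGET_PAGE_BITS) - 1))

/* Stand-in section table: index 0 plays the role of io_mem_unassigned. */
static const char *sections[] = { "io_mem_unassigned", "some_device" };

/* Stand-in for iotlb_to_section(): the section index lives in the low bits. */
static const char *lookup_section(uint64_t xlat)
{
    uint64_t index = xlat & ~TARGET_PAGE_MASK;
    assert(index < 2);  /* an uninitialized xlat makes this index garbage */
    return sections[index];
}

int main(void)
{
    uint64_t addr = 0xdeadbee000;             /* any value with zero low bits */
    printf("%s\n", lookup_section(addr));     /* prints io_mem_unassigned */
    return 0;
}
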

Comments

Peter Maydell June 20, 2022, 12:52 p.m. UTC | #1
On Wed, 15 Jun 2022 at 17:43, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> The bug is an uninitialized memory read, along the translate_fail
> path, which results in garbage being read from iotlb_to_section,
> which can lead to a crash in io_readx/io_writex.
>
> The bug may be fixed by writing any value with zero
> in ~TARGET_PAGE_MASK, so that the call to iotlb_to_section using
> the xlat'ed address returns io_mem_unassigned, as desired by the
> translate_fail path.
>
> It is most useful to record the original physical page address,
> which will eventually be logged by memory_region_access_valid
> when the access is rejected by unassigned_mem_accepts.
>
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1065
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  softmmu/physmem.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index 657841eed0..fb0f0709b5 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -681,6 +681,9 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
>      AddressSpaceDispatch *d =
>          qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
>
> +    /* Record the original phys page for use by the translate_fail path. */
> +    *xlat = addr;

There's no doc comment for address_space_translate_for_iotlb(),
so there's nothing that says explicitly that addr is obliged
to be page aligned, although it happens that its only caller
does pass a page-aligned address. Were we already implicitly
requiring a page-aligned address here, or does not masking
addr before assigning to *xlat impose a new requirement ?

thanks
-- PMM
Richard Henderson June 20, 2022, 4:53 p.m. UTC | #2
On 6/20/22 05:52, Peter Maydell wrote:
> On Wed, 15 Jun 2022 at 17:43, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> The bug is an uninitialized memory read, along the translate_fail
>> path, which results in garbage being read from iotlb_to_section,
>> which can lead to a crash in io_readx/io_writex.
>>
>> The bug may be fixed by writing any value with zero
>> in ~TARGET_PAGE_MASK, so that the call to iotlb_to_section using
>> the xlat'ed address returns io_mem_unassigned, as desired by the
>> translate_fail path.
>>
>> It is most useful to record the original physical page address,
>> which will eventually be logged by memory_region_access_valid
>> when the access is rejected by unassigned_mem_accepts.
>>
>> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1065
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>>   softmmu/physmem.c | 3 +++
>>   1 file changed, 3 insertions(+)
>>
>> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
>> index 657841eed0..fb0f0709b5 100644
>> --- a/softmmu/physmem.c
>> +++ b/softmmu/physmem.c
>> @@ -681,6 +681,9 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
>>       AddressSpaceDispatch *d =
>>           qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
>>
>> +    /* Record the original phys page for use by the translate_fail path. */
>> +    *xlat = addr;
> 
> There's no doc comment for address_space_translate_for_iotlb(),
> so there's nothing that says explicitly that addr is obliged
> to be page aligned, although it happens that its only caller
> does pass a page-aligned address. Were we already implicitly
> requiring a page-aligned address here, or does not masking
> addr before assigning to *xlat impose a new requirement ?

I have no idea.  The whole lookup process is both undocumented and twistedly complex.  I'm 
willing to add an extra masking operation here, if it seems necessary?


r~
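
For reference, the extra masking mentioned here would amount to something like the following at the same spot in the patched function (a sketch of the alternative under discussion, not code that was posted):

    /*
     * Record the original phys page for use by the translate_fail path,
     * masked so that no low bits can leak into the section index that
     * iotlb_to_section() derives from this value.
     */
    *xlat = addr & TARGET_PAGE_MASK;
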
Peter Maydell June 21, 2022, 3:06 p.m. UTC | #3
On Mon, 20 Jun 2022 at 17:54, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> On 6/20/22 05:52, Peter Maydell wrote:
> > On Wed, 15 Jun 2022 at 17:43, Richard Henderson
> > <richard.henderson@linaro.org> wrote:
> >>
> >> The bug is an uninitialized memory read, along the translate_fail
> >> path, which results in garbage being read from iotlb_to_section,
> >> which can lead to a crash in io_readx/io_writex.
> >>
> >> The bug may be fixed by writing any value with zero
> >> in ~TARGET_PAGE_MASK, so that the call to iotlb_to_section using
> >> the xlat'ed address returns io_mem_unassigned, as desired by the
> >> translate_fail path.
> >>
> >> It is most useful to record the original physical page address,
> >> which will eventually be logged by memory_region_access_valid
> >> when the access is rejected by unassigned_mem_accepts.
> >>
> >> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1065
> >> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> >> ---
> >>   softmmu/physmem.c | 3 +++
> >>   1 file changed, 3 insertions(+)
> >>
> >> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> >> index 657841eed0..fb0f0709b5 100644
> >> --- a/softmmu/physmem.c
> >> +++ b/softmmu/physmem.c
> >> @@ -681,6 +681,9 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
> >>       AddressSpaceDispatch *d =
> >>           qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
> >>
> >> +    /* Record the original phys page for use by the translate_fail path. */
> >> +    *xlat = addr;
> >
> > There's no doc comment for address_space_translate_for_iotlb(),
> > so there's nothing that says explicitly that addr is obliged
> > to be page aligned, although it happens that its only caller
> > does pass a page-aligned address. Were we already implicitly
> > requiring a page-aligned address here, or does not masking
> > addr before assigning to *xlat impose a new requirement ?
>
> I have no idea.  The whole lookup process is both undocumented and twistedly complex.  I'm
> willing to add an extra masking operation here, if it seems necessary?

I think we should do one of:
 * document that we assume the address is page-aligned
 * assert that the address is page-aligned
 * mask to force it to page-alignedness

but I don't much care which one of those we do. Maybe we should
assert((*xlat & ~TARGET_PAGE_MASK) == 0) at the translate_fail
label, with a suitable comment ?

thanks
-- PMM
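
Concretely, the assert suggested above would put something like this at the translate_fail label (a sketch only; the comment wording is invented here):

    translate_fail:
        /*
         * The caller is expected to pass a page-aligned address, so the
         * value recorded in *xlat must be page aligned as well; anything
         * else would index the wrong entry in iotlb_to_section().
         */
        assert((*xlat & ~TARGET_PAGE_MASK) == 0);
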

Patch

diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 657841eed0..fb0f0709b5 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -681,6 +681,9 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr addr,
     AddressSpaceDispatch *d =
         qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
 
+    /* Record the original phys page for use by the translate_fail path. */
+    *xlat = addr;
+
     for (;;) {
         section = address_space_translate_internal(d, addr, &addr, plen, false);