hw/pci-host/raven.c: Mark raven_io_ops as implementing unaligned accesses

Message ID 20240112134640.1775041-1-peter.maydell@linaro.org
State Superseded
Series hw/pci-host/raven.c: Mark raven_io_ops as implementing unaligned accesses

Commit Message

Peter Maydell Jan. 12, 2024, 1:46 p.m. UTC
The raven_io_ops MemoryRegionOps is the only one in the source tree
which sets .valid.unaligned to indicate that it should support
unaligned accesses and which does not also set .impl.unaligned to
indicate that its read and write functions can do the unaligned
handling themselves.  This is a problem, because at the moment the
core memory system does not implement the support for handling
unaligned accesses by doing a series of aligned accesses and
combining them (system/memory.c:access_with_adjusted_size() has a
TODO comment noting this).
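(For illustration only: the strategy that TODO describes — synthesizing an unaligned access from two aligned ones — can be sketched as below. The names `regs`, `aligned_read32()` and `unaligned_read32()` are hypothetical and stand in for a device whose register interface only accepts aligned reads; this is not memory.c code.)

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical backing store standing in for a device that only
 * accepts 4-byte-aligned reads. */
static uint8_t regs[8] = {0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88};

/* Aligned-only primitive: callers must pass a 4-byte-aligned address.
 * Bytes are assembled little-endian. */
static uint32_t aligned_read32(unsigned addr)
{
    return (uint32_t)regs[addr] |
           (uint32_t)regs[addr + 1] << 8 |
           (uint32_t)regs[addr + 2] << 16 |
           (uint32_t)regs[addr + 3] << 24;
}

/* Emulate an unaligned 32-bit read by combining the two aligned
 * words that straddle the requested address. */
static uint32_t unaligned_read32(unsigned addr)
{
    unsigned base = addr & ~3u;       /* round down to alignment */
    unsigned shift = (addr & 3) * 8;  /* byte offset within the word */

    if (shift == 0) {
        return aligned_read32(base);  /* already aligned: one access */
    }
    uint32_t lo = aligned_read32(base);
    uint32_t hi = aligned_read32(base + 4);
    return (lo >> shift) | (hi << (32 - shift));
}
```

Note the separate `shift == 0` path: it avoids the undefined `hi << 32` and keeps an aligned request down to a single device access.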

Fortunately raven_io_read() and raven_io_write() will correctly deal
with the case of being passed an unaligned address, so we can fix the
missing unaligned access support by setting .impl.unaligned in the
MemoryRegionOps struct.
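(Why a callback in this style is naturally unaligned-safe: the real raven_io_read() maps each byte through the PReP address swizzle, but the essential property is that it computes byte by byte from the incoming address, so no alignment assumption is baked in. A minimal standalone sketch, with `io_space`, `io_read_byte()` and `io_read()` as hypothetical stand-ins:)

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical device state: a flat byte-addressable I/O space. */
static uint8_t io_space[16] = {0xaa, 0xbb, 0xcc, 0xdd, 0xee};

static uint8_t io_read_byte(uint64_t addr)
{
    return io_space[addr];
}

/* A read callback built from per-byte accesses works for any
 * addr/size combination, aligned or not, so declaring
 * .impl.unaligned = true for it is accurate. */
static uint64_t io_read(void *opaque, uint64_t addr, unsigned size)
{
    (void)opaque;
    uint64_t val = 0;
    for (unsigned i = 0; i < size; i++) {
        val |= (uint64_t)io_read_byte(addr + i) << (i * 8); /* LE */
    }
    return val;
}
```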

Fixes: 9a1839164c9c8f06 ("raven: Implement non-contiguous I/O region")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
Spotted by code inspection: I was looking for devices whose behaviour
might be changed by a patch I'm reviewing that adds that missing
support for unaligned accesses in the core memory system. But even
if we do implement it there, it's more efficient for the raven MR
to correctly mark it as handling unaligned accesses itself.

Tested with 'make check' and 'make check-avocado' only.
---
 hw/pci-host/raven.c | 1 +
 1 file changed, 1 insertion(+)

Comments

Cédric Le Goater Jan. 23, 2024, 11:03 a.m. UTC | #1
On 1/12/24 14:46, Peter Maydell wrote:
> The raven_io_ops MemoryRegionOps is the only one in the source tree
> which sets .valid.unaligned to indicate that it should support
> unaligned accesses and which does not also set .impl.unaligned to
> indicate that its read and write functions can do the unaligned
> handling themselves.  This is a problem, because at the moment the
> core memory system does not implement the support for handling
> unaligned accesses by doing a series of aligned accesses and
> combining them (system/memory.c:access_with_adjusted_size() has a
> TODO comment noting this).
> 
> Fortunately raven_io_read() and raven_io_write() will correctly deal
> with the case of being passed an unaligned address, so we can fix the
> missing unaligned access support by setting .impl.unaligned in the
> MemoryRegionOps struct.
> 
> Fixes: 9a1839164c9c8f06 ("raven: Implement non-contiguous I/O region")
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> ---
> Spotted by code inspection: I was looking for devices whose behaviour
> might be changed by a patch I'm reviewing that adds that missing
> support for unaligned accesses in the core memory system. But even
> if we do implement it there, it's more efficient for the raven MR
> to correctly mark it as handling unaligned accesses itself.
> 
> Tested with 'make check' and 'make check-avocado' only.

It doesn't affect the prep machine boot with OpenBIOS and a
"Debian GNU/Linux 3.0 6015" image.

Tested-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>

Thanks,

C.



> ---
>   hw/pci-host/raven.c | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/hw/pci-host/raven.c b/hw/pci-host/raven.c
> index c7a0a2878ab..a7dfddd69ea 100644
> --- a/hw/pci-host/raven.c
> +++ b/hw/pci-host/raven.c
> @@ -200,6 +200,7 @@ static const MemoryRegionOps raven_io_ops = {
>       .write = raven_io_write,
>       .endianness = DEVICE_LITTLE_ENDIAN,
>       .impl.max_access_size = 4,
> +    .impl.unaligned = true,
>       .valid.unaligned = true,
>   };
>
Peter Maydell Feb. 1, 2024, 1:32 p.m. UTC | #2
On Tue, 23 Jan 2024 at 11:03, Cédric Le Goater <clg@redhat.com> wrote:
>
> On 1/12/24 14:46, Peter Maydell wrote:
> > The raven_io_ops MemoryRegionOps is the only one in the source tree
> > which sets .valid.unaligned to indicate that it should support
> > unaligned accesses and which does not also set .impl.unaligned to
> > indicate that its read and write functions can do the unaligned
> > handling themselves.  This is a problem, because at the moment the
> > core memory system does not implement the support for handling
> > unaligned accesses by doing a series of aligned accesses and
> > combining them (system/memory.c:access_with_adjusted_size() has a
> > TODO comment noting this).
> >
> > Fortunately raven_io_read() and raven_io_write() will correctly deal
> > with the case of being passed an unaligned address, so we can fix the
> > missing unaligned access support by setting .impl.unaligned in the
> > MemoryRegionOps struct.
> >
> > Fixes: 9a1839164c9c8f06 ("raven: Implement non-contiguous I/O region")
> > Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> > ---
> > Spotted by code inspection: I was looking for devices whose behaviour
> > might be changed by a patch I'm reviewing that adds that missing
> > support for unaligned accesses in the core memory system. But even
> > if we do implement it there, it's more efficient for the raven MR
> > to correctly mark it as handling unaligned accesses itself.
> >
> > Tested with 'make check' and 'make check-avocado' only.
>
> It doesn't affect the prep machine boot with OpenBIOS and a
> "Debian GNU/Linux 3.0 6015" image.
>
> Tested-by: Cédric Le Goater <clg@redhat.com>
> Reviewed-by: Cédric Le Goater <clg@redhat.com>

Thanks for the review -- is this patch going to go via a
ppc queue, or should I throw it in with my upcoming
target-arm pullreq?

-- PMM
Cédric Le Goater Feb. 1, 2024, 1:46 p.m. UTC | #3
On 2/1/24 14:32, Peter Maydell wrote:
> On Tue, 23 Jan 2024 at 11:03, Cédric Le Goater <clg@redhat.com> wrote:
>>
>> On 1/12/24 14:46, Peter Maydell wrote:
>>> The raven_io_ops MemoryRegionOps is the only one in the source tree
>>> which sets .valid.unaligned to indicate that it should support
>>> unaligned accesses and which does not also set .impl.unaligned to
>>> indicate that its read and write functions can do the unaligned
>>> handling themselves.  This is a problem, because at the moment the
>>> core memory system does not implement the support for handling
>>> unaligned accesses by doing a series of aligned accesses and
>>> combining them (system/memory.c:access_with_adjusted_size() has a
>>> TODO comment noting this).
>>>
>>> Fortunately raven_io_read() and raven_io_write() will correctly deal
>>> with the case of being passed an unaligned address, so we can fix the
>>> missing unaligned access support by setting .impl.unaligned in the
>>> MemoryRegionOps struct.
>>>
>>> Fixes: 9a1839164c9c8f06 ("raven: Implement non-contiguous I/O region")
>>> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
>>> ---
>>> Spotted by code inspection: I was looking for devices whose behaviour
>>> might be changed by a patch I'm reviewing that adds that missing
>>> support for unaligned accesses in the core memory system. But even
>>> if we do implement it there, it's more efficient for the raven MR
>>> to correctly mark it as handling unaligned accesses itself.
>>>
>>> Tested with 'make check' and 'make check-avocado' only.
>>
>> It doesn't affect the prep machine boot with OpenBIOS and a
>> "Debian GNU/Linux 3.0 6015" image.
>>
>> Tested-by: Cédric Le Goater <clg@redhat.com>
>> Reviewed-by: Cédric Le Goater <clg@redhat.com>
> 
> Thanks for the review -- is this patch going to go via a
> ppc queue, or should I throw it in with my upcoming
> target-arm pullreq?

Please take it through target-arm. PPC is in low power state AFAICT.

Thanks,

C.

Patch

diff --git a/hw/pci-host/raven.c b/hw/pci-host/raven.c
index c7a0a2878ab..a7dfddd69ea 100644
--- a/hw/pci-host/raven.c
+++ b/hw/pci-host/raven.c
@@ -200,6 +200,7 @@ static const MemoryRegionOps raven_io_ops = {
     .write = raven_io_write,
     .endianness = DEVICE_LITTLE_ENDIAN,
     .impl.max_access_size = 4,
+    .impl.unaligned = true,
     .valid.unaligned = true,
 };