Message ID | 20210322201336.9539-1-peter.maydell@linaro.org |
---|---|
State | Superseded |
Series | hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows |
On Mon, Mar 22, 2021 at 9:13 PM Peter Maydell <peter.maydell@linaro.org> wrote:
>
> Currently the gpex PCI controller implements no special behaviour for
> guest accesses to areas of the PIO and MMIO where it has not mapped
> any PCI devices, which means that for Arm you end up with a CPU
> exception due to a data abort.
>
> Most host OSes expect "like an x86 PC" behaviour, where bad accesses
> like this return -1 for reads and ignore writes. In the interests of
> not being surprising, make host CPU accesses to these windows behave
> as -1/discard where there's no mapped PCI device.
>
> Reported-by: Dmitry Vyukov <dvyukov@google.com>
> Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> ---
> Not convinced that this is 6.0 material, because IMHO the
> kernel shouldn't be doing this in the first place.
> Do we need to have the property machinery so that old
> virt-5.2 etc retain the previous behaviour ?

I think it would be sufficient to do this for the ioport window, which
is what old-style ISA drivers access. I am not aware of any driver
accessing hardcoded addresses in the mmio window, at least not without
probing io ports first (the VGA text console would use both).

I checked which SoCs the kernel supports that do require a special hook
to avoid an abort and found these:

arch/arm/mach-bcm/bcm_5301x.c:  hook_fault_code(16 + 6, bcm5301x_abort_handler, SIGBUS, BUS_OBJERR,
arch/arm/mach-cns3xxx/pcie.c:   hook_fault_code(16 + 6, cns3xxx_pcie_abort_handler, SIGBUS, 0,
arch/arm/mach-iop32x/pci.c:     hook_fault_code(16+6, iop3xx_pci_abort, SIGBUS, 0, "imprecise external abort");
arch/arm/mach-ixp4xx/common-pci.c:      hook_fault_code(16+6, abort_handler, SIGBUS, 0,
drivers/pci/controller/dwc/pci-imx6.c:  hook_fault_code(8, imx6q_pcie_abort_handler, SIGBUS, 0,
drivers/pci/controller/dwc/pci-keystone.c:      hook_fault_code(17, ks_pcie_fault, SIGBUS, 0,

The first four (bcm5301x, cns3xxx, iop32x and ixp4xx) generate an
'imprecise external abort' (16+6), imx6q has a "precise external abort
on non-linefetch" (8), and keystone in LPAE mode has an "asynchronous
external abort". The only SoC among those that is emulated by qemu to
my knowledge is the i.MX6q in the 'sabrelite' machine.

It's possible that some of these are not caused by a PCI master abort
but some other error condition on the PCI host though. I think most
other PCI implementations either ignore the error or generate an I/O
interrupt.

        Arnd
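The hooks listed above all follow the same basic shape: register an abort
handler for the fault status code the SoC raises, and in the handler fake a
-1 read result and step past the faulting instruction. A minimal sketch of
that pattern, loosely modelled on the i.MX6 handler; the function names and
the fault code used here are illustrative, and Thumb-mode decoding and other
corner cases the real handlers deal with are omitted:

#include <linux/init.h>
#include <linux/signal.h>
#include <asm/bug.h>        /* hook_fault_code() on 32-bit Arm */
#include <asm/ptrace.h>

/* Fake "reads as -1, writes ignored" when a PCI access externally aborts. */
static int pcie_abort_handler(unsigned long addr, unsigned int fsr,
                              struct pt_regs *regs)
{
    unsigned long pc = instruction_pointer(regs);
    unsigned long instr = *(unsigned long *)pc;
    int reg = (instr >> 12) & 15;

    /* If the faulting instruction was a load, stuff ~0 into its Rt ... */
    if (instr & 0x00100000)
        regs->uregs[reg] = 0xffffffff;

    /* ... and in any case skip the instruction so execution continues. */
    regs->ARM_pc += 4;
    return 0;               /* handled: no SIGBUS, no oops */
}

static void __init pcie_hook_abort(void)
{
    /* 8 == "external abort on non-linefetch", the code pci-imx6.c hooks */
    hook_fault_code(8, pcie_abort_handler, SIGBUS, 0,
                    "external abort on non-linefetch");
}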
On Mon, Mar 22, 2021 at 08:13:36PM +0000, Peter Maydell wrote:
> Currently the gpex PCI controller implements no special behaviour for
> guest accesses to areas of the PIO and MMIO where it has not mapped
> any PCI devices, which means that for Arm you end up with a CPU
> exception due to a data abort.
>
> Most host OSes expect "like an x86 PC" behaviour, where bad accesses
> like this return -1 for reads and ignore writes. In the interests of
> not being surprising, make host CPU accesses to these windows behave
> as -1/discard where there's no mapped PCI device.
>
> Reported-by: Dmitry Vyukov <dvyukov@google.com>
> Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

Acked-by: Michael S. Tsirkin <mst@redhat.com>

BTW it looks like launchpad butchered the lore.kernel.org
link so one can't find out what was the guest issue this is
fixing. Want to include a bit more data in the commit log
instead?

> ---
> Not convinced that this is 6.0 material, because IMHO the
> kernel shouldn't be doing this in the first place.
> Do we need to have the property machinery so that old
> virt-5.2 etc retain the previous behaviour ?
> ---
>  include/hw/pci-host/gpex.h |  2 ++
>  hw/pci-host/gpex.c         | 37 +++++++++++++++++++++++++++++++++++--
>  2 files changed, 37 insertions(+), 2 deletions(-)
>
> diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
> index d48a020a952..ad876ecd209 100644
> --- a/include/hw/pci-host/gpex.h
> +++ b/include/hw/pci-host/gpex.h
> @@ -49,6 +49,8 @@ struct GPEXHost {
>
>      MemoryRegion io_ioport;
>      MemoryRegion io_mmio;
> +    MemoryRegion io_ioport_window;
> +    MemoryRegion io_mmio_window;
>      qemu_irq irq[GPEX_NUM_IRQS];
>      int irq_num[GPEX_NUM_IRQS];
>  };
> diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
> index 2bdbe7b4561..1f48c89ac6a 100644
> --- a/hw/pci-host/gpex.c
> +++ b/hw/pci-host/gpex.c
> @@ -82,13 +82,46 @@ static void gpex_host_realize(DeviceState *dev, Error **errp)
>      PCIExpressHost *pex = PCIE_HOST_BRIDGE(dev);
>      int i;
>
> +    /*
> +     * Note that the MemoryRegions io_mmio and io_ioport that we pass
> +     * to pci_register_root_bus() are not the same as the
> +     * MemoryRegions io_mmio_window and io_ioport_window that we
> +     * expose as SysBus MRs. The difference is in the behaviour of
> +     * accesses to addresses where no PCI device has been mapped.
> +     *
> +     * io_mmio and io_ioport are the underlying PCI view of the PCI
> +     * address space, and when a PCI device does a bus master access
> +     * to a bad address this is reported back to it as a transaction
> +     * failure.
> +     *
> +     * io_mmio_window and io_ioport_window implement "unmapped
> +     * addresses read as -1 and ignore writes"; this is traditional
> +     * x86 PC behaviour, which is not mandated by the PCI spec proper
> +     * but expected by much PCI-using guest software, including Linux.
> +     *
> +     * In the interests of not being unnecessarily surprising, we
> +     * implement it in the gpex PCI host controller, by providing the
> +     * _window MRs, which are containers with io ops that implement
> +     * the 'background' behaviour and which hold the real PCI MRs as
> +     * subregions.
> +     */
>      pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MAX);
>      memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", UINT64_MAX);
>      memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);
>
> +    memory_region_init_io(&s->io_mmio_window, OBJECT(s),
> +                          &unassigned_io_ops, OBJECT(s),
> +                          "gpex_mmio_window", UINT64_MAX);
> +    memory_region_init_io(&s->io_ioport_window, OBJECT(s),
> +                          &unassigned_io_ops, OBJECT(s),
> +                          "gpex_ioport_window", 64 * 1024);
> +
> +    memory_region_add_subregion(&s->io_mmio_window, 0, &s->io_mmio);
> +    memory_region_add_subregion(&s->io_ioport_window, 0, &s->io_ioport);
> +
>      sysbus_init_mmio(sbd, &pex->mmio);
> -    sysbus_init_mmio(sbd, &s->io_mmio);
> -    sysbus_init_mmio(sbd, &s->io_ioport);
> +    sysbus_init_mmio(sbd, &s->io_mmio_window);
> +    sysbus_init_mmio(sbd, &s->io_ioport_window);
>      for (i = 0; i < GPEX_NUM_IRQS; i++) {
>          sysbus_init_irq(sbd, &s->irq[i]);
>          s->irq_num[i] = -1;
> --
> 2.20.1
On Mon, 22 Mar 2021 at 22:35, Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Mon, Mar 22, 2021 at 08:13:36PM +0000, Peter Maydell wrote:
> > Currently the gpex PCI controller implements no special behaviour for
> > guest accesses to areas of the PIO and MMIO where it has not mapped
> > any PCI devices, which means that for Arm you end up with a CPU
> > exception due to a data abort.
> >
> > Most host OSes expect "like an x86 PC" behaviour, where bad accesses
> > like this return -1 for reads and ignore writes. In the interests of
> > not being surprising, make host CPU accesses to these windows behave
> > as -1/discard where there's no mapped PCI device.
> >
> > Reported-by: Dmitry Vyukov <dvyukov@google.com>
> > Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
> > Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>
> BTW it looks like launchpad butchered the lore.kernel.org
> link so one can't find out what was the guest issue this is
> fixing. Want to include a bit more data in the commit log
> instead?

The link in the LP report works for me, I can just click straight through.
https://lore.kernel.org/lkml/CAK8P3a0HVu+x0T6+K3d0v1bvU-Pes0F0CSjqm5x=bxFgv5Y3mA@mail.gmail.com/

It's a syzkaller report that the kernel falls over if userspace tries
to access a non-existent 8250 UART, because it doesn't expect the
external abort.

> > Do we need to have the property machinery so that old
> > virt-5.2 etc retain the previous behaviour ?

Musing on this after sending the patch, I'm leaning towards adding the
property stuff, just to be on the safe side.

thanks
-- PMM
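The property machinery being discussed would presumably take the usual QEMU
shape: a bool property on the gpex host that selects which MemoryRegions get
exposed, plus a compat entry so the versioned virt machine types keep the old
faulting behaviour. The sketch below is a guess at that shape for
illustration; the property name, the compat array chosen, and the exact
wiring are assumptions, not the patch that eventually landed:

/* hypothetical new field in struct GPEXHost (include/hw/pci-host/gpex.h) */
    bool allow_unmapped_accesses;

/* hw/pci-host/gpex.c */
static Property gpex_host_properties[] = {
    DEFINE_PROP_BOOL("allow-unmapped-accesses", GPEXHost,
                     allow_unmapped_accesses, true),
    DEFINE_PROP_END_OF_LIST(),
};
/* registered with device_class_set_props(dc, gpex_host_properties)
 * in the class_init function */

static void gpex_host_realize(DeviceState *dev, Error **errp)
{
    /* ... existing MR setup as in the patch ... */

    sysbus_init_mmio(sbd, &pex->mmio);
    if (s->allow_unmapped_accesses) {
        /* new behaviour: -1/discard background via the _window MRs */
        sysbus_init_mmio(sbd, &s->io_mmio_window);
        sysbus_init_mmio(sbd, &s->io_ioport_window);
    } else {
        /* old behaviour: unmapped accesses fault */
        sysbus_init_mmio(sbd, &s->io_mmio);
        sysbus_init_mmio(sbd, &s->io_ioport);
    }

    /* ... IRQ setup ... */
}

/* hw/core/machine.c: keep the faulting behaviour for older machine types */
GlobalProperty hw_compat_5_2[] = {
    /* ... existing entries ... */
    { "gpex-pcihost", "allow-unmapped-accesses", "false" },
};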
diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
index d48a020a952..ad876ecd209 100644
--- a/include/hw/pci-host/gpex.h
+++ b/include/hw/pci-host/gpex.h
@@ -49,6 +49,8 @@ struct GPEXHost {
 
     MemoryRegion io_ioport;
     MemoryRegion io_mmio;
+    MemoryRegion io_ioport_window;
+    MemoryRegion io_mmio_window;
     qemu_irq irq[GPEX_NUM_IRQS];
     int irq_num[GPEX_NUM_IRQS];
 };
diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
index 2bdbe7b4561..1f48c89ac6a 100644
--- a/hw/pci-host/gpex.c
+++ b/hw/pci-host/gpex.c
@@ -82,13 +82,46 @@ static void gpex_host_realize(DeviceState *dev, Error **errp)
     PCIExpressHost *pex = PCIE_HOST_BRIDGE(dev);
     int i;
 
+    /*
+     * Note that the MemoryRegions io_mmio and io_ioport that we pass
+     * to pci_register_root_bus() are not the same as the
+     * MemoryRegions io_mmio_window and io_ioport_window that we
+     * expose as SysBus MRs. The difference is in the behaviour of
+     * accesses to addresses where no PCI device has been mapped.
+     *
+     * io_mmio and io_ioport are the underlying PCI view of the PCI
+     * address space, and when a PCI device does a bus master access
+     * to a bad address this is reported back to it as a transaction
+     * failure.
+     *
+     * io_mmio_window and io_ioport_window implement "unmapped
+     * addresses read as -1 and ignore writes"; this is traditional
+     * x86 PC behaviour, which is not mandated by the PCI spec proper
+     * but expected by much PCI-using guest software, including Linux.
+     *
+     * In the interests of not being unnecessarily surprising, we
+     * implement it in the gpex PCI host controller, by providing the
+     * _window MRs, which are containers with io ops that implement
+     * the 'background' behaviour and which hold the real PCI MRs as
+     * subregions.
+     */
     pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MAX);
     memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", UINT64_MAX);
     memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);
 
+    memory_region_init_io(&s->io_mmio_window, OBJECT(s),
+                          &unassigned_io_ops, OBJECT(s),
+                          "gpex_mmio_window", UINT64_MAX);
+    memory_region_init_io(&s->io_ioport_window, OBJECT(s),
+                          &unassigned_io_ops, OBJECT(s),
+                          "gpex_ioport_window", 64 * 1024);
+
+    memory_region_add_subregion(&s->io_mmio_window, 0, &s->io_mmio);
+    memory_region_add_subregion(&s->io_ioport_window, 0, &s->io_ioport);
+
     sysbus_init_mmio(sbd, &pex->mmio);
-    sysbus_init_mmio(sbd, &s->io_mmio);
-    sysbus_init_mmio(sbd, &s->io_ioport);
+    sysbus_init_mmio(sbd, &s->io_mmio_window);
+    sysbus_init_mmio(sbd, &s->io_ioport_window);
     for (i = 0; i < GPEX_NUM_IRQS; i++) {
         sysbus_init_irq(sbd, &s->irq[i]);
         s->irq_num[i] = -1;
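The unassigned_io_ops that the patch attaches to the _window containers is
what provides the "background" -1/discard behaviour described in the comment:
because the container has its own io ops, any access that does not hit a PCI
subregion falls through to them instead of returning a transaction error.
Roughly, it amounts to the following sketch (written against the usual
MemoryRegionOps signatures; an illustration of the behaviour, not a copy of
QEMU's actual definition):

#include "qemu/osdep.h"
#include "exec/memory.h"

/* Reads of unmapped addresses return all-ones. */
static uint64_t unassigned_io_read(void *opaque, hwaddr addr, unsigned size)
{
    return -1ULL;
}

/* Writes to unmapped addresses are silently discarded. */
static void unassigned_io_write(void *opaque, hwaddr addr, uint64_t val,
                                unsigned size)
{
}

const MemoryRegionOps unassigned_io_ops = {
    .read = unassigned_io_read,
    .write = unassigned_io_write,
    .endianness = DEVICE_NATIVE_ENDIAN,
};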
Currently the gpex PCI controller implements no special behaviour for
guest accesses to areas of the PIO and MMIO where it has not mapped
any PCI devices, which means that for Arm you end up with a CPU
exception due to a data abort.

Most host OSes expect "like an x86 PC" behaviour, where bad accesses
like this return -1 for reads and ignore writes. In the interests of
not being surprising, make host CPU accesses to these windows behave
as -1/discard where there's no mapped PCI device.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
Not convinced that this is 6.0 material, because IMHO the
kernel shouldn't be doing this in the first place.
Do we need to have the property machinery so that old
virt-5.2 etc retain the previous behaviour ?
---
 include/hw/pci-host/gpex.h |  2 ++
 hw/pci-host/gpex.c         | 37 +++++++++++++++++++++++++++++++++++--
 2 files changed, 37 insertions(+), 2 deletions(-)

--
2.20.1
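As a concrete illustration of the guest-visible change, a read of an
unallocated offset in the PIO window used to raise an external abort and now
simply returns all-ones. The snippet below assumes the arm 'virt' board's PIO
window base of 0x3eff0000 and picks the legacy COM2 offset 0x2f8 to mirror
the non-existent-UART poke from the syzkaller report; both values are
assumptions to adjust for your configuration, and it needs /dev/mem access
in the guest:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/mem", O_RDONLY | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }
    off_t base = 0x3eff0000;              /* assumed PIO window base */
    volatile uint32_t *pio = mmap(NULL, 0x10000, PROT_READ, MAP_SHARED,
                                  fd, base);
    if (pio == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* With the fix this prints 0xffffffff; previously the access faulted. */
    printf("0x%08x\n", (unsigned)pio[0x2f8 / 4]);
    close(fd);
    return 0;
}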