[RFC,00/13] virtio-iommu on non-devicetree platforms

Message ID 20191122105000.800410-1-jean-philippe@linaro.org

Message

Jean-Philippe Brucker Nov. 22, 2019, 10:49 a.m. UTC
I'm seeking feedback on multi-platform support for virtio-iommu. At the
moment only devicetree (DT) is supported and we don't have a pleasant
solution for other platforms. Once we figure out the topology
description, x86 support is trivial.

Since the IOMMU manages memory accesses from other devices, the guest
kernel needs to initialize the IOMMU before endpoints start issuing DMA.
It's a solved problem: the firmware or hypervisor describes the device
dependencies through DT or ACPI tables, and endpoint probing is deferred
until the IOMMU itself has been probed. But:

(1) ACPI has one table per vendor (DMAR for Intel, IVRS for AMD and IORT
    for Arm). From my point of view IORT is easier to extend, since we
    just need to introduce a new node type. There are no dependencies on
    Arm in the Linux IORT driver, so it works well with CONFIG_X86.

    However, there are concerns about other OS vendors feeling obligated
    to implement this new node, so Arm proposed introducing another ACPI
    table that can wrap any of DMAR, IVRS and IORT to extend it with
    new virtual nodes. A draft of this VIOT table specification is
    available at http://jpbrucker.net/virtio-iommu/viot/viot-v5.pdf

    I'm afraid this could increase fragmentation as guests would need to
    implement or modify their support for all of DMAR, IVRS and IORT. If
    we end up doing VIOT, I suggest limiting it to IORT.

(2) In addition, there are some concerns about having virtio depend on
    ACPI or DT. Some hypervisors (Firecracker, QEMU microvm, kvmtool x86
    [1]) don't currently implement those methods.

    It was suggested to embed the topology description into the device.
    It can work, as demonstrated at the end of this RFC, with the
    following limitations:

    - The topology description must be read before any endpoint managed
      by the IOMMU is probed, and even before the virtio module is
      loaded. This RFC uses a PCI quirk to manually parse the virtio
      configuration (a rough sketch of the hook follows below, after this
      list). It assumes that all endpoints managed by the IOMMU are under
      the same PCI host.

    - I don't have a solution for the virtio-mmio transport at the
      moment, because I haven't had time to modify a host to test it. I
      think it could either use a notifier on the platform bus, or
      better, a new 'iommu' command-line argument to the virtio-mmio
      driver. So the current prototype doesn't work for firecracker and
      microvm, which rely on virtio-mmio.

    - For Arm, if the platform has an ITS, the hypervisor needs IORT or
      DT to describe it anyway. More generally, not using either ACPI or
      DT might prevent these hypervisors from supporting other features as
      well. I suspect the above users will have to implement a standard
      method sooner or later.

    - Even when reusing as much existing code as possible, guest support
      is still going to be around a few hundred lines since we can't
      rely on the normal virtio infrastructure to be loaded at that
      point. As you can see below, the diffstat for the incomplete
      topology implementation is already bigger than the exhaustive IORT
      support, even when jumping through the VIOT hoop.

    So it's a lightweight solution for very specific use-cases, and we
    should still support ACPI for the general case. Multi-platform
    guests such as Linux will then need to support three topology
    descriptions instead of two.
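
For reference, here is a rough sketch of the PCI quirk hook mentioned in
the list above. It is simplified and untested: the fixup stage and
viommu_parse_topology() are placeholders standing in for what patch 13
actually does.

#include <linux/pci.h>
#include <linux/pci_ids.h>

/* Stand-in for the real parsing code in patch 13 */
static int viommu_parse_topology(struct pci_dev *pdev)
{
        /*
         * Walk the virtio PCI capabilities, map the device-specific
         * config and record which endpoints this IOMMU manages, before
         * any of those endpoints are probed.
         */
        return 0;
}

static void quirk_viommu_topology(struct pci_dev *pdev)
{
        if (viommu_parse_topology(pdev))
                pci_warn(pdev, "couldn't parse virtio-iommu topology\n");
}
/* 0x1040 + VIRTIO_ID_IOMMU (23) = 0x1057, the modern virtio-iommu PCI ID */
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_REDHAT_QUMRANET, 0x1057,
                        quirk_viommu_topology);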

In this RFC I present both solutions, but I'd rather not keep all of it.
Please see the individual patches for details:

(1) Patches 1, 3-10 add support for virtio-iommu to the Linux IORT
    driver and patches 2, 11 add the VIOT glue.

(2) Patch 12 adds the built-in topology description to the virtio-iommu
    specification. Patch 13 is a partial implementation for the Linux
    virtio-iommu driver. It only supports PCI, not platform devices.
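
To give an idea of the format (illustration only, with made-up field
names; the authoritative layout is the one in patch 12 and its Linux
counterpart in patch 13), one entry in the device's config space could
tie a range of PCI requester IDs to virtio-iommu endpoint IDs:

#include <linux/types.h>

/* Hypothetical example of a built-in topology entry, not the spec text */
struct viommu_topo_pci_range_example {
        __le16 type;            /* entry type: "PCI range" */
        __le16 hierarchy;       /* PCI domain (segment) number */
        __le16 requester_start; /* first BDF managed by this IOMMU */
        __le16 requester_end;   /* last BDF managed by this IOMMU */
        __le32 endpoint_start;  /* endpoint ID corresponding to requester_start */
};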

You can find Linux and QEMU code on my virtio-iommu/devel branches at
http://jpbrucker.net/git/linux and http://jpbrucker.net/git/qemu


I split the diffstat since there are two independent features. The first
one is for patches 1-11, and the second one for patch 13.

Jean-Philippe Brucker (11):
  ACPI/IORT: Move IORT to the ACPI folder
  ACPI: Add VIOT definitions
  ACPI/IORT: Allow registration of external tables
  ACPI/IORT: Add node categories
  ACPI/IORT: Support VIOT virtio-mmio node
  ACPI/IORT: Support VIOT virtio-pci node
  ACPI/IORT: Defer probe until virtio-iommu-pci has registered a fwnode
  ACPI/IORT: Add callback to update a device's fwnode
  iommu/virtio: Create fwnode if necessary
  iommu/virtio: Update IORT fwnode
  ACPI: Add VIOT table

 MAINTAINERS                     |   9 +
 drivers/acpi/Kconfig            |   7 +
 drivers/acpi/Makefile           |   2 +
 drivers/acpi/arm64/Kconfig      |   3 -
 drivers/acpi/arm64/Makefile     |   1 -
 drivers/acpi/bus.c              |   2 +
 drivers/acpi/{arm64 => }/iort.c | 317 ++++++++++++++++++++++++++------
 drivers/acpi/tables.c           |   2 +-
 drivers/acpi/viot.c             |  44 +++++
 drivers/iommu/Kconfig           |   1 +
 drivers/iommu/virtio-iommu.c    |  61 +++++-
 include/acpi/actbl2.h           |  31 ++++
 include/linux/acpi_iort.h       |  14 ++
 include/linux/acpi_viot.h       |  20 ++
 14 files changed, 448 insertions(+), 66 deletions(-)
 rename drivers/acpi/{arm64 => }/iort.c (86%)
 create mode 100644 drivers/acpi/viot.c
 create mode 100644 include/linux/acpi_viot.h

Jean-Philippe Brucker (1):
  iommu/virtio: Add topology description to virtio-iommu config space

 drivers/base/platform.c               |   3 +
 drivers/iommu/Kconfig                 |   9 +
 drivers/iommu/Makefile                |   1 +
 drivers/iommu/virtio-iommu-topology.c | 410 ++++++++++++++++++++++++++
 drivers/iommu/virtio-iommu.c          |   3 +
 drivers/pci/pci-driver.c              |   3 +
 include/linux/virtio_iommu.h          |  18 ++
 include/uapi/linux/virtio_iommu.h     |  26 ++
 8 files changed, 473 insertions(+)
 create mode 100644 drivers/iommu/virtio-iommu-topology.c
 create mode 100644 include/linux/virtio_iommu.h


[1] firecracker: https://github.com/firecracker-microvm/firecracker
    microvm: https://github.com/qemu/qemu/blob/master/docs/microvm.rst
    kvmtool: https://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git/
-- 
2.24.0

Comments

Michael S. Tsirkin Nov. 22, 2019, 1 p.m. UTC | #1
On Fri, Nov 22, 2019 at 11:49:47AM +0100, Jean-Philippe Brucker wrote:
> (2) In addition, there are some concerns about having virtio depend on
>     ACPI or DT. Some hypervisors (Firecracker, QEMU microvm, kvmtool x86
>     [1])

power?

> don't currently implement those methods.
> 
>     It was suggested to embed the topology description into the device.
>     It can work, as demonstrated at the end of this RFC, with the
>     following limitations:
> 
>     - The topology description must be read before any endpoint managed
>       by the IOMMU is probed, and even before the virtio module is
>       loaded. This RFC uses a PCI quirk to manually parse the virtio
>       configuration. It assumes that all endpoints managed by the IOMMU
>       are under this same PCI host.
> 
>     - I don't have a solution for the virtio-mmio transport at the
>       moment, because I haven't had time to modify a host to test it. I
>       think it could either use a notifier on the platform bus, or
>       better, a new 'iommu' command-line argument to the virtio-mmio
>       driver.


	A notifier seems easier for users. What are the disadvantages of
	that?

>       So the current prototype doesn't work for firecracker and
>       microvm, which rely on virtio-mmio.

Jacob Pan (Jun) Nov. 23, 2019, 12:01 a.m. UTC | #2
On Fri, 22 Nov 2019 11:49:47 +0100
Jean-Philippe Brucker <jean-philippe@linaro.org> wrote:

> (1) ACPI has one table per vendor (DMAR for Intel, IVRS for AMD and
> IORT for Arm). From my point of view IORT is easier to extend, since
> we just need to introduce a new node type. There are no dependencies
> to Arm in the Linux IORT driver, so it works well with CONFIG_X86.

From my limited understanding, IORT and VIOT solve device topology
enumeration only? I am not sure how they can be expanded to cover
information beyond device topology, e.g. DMAR has NUMA information and
root port ATS. I guess they are not used today in the guest but might
be additions in the future.

Jean-Philippe Brucker Nov. 25, 2019, 5:53 p.m. UTC | #3
On Fri, Nov 22, 2019 at 08:00:46AM -0500, Michael S. Tsirkin wrote:
> > (2) In addition, there are some concerns about having virtio depend on
> >     ACPI or DT. Some hypervisors (Firecracker, QEMU microvm, kvmtool x86
> >     [1])
> 
> power?

On power, kvmtool boots with a device tree. It also doesn't need
virtio-iommu, I think, since it has its own paravirtualized interface.

> > don't currently implement those methods.
> > 
> >     It was suggested to embed the topology description into the device.
> >     It can work, as demonstrated at the end of this RFC, with the
> >     following limitations:
> > 
> >     - The topology description must be read before any endpoint managed
> >       by the IOMMU is probed, and even before the virtio module is
> >       loaded. This RFC uses a PCI quirk to manually parse the virtio
> >       configuration. It assumes that all endpoints managed by the IOMMU
> >       are under this same PCI host.
> > 
> >     - I don't have a solution for the virtio-mmio transport at the
> >       moment, because I haven't had time to modify a host to test it. I
> >       think it could either use a notifier on the platform bus, or
> >       better, a new 'iommu' command-line argument to the virtio-mmio
> >       driver.
> 
> 	A notifier seems easier for users. What are the disadvantages of
> 	that?


For each device we have to check if it's virtio-mmio, then map the MMIO
resource and check the device type. Having a dedicated command-line
argument would be more efficient.
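
To illustrate the overhead, here is a rough, untested sketch of the
notifier approach (error handling and locking omitted): every platform
device added to the bus has to be inspected, its registers mapped and
read, just to find out whether it is a virtio-iommu.

#include <linux/device.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/notifier.h>
#include <linux/platform_device.h>
#include <linux/virtio_ids.h>
#include <uapi/linux/virtio_mmio.h>

static int viommu_mmio_notify(struct notifier_block *nb,
                              unsigned long action, void *data)
{
        struct device *dev = data;
        struct platform_device *pdev = to_platform_device(dev);
        struct resource *res;
        void __iomem *base;
        u32 magic, id;

        if (action != BUS_NOTIFY_ADD_DEVICE)
                return NOTIFY_DONE;

        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        if (!res)
                return NOTIFY_DONE;

        base = ioremap(res->start, resource_size(res));
        if (!base)
                return NOTIFY_DONE;

        magic = readl(base + VIRTIO_MMIO_MAGIC_VALUE);
        id = readl(base + VIRTIO_MMIO_DEVICE_ID);
        iounmap(base);

        /* 'virt' magic and device ID 23 (IOMMU) identify a virtio-iommu */
        if (magic == 0x74726976 && id == VIRTIO_ID_IOMMU)
                pr_info("virtio-iommu candidate at %pR\n", res);

        return NOTIFY_DONE;
}

static struct notifier_block viommu_mmio_nb = {
        .notifier_call = viommu_mmio_notify,
};

/* registered early, before virtio-mmio devices are added:
 * bus_register_notifier(&platform_bus_type, &viommu_mmio_nb);
 */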

Thanks,
Jean
Jean-Philippe Brucker Nov. 25, 2019, 6:02 p.m. UTC | #4
On Fri, Nov 22, 2019 at 04:01:02PM -0800, Jacob Pan (Jun) wrote:
> > (1) ACPI has one table per vendor (DMAR for Intel, IVRS for AMD and
> > IORT for Arm). From my point of view IORT is easier to extend, since
> > we just need to introduce a new node type. There are no dependencies
> > to Arm in the Linux IORT driver, so it works well with CONFIG_X86.
> > 
> From my limited understanding, IORT and VIOT is to solve device topology
> enumeration only? I am not sure how it can be expanded to cover
> information beyond device topology. e.g. DMAR has NUMA information and
> root port ATS, I guess they are not used today in the guest but might
> be additions in the future.

The PCI root-complex node of IORT has an ATS attribute, which we can
already use. However its scope is the root complex, not individual root
ports like with DMAR.

I'm not very familiar with NUMA, but it looks like we just need to specify
a proximity domain in relation to the SRAT table, for each viommu? The
SMMUv3 node in IORT has a 4-byte "proximity domain" field for this. We
can add the same to the VIOT virtio-iommu nodes later, since the
structures are extensible.

But it might be better to keep the bare minimum information in the FW
descriptor, and put the rest in the virtio-iommu. So yes topology
enumeration is something the device cannot do itself (not fully that is,
see (2)) but for the rest, virtio-iommu's PROBE request can provide
details about each endpoint in relation to their physical IOMMU.

We could for example add a bit in a PROBE property saying that the whole
path between the IOMMU and the endpoint supports ATS. For NUMA it might
also be more interesting to have a finer granularity, since one viommu
could be managing endpoints that are behind different physical IOMMUs. If
in the future we want to allocate page tables close to the physical IOMMU
for example, we might need to describe multiple NUMA nodes per viommu,
using the PROBE request.
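
As a purely hypothetical illustration (neither this property type nor its
fields exist in the current specification), a per-endpoint NUMA property
could be as small as:

#include <linux/types.h>
#include <uapi/linux/virtio_iommu.h>

/* Hypothetical property type, not defined by the specification */
#define VIRTIO_IOMMU_PROBE_T_NUMA       0xfff0

struct virtio_iommu_probe_numa {
        struct virtio_iommu_probe_property      head;
        /* Proximity domain of the endpoint, matching SRAT / numa-node-id */
        __le32                                  node;
};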

Thanks,
Jean
Jacob Pan (Jun) Dec. 4, 2019, 3:01 a.m. UTC | #5
Hi Jean,

Sorry for the delay, I was out last week. Comments inline below.

On Mon, 25 Nov 2019 19:02:47 +0100
Jean-Philippe Brucker <jean-philippe@linaro.org> wrote:

> On Fri, Nov 22, 2019 at 04:01:02PM -0800, Jacob Pan (Jun) wrote:
> > > (1) ACPI has one table per vendor (DMAR for Intel, IVRS for AMD
> > > and IORT for Arm). From my point of view IORT is easier to
> > > extend, since we just need to introduce a new node type. There
> > > are no dependencies to Arm in the Linux IORT driver, so it works
> > > well with CONFIG_X86.
> > From my limited understanding, IORT and VIOT is to solve device
> > topology enumeration only? I am not sure how it can be expanded to
> > cover information beyond device topology. e.g. DMAR has NUMA
> > information and root port ATS, I guess they are not used today in
> > the guest but might be additions in the future.
> 
> The PCI root-complex node of IORT has an ATS attribute, which we can
> already use. However its scope is the root complex, not individual
> root ports like with DMAR.
> 
> I'm not very familiar with NUMA, but it looks like we just need to
> specify a proximity domain in relation to the SRAT table, for each
> viommu? The SMMUv3 node in IORT has a 4-bytes "proximity domain"
> field for this. We can add the same to the VIOT virtio-iommu nodes
> later, since the structures are extensible.

I think the proximity domain is more about each assigned device than the
vIOMMU. A vIOMMU in the guest can have assigned devices belonging to
different pIOMMUs and proximity domains. If the guest owns the first-level
page tables (gIOVA or SVA), we want to make sure page tables are
allocated from the closest proximity domain.

My understanding is that virtio-iommu supports both virtio devices and
assigned devices. We care less about the former in terms of NUMA.

In ACPI, we have the _PXM method to retrieve a device's proximity domain.
I don't know if there is something equivalent or a generic way to get
_PXM information. I think the VMM also needs to make sure that when an
assigned device is used with a vIOMMU, some memory is allocated from the
device's proximity domain.

> But it might be better to keep the bare minimum information in the FW
> descriptor, and put the rest in the virtio-iommu. So yes topology
> enumeration is something the device cannot do itself (not fully that
> is, see (2)) but for the rest, virtio-iommu's PROBE request can
> provide details about each endpoint in relation to their physical
> IOMMU.
> 
> We could for example add a bit in a PROBE property saying that the
> whole path between the IOMMU and the endpoint supports ATS. For NUMA
> it might also be more interesting to have a finer granularity, since
> one viommu could be managing endpoints that are behind different
> physical IOMMUs. If in the future we want to allocate page tables
> close to the physical IOMMU for example, we might need to describe
> multiple NUMA nodes per viommu, using the PROBE request.

Should we reinvent something for NUMA or use ACPI's SRAT, _PXM? I am
not sure how it is handled today in QEMU in terms of guest-host NUMA
proximity domain mapping.

Jean-Philippe Brucker Dec. 18, 2019, 11:20 a.m. UTC | #6
On Tue, Dec 03, 2019 at 07:01:36PM -0800, Jacob Pan (Jun) wrote:
> > We could for example add a bit in a PROBE property saying that the
> > whole path between the IOMMU and the endpoint supports ATS. For NUMA
> > it might also be more interesting to have a finer granularity, since
> > one viommu could be managing endpoints that are behind different
> > physical IOMMUs. If in the future we want to allocate page tables
> > close to the physical IOMMU for example, we might need to describe
> > multiple NUMA nodes per viommu, using the PROBE request.
> 
> Should we reinvent something for NUMA or use ACPI's SRAT, _PXM?

Regardless of whether we put it in the VIOT or in the virtio-iommu PROBE
request, we necessarily need to reuse the node IDs defined by SRAT (or
numa-node-id on devicetree, also a 32-bit value). A virtio-pci based
virtio-iommu already has the _PXM of its closest bridge and wouldn't need
anything more in the VIOT, while a virtio-mmio based virtio-iommu would
need a proximity domain field in the VIOT. That could be added later since
the table is extensible, but as you pointed out, that information might
not be very useful.

> I am not sure how it is handled today in QEMU in terms of guest-host
> NUMA proximity domain mapping.


It looks like the user can specify this guest-host mapping on the
command-line:

  -object memory-backend-ram,id=mem0,size=4G,host-nodes=3,policy=bind
  -object memory-backend-ram,id=mem1,size=4G,host-nodes=4,policy=bind
  -numa node,memdev=mem0,nodeid=numa0
  -numa node,memdev=mem1,nodeid=numa1
  -numa cpu,node-id=numa0,socket-id=0
  -numa cpu,node-id=numa1,socket-id=1

numa0 and numa1 would get proximity domains 0 and 1, corresponding to host
domains 3 and 4. It is also possible to specify the NUMA node of a PCI bus
(via the PCI expander bridge), and therefore to assign a VFIO PCI device
in the same proximity domain as its physical location.

  -device pxb,id=bridge1,bus=pci.0,numa_node=1 (simplified)
  -device vfio-pci,host=03:01.0,bus=bridge1

Linux can use this information to allocate DMA close to the endpoint (see
for example __iommu_dma_alloc_pages()). For page table allocation,
io-pgtables currently takes the node ID of the IOMMU device, not the
endpoint. For the scenario you describe (virtio-iommu endpoints managed by
different physical IOMMUs), we would need to take for example the node ID
of the first endpoint in the iommu_domain for which we're allocating page
tables.
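
A minimal sketch of that idea, using only generic helpers (how the
representative endpoint is chosen is left out):

#include <linux/device.h>
#include <linux/gfp.h>

/* Allocate one zeroed page-table page close to a given endpoint */
static struct page *viommu_alloc_pgtable_page(struct device *endpoint)
{
        int nid = dev_to_node(endpoint); /* NUMA_NO_NODE if unknown */

        return alloc_pages_node(nid, GFP_KERNEL | __GFP_ZERO, 0);
}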

Is it safe to assume that the pIOMMU is in the same proximity domain as
the physical endpoint?  If that's the case, then the guest already has all
the information it needs. Otherwise it's easy to add a proximity domain
PROBE property for each endpoint. Configuring the host to pass that
information might be more difficult.


Off topic, I've been wondering how to make iommu-sva aware of NUMA
topology as well, so that when handling a page request we allocate memory
on the faulting device's NUMA node, but I think it might require invasive
changes to the mm subsystem, to pass a NUMA node to handle_mm_fault().

Thanks,
Jean
Jacob Pan (Jun) Dec. 20, 2019, 6:54 p.m. UTC | #7
On Wed, 18 Dec 2019 12:20:44 +0100
Jean-Philippe Brucker <jean-philippe@linaro.org> wrote:


> > I am not sure how it is handled today in QEMU in terms of guest-host
> > NUMA proximity domain mapping.
> 
> It looks like the user can specify this guest-host mapping on the
> command-line:
> 
>   -object memory-backend-ram,id=mem0,size=4G,host-nodes=3,policy=bind
>   -object memory-backend-ram,id=mem1,size=4G,host-nodes=4,policy=bind
>   -numa node,memdev=mem0,nodeid=numa0
>   -numa node,memdev=mem1,nodeid=numa1
>   -numa cpu,node-id=numa0,socket-id=0
>   -numa cpu,node-id=numa1,socket-id=1
> 
> numa0 and numa1 would get proximity domains 0 and 1, corresponding to
> host domains 3 and 4. It is also possible to specify the NUMA node of
> a PCI bus (via the PCI expander bridge), and therefore to assign a
> VFIO PCI device in the same proximity domain as its physical location.
> 
>   -device pxb,id=bridge1,bus=pci.0,numa_node=1 (simplified)
>   -device vfio-pci,host=03:01.0,bus=bridge1

Thanks a lot for the thorough explanation. I will give that a try on
x86; I assume the ACPI tables are also built to match these command-line
options.
> Linux can use this information to allocate DMA close to the endpoint
> (see for example __iommu_dma_alloc_pages()). For page tables
> allocation, io-pgtables currently takes the node ID of the IOMMU
> device, not the endpoint. For the scenario you describe (virtio-iommu
> endpoints managed by different physical IOMMU), we would need to take
> for example the node ID of the first endpoint in the iommu_domain for
> which we're allocating page tables.

If the iommu_domain is shared by multiple devices from different NUMA
nodes, I guess taking the first one is as good as any. It would not be
an optimal configuration.
> Is it safe to assume that the pIOMMU is in the same proximity domain
> as the physical endpoint?

I think it is a safe assumption.
> If that's the case, then the guest already
> has all the information it needs. Otherwise it's easy to add a
> proximity domain PROBE property for each endpoint. Configuring the
> host to pass that information might be more difficult.

I agree, the guest should always allocate DMA and IOVA page tables based
on the endpoint. VT-d currently allocates page table pages based on the
IOMMU's NUMA node; that might have to change.