
[v3] PCI: vmd: Honor ACPI _OSC on PCIe features

Message ID 20211203031541.1428904-1-kai.heng.feng@canonical.com
State Accepted
Commit 04b12ef163d10e348db664900ae7f611b83c7a0e
Series: [v3] PCI: vmd: Honor ACPI _OSC on PCIe features

Commit Message

Kai-Heng Feng Dec. 3, 2021, 3:15 a.m. UTC
When Samsung PCIe Gen4 NVMe is connected to Intel ADL VMD, the
combination causes AER message flood and drags the system performance
down.

The issue doesn't happen when VMD mode is disabled in BIOS, since AER
isn't enabled by acpi_pci_root_create(). When VMD mode is enabled, AER
is enabled regardless of _OSC:
[    0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
...
[    1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146

Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC
and disable PCIe features accordingly to resolve the issue.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215027
Suggested-by: Rafael J. Wysocki <rafael@kernel.org>
Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
---
v3:
 - Use a new helper function.

v2:
 - Use pci_find_host_bridge() instead of open coding.

 drivers/pci/controller/vmd.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

Comments

Rafael J. Wysocki Dec. 3, 2021, 2 p.m. UTC | #1
On Fri, Dec 3, 2021 at 4:16 AM Kai-Heng Feng
<kai.heng.feng@canonical.com> wrote:
>
> When Samsung PCIe Gen4 NVMe is connected to Intel ADL VMD, the
> combination causes AER message flood and drags the system performance
> down.
>
> The issue doesn't happen when VMD mode is disabled in BIOS, since AER
> isn't enabled by acpi_pci_root_create() . When VMD mode is enabled, AER
> is enabled regardless of _OSC:
> [    0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
> ...
> [    1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146
>
> Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC to
> disable PCIe features accordingly to resolve the issue.
>
> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215027
> Suggested-by: Rafael J. Wysocki <rafael@kernel.org>
> Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
> ---
> v3:
>  - Use a new helper function.
>
> v2:
>  - Use pci_find_host_bridge() instead of open coding.
>
>  drivers/pci/controller/vmd.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index a45e8e59d3d48..691765e6c12aa 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -661,6 +661,21 @@ static int vmd_alloc_irqs(struct vmd_dev *vmd)
>         return 0;
>  }
>
> +/*
> + * Since VMD is an aperture to regular PCIe root ports, only allow it to
> + * control features that the OS is allowed to control on the physical PCI bus.
> + */

I'd put the comment inside the function, but nevertheless

Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

> +static void vmd_copy_host_bridge_flags(struct pci_host_bridge *root_bridge,
> +                                      struct pci_host_bridge *vmd_bridge)
> +{
> +       vmd_bridge->native_pcie_hotplug = root_bridge->native_pcie_hotplug;
> +       vmd_bridge->native_shpc_hotplug = root_bridge->native_shpc_hotplug;
> +       vmd_bridge->native_aer = root_bridge->native_aer;
> +       vmd_bridge->native_pme = root_bridge->native_pme;
> +       vmd_bridge->native_ltr = root_bridge->native_ltr;
> +       vmd_bridge->native_dpc = root_bridge->native_dpc;
> +}
> +
>  static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>  {
>         struct pci_sysdata *sd = &vmd->sysdata;
> @@ -798,6 +813,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>                 return -ENODEV;
>         }
>
> +       vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
> +                                  to_pci_host_bridge(vmd->bus->bridge));
> +
>         vmd_attach_resources(vmd);
>         if (vmd->irq_domain)
>                 dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
> --
> 2.32.0
>
Keith Busch Dec. 6, 2021, 11:12 p.m. UTC | #2
On Fri, Dec 03, 2021 at 11:15:41AM +0800, Kai-Heng Feng wrote:
> When Samsung PCIe Gen4 NVMe is connected to Intel ADL VMD, the
> combination causes AER message flood and drags the system performance
> down.
> 
> The issue doesn't happen when VMD mode is disabled in BIOS, since AER
> isn't enabled by acpi_pci_root_create() . When VMD mode is enabled, AER
> is enabled regardless of _OSC:
> [    0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
> ...
> [    1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146
> 
> Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC to
> disable PCIe features accordingly to resolve the issue.

At least for some versions of this hardware, I recall ACPI is unaware of
any devices in the VMD domain; the platform cannot see past the VMD
endpoint, so I thought the driver was supposed to always let the VMD
domain use OS native support regardless of the parent's ACPI _OSC.

 
> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215027
> Suggested-by: Rafael J. Wysocki <rafael@kernel.org>
> Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
> ---
> v3:
>  - Use a new helper function.
> 
> v2:
>  - Use pci_find_host_bridge() instead of open coding.
> 
>  drivers/pci/controller/vmd.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index a45e8e59d3d48..691765e6c12aa 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -661,6 +661,21 @@ static int vmd_alloc_irqs(struct vmd_dev *vmd)
>  	return 0;
>  }
>  
> +/*
> + * Since VMD is an aperture to regular PCIe root ports, only allow it to
> + * control features that the OS is allowed to control on the physical PCI bus.
> + */
> +static void vmd_copy_host_bridge_flags(struct pci_host_bridge *root_bridge,
> +				       struct pci_host_bridge *vmd_bridge)
> +{
> +	vmd_bridge->native_pcie_hotplug = root_bridge->native_pcie_hotplug;
> +	vmd_bridge->native_shpc_hotplug = root_bridge->native_shpc_hotplug;
> +	vmd_bridge->native_aer = root_bridge->native_aer;
> +	vmd_bridge->native_pme = root_bridge->native_pme;
> +	vmd_bridge->native_ltr = root_bridge->native_ltr;
> +	vmd_bridge->native_dpc = root_bridge->native_dpc;
> +}
> +
>  static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>  {
>  	struct pci_sysdata *sd = &vmd->sysdata;
> @@ -798,6 +813,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>  		return -ENODEV;
>  	}
>  
> +	vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
> +				   to_pci_host_bridge(vmd->bus->bridge));
> +
>  	vmd_attach_resources(vmd);
>  	if (vmd->irq_domain)
>  		dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
> -- 
> 2.32.0
>
Rafael J. Wysocki Dec. 7, 2021, 1:15 p.m. UTC | #3
On Tue, Dec 7, 2021 at 12:12 AM Keith Busch <kbusch@kernel.org> wrote:
>
> On Fri, Dec 03, 2021 at 11:15:41AM +0800, Kai-Heng Feng wrote:
> > When Samsung PCIe Gen4 NVMe is connected to Intel ADL VMD, the
> > combination causes AER message flood and drags the system performance
> > down.
> >
> > The issue doesn't happen when VMD mode is disabled in BIOS, since AER
> > isn't enabled by acpi_pci_root_create() . When VMD mode is enabled, AER
> > is enabled regardless of _OSC:
> > [    0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
> > ...
> > [    1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146
> >
> > Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC to
> > disable PCIe features accordingly to resolve the issue.
>
> At least for some versions of this hardware, I recall ACPI is unaware of
> any devices in the VMD domain; the platform cannot see past the VMD
> endpoint, so I thought the driver was supposed to always let the VMD
> domain use OS native support regardless of the parent's ACPI _OSC.

This is orthogonal to whether or not ACPI is aware of the VMD domain
or the devices in it.

If the platform firmware does not allow the OS to control specific
PCIe features at the physical host bridge level, that extends to the
VMD "bus", because it is just a way to expose a hidden part of the
PCIe hierarchy.

The platform firmware does that through ACPI _OSC under the host
bridge device (not under the VMD device) which it is very well aware
of.
Lorenzo Pieralisi Jan. 4, 2022, 3:26 p.m. UTC | #4
On Fri, 3 Dec 2021 11:15:41 +0800, Kai-Heng Feng wrote:
> When Samsung PCIe Gen4 NVMe is connected to Intel ADL VMD, the
> combination causes AER message flood and drags the system performance
> down.
> 
> The issue doesn't happen when VMD mode is disabled in BIOS, since AER
> isn't enabled by acpi_pci_root_create() . When VMD mode is enabled, AER
> is enabled regardless of _OSC:
> [    0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
> ...
> [    1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146
> 
> [...]

Applied to pci/vmd, thanks!

[1/1] PCI: vmd: Honor ACPI _OSC on PCIe features
      https://git.kernel.org/lpieralisi/pci/c/04b12ef163

Thanks,
Lorenzo
Bjorn Helgaas Feb. 9, 2022, 9:36 p.m. UTC | #5
On Tue, Dec 07, 2021 at 02:15:04PM +0100, Rafael J. Wysocki wrote:
> On Tue, Dec 7, 2021 at 12:12 AM Keith Busch <kbusch@kernel.org> wrote:
> > On Fri, Dec 03, 2021 at 11:15:41AM +0800, Kai-Heng Feng wrote:
> > > When Samsung PCIe Gen4 NVMe is connected to Intel ADL VMD, the
> > > combination causes AER message flood and drags the system performance
> > > down.
> > >
> > > The issue doesn't happen when VMD mode is disabled in BIOS, since AER
> > > isn't enabled by acpi_pci_root_create() . When VMD mode is enabled, AER
> > > is enabled regardless of _OSC:
> > > [    0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
> > > ...
> > > [    1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146
> > >
> > > Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC to
> > > disable PCIe features accordingly to resolve the issue.
> >
> > At least for some versions of this hardware, I recall ACPI is unaware of
> > any devices in the VMD domain; the platform cannot see past the VMD
> > endpoint, so I thought the driver was supposed to always let the VMD
> > domain use OS native support regardless of the parent's ACPI _OSC.
> 
> This is orthogonal to whether or not ACPI is aware of the VMD domain
> or the devices in it.
> 
> If the platform firmware does not allow the OS to control specific
> PCIe features at the physical host bridge level, that extends to the
> VMD "bus", because it is just a way to expose a hidden part of the
> PCIe hierarchy.

I don't understand what's going on here.  Do we understand the AER
message flood?  Are we just papering over it by disabling AER?

If an error occurs below a VMD, who notices and reports it?  If we
disable native AER below VMD because of _OSC, as this patch does, I
guess we're assuming the platform will handle AER events below VMD.
Is that really true?  Does the platform know how to find AER log
registers of devices below VMD?

> The platform firmware does that through ACPI _OSC under the host
> bridge device (not under the VMD device) which it is very well aware
> of.
Jonathan Derrick Feb. 10, 2022, 5:52 p.m. UTC | #6
On 2/9/2022 2:36 PM, Bjorn Helgaas wrote:
> On Tue, Dec 07, 2021 at 02:15:04PM +0100, Rafael J. Wysocki wrote:
>> On Tue, Dec 7, 2021 at 12:12 AM Keith Busch <kbusch@kernel.org> wrote:
>>> On Fri, Dec 03, 2021 at 11:15:41AM +0800, Kai-Heng Feng wrote:
>>>> When Samsung PCIe Gen4 NVMe is connected to Intel ADL VMD, the
>>>> combination causes AER message flood and drags the system performance
>>>> down.
>>>>
>>>> The issue doesn't happen when VMD mode is disabled in BIOS, since AER
>>>> isn't enabled by acpi_pci_root_create() . When VMD mode is enabled, AER
>>>> is enabled regardless of _OSC:
>>>> [    0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
>>>> ...
>>>> [    1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146
>>>>
>>>> Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC to
>>>> disable PCIe features accordingly to resolve the issue.
>>>
>>> At least for some versions of this hardware, I recall ACPI is unaware of
>>> any devices in the VMD domain; the platform cannot see past the VMD
>>> endpoint, so I thought the driver was supposed to always let the VMD
>>> domain use OS native support regardless of the parent's ACPI _OSC.
>>
>> This is orthogonal to whether or not ACPI is aware of the VMD domain
>> or the devices in it.
>>
>> If the platform firmware does not allow the OS to control specific
>> PCIe features at the physical host bridge level, that extends to the
>> VMD "bus", because it is just a way to expose a hidden part of the
>> PCIe hierarchy.
> 
> I don't understand what's going on here.  Do we understand the AER
> message flood?  Are we just papering over it by disabling AER?
> 
> If an error occurs below a VMD, who notices and reports it?  If we
> disable native AER below VMD because of _OSC, as this patch does, I
> guess we're assuming the platform will handle AER events below VMD.
> Is that really true?  Does the platform know how to find AER log
> registers of devices below VMD?
ACPI (and the specific UEFI implementation) might remain unaware of
VMD domains. It's possible that the system management mode (SMM)
controller, which typically handles firmware-first errors, would be
capable of handling VMD errors in a vendor-specific manner.
However, if _OSC didn't take VMD ports into account, SMM wouldn't
be capable of handling those errors, and silently disabling AER on
VMD domains would be a bad idea.

The bugzilla made it sound like a specific platform/drive combination.
What about a DMI match to mask the Corrected Physical Layer bits?

> 
>> The platform firmware does that through ACPI _OSC under the host
>> bridge device (not under the VMD device) which it is very well aware
>> of.
Kai-Heng Feng Feb. 14, 2022, 12:23 a.m. UTC | #7
Hi Bjorn,

On Thu, Feb 10, 2022 at 5:36 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Tue, Dec 07, 2021 at 02:15:04PM +0100, Rafael J. Wysocki wrote:
> > On Tue, Dec 7, 2021 at 12:12 AM Keith Busch <kbusch@kernel.org> wrote:
> > > On Fri, Dec 03, 2021 at 11:15:41AM +0800, Kai-Heng Feng wrote:
> > > > When Samsung PCIe Gen4 NVMe is connected to Intel ADL VMD, the
> > > > combination causes AER message flood and drags the system performance
> > > > down.
> > > >
> > > > The issue doesn't happen when VMD mode is disabled in BIOS, since AER
> > > > isn't enabled by acpi_pci_root_create() . When VMD mode is enabled, AER
> > > > is enabled regardless of _OSC:
> > > > [    0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
> > > > ...
> > > > [    1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146
> > > >
> > > > Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC to
> > > > disable PCIe features accordingly to resolve the issue.
> > >
> > > At least for some versions of this hardware, I recall ACPI is unaware of
> > > any devices in the VMD domain; the platform cannot see past the VMD
> > > endpoint, so I thought the driver was supposed to always let the VMD
> > > domain use OS native support regardless of the parent's ACPI _OSC.
> >
> > This is orthogonal to whether or not ACPI is aware of the VMD domain
> > or the devices in it.
> >
> > If the platform firmware does not allow the OS to control specific
> > PCIe features at the physical host bridge level, that extends to the
> > VMD "bus", because it is just a way to expose a hidden part of the
> > PCIe hierarchy.
>
> I don't understand what's going on here.  Do we understand the AER
> message flood?  Are we just papering over it by disabling AER?

To be more precise, AER is disabled by the platform vendor in BIOS to
paper over the issue.
The only viable solution for us is to follow their settings. We may
never know what really happens underneath.

Disabling ASPM/AER/PME etc is a normal practice for ODMs unfortunately.

Kai-Heng

>
> If an error occurs below a VMD, who notices and reports it?  If we
> disable native AER below VMD because of _OSC, as this patch does, I
> guess we're assuming the platform will handle AER events below VMD.
> Is that really true?  Does the platform know how to find AER log
> registers of devices below VMD?
>
> > The platform firmware does that through ACPI _OSC under the host
> > bridge device (not under the VMD device) which it is very well aware
> > of.
Bjorn Helgaas Feb. 15, 2022, 3:09 p.m. UTC | #8
On Mon, Feb 14, 2022 at 08:23:05AM +0800, Kai-Heng Feng wrote:
> On Thu, Feb 10, 2022 at 5:36 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > On Tue, Dec 07, 2021 at 02:15:04PM +0100, Rafael J. Wysocki wrote:
> > > On Tue, Dec 7, 2021 at 12:12 AM Keith Busch <kbusch@kernel.org> wrote:
> > > > On Fri, Dec 03, 2021 at 11:15:41AM +0800, Kai-Heng Feng wrote:
> > > > > When Samsung PCIe Gen4 NVMe is connected to Intel ADL VMD, the
> > > > > combination causes AER message flood and drags the system performance
> > > > > down.
> > > > >
> > > > > The issue doesn't happen when VMD mode is disabled in BIOS, since AER
> > > > > isn't enabled by acpi_pci_root_create() . When VMD mode is enabled, AER
> > > > > is enabled regardless of _OSC:
> > > > > [    0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
> > > > > ...
> > > > > [    1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146
> > > > >
> > > > > Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC to
> > > > > disable PCIe features accordingly to resolve the issue.
> > > >
> > > > At least for some versions of this hardware, I recall ACPI is unaware of
> > > > any devices in the VMD domain; the platform cannot see past the VMD
> > > > endpoint, so I thought the driver was supposed to always let the VMD
> > > > domain use OS native support regardless of the parent's ACPI _OSC.
> > >
> > > This is orthogonal to whether or not ACPI is aware of the VMD domain
> > > or the devices in it.
> > >
> > > If the platform firmware does not allow the OS to control specific
> > > PCIe features at the physical host bridge level, that extends to the
> > > VMD "bus", because it is just a way to expose a hidden part of the
> > > PCIe hierarchy.
> >
> > I don't understand what's going on here.  Do we understand the AER
> > message flood?  Are we just papering over it by disabling AER?
> 
> To be more precise, AER is disabled by the platform vendor in BIOS to
> paper over the issue.
> The only viable solution for us is to follow their settings. We may
> never know what really happens underneath.
> 
> Disabling ASPM/AER/PME etc is a normal practice for ODMs unfortunately.

OK.  So this patch actually has nothing in particular to do with AER.
It's about making _OSC apply to *all* devices below a host bridge,
even those below a VMD.

This is slightly ambiguous because while "_OSC applies to the entire
hierarchy originated by a PCI Host Bridge" (PCI Firmware spec r3.3,
sec 4.5.1), vmd.c creates a logical view where devices below the VMD
are in a separate hierarchy with a separate domain.

The interpretation that _OSC applies to devices below VMD should work,
as long as it is possible for platform firmware to manage services
(AER, pciehp, etc) for things below VMD without getting in the way of
vmd.c.

But I think one implication of this is that we cannot support
hot-added VMDs.  For example, firmware that wants to manage AER will
use _OSC to retain AER control.  But if the firmware doesn't know how
VMDs work, it will not be able to handle AER for devices below the
VMD.

> > If an error occurs below a VMD, who notices and reports it?  If we
> > disable native AER below VMD because of _OSC, as this patch does, I
> > guess we're assuming the platform will handle AER events below VMD.
> > Is that really true?  Does the platform know how to find AER log
> > registers of devices below VMD?
> >
> > > The platform firmware does that through ACPI _OSC under the host
> > > bridge device (not under the VMD device) which it is very well aware
> > > of.
Rafael J. Wysocki Feb. 15, 2022, 5:09 p.m. UTC | #9
On Tue, Feb 15, 2022 at 4:09 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Mon, Feb 14, 2022 at 08:23:05AM +0800, Kai-Heng Feng wrote:
> > On Thu, Feb 10, 2022 at 5:36 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > On Tue, Dec 07, 2021 at 02:15:04PM +0100, Rafael J. Wysocki wrote:
> > > > On Tue, Dec 7, 2021 at 12:12 AM Keith Busch <kbusch@kernel.org> wrote:
> > > > > On Fri, Dec 03, 2021 at 11:15:41AM +0800, Kai-Heng Feng wrote:
> > > > > > When Samsung PCIe Gen4 NVMe is connected to Intel ADL VMD, the
> > > > > > combination causes AER message flood and drags the system performance
> > > > > > down.
> > > > > >
> > > > > > The issue doesn't happen when VMD mode is disabled in BIOS, since AER
> > > > > > isn't enabled by acpi_pci_root_create() . When VMD mode is enabled, AER
> > > > > > is enabled regardless of _OSC:
> > > > > > [    0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
> > > > > > ...
> > > > > > [    1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146
> > > > > >
> > > > > > Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC to
> > > > > > disable PCIe features accordingly to resolve the issue.
> > > > >
> > > > > At least for some versions of this hardware, I recall ACPI is unaware of
> > > > > any devices in the VMD domain; the platform cannot see past the VMD
> > > > > endpoint, so I thought the driver was supposed to always let the VMD
> > > > > domain use OS native support regardless of the parent's ACPI _OSC.
> > > >
> > > > This is orthogonal to whether or not ACPI is aware of the VMD domain
> > > > or the devices in it.
> > > >
> > > > If the platform firmware does not allow the OS to control specific
> > > > PCIe features at the physical host bridge level, that extends to the
> > > > VMD "bus", because it is just a way to expose a hidden part of the
> > > > PCIe hierarchy.
> > >
> > > I don't understand what's going on here.  Do we understand the AER
> > > message flood?  Are we just papering over it by disabling AER?
> >
> > To be more precise, AER is disabled by the platform vendor in BIOS to
> > paper over the issue.
> > The only viable solution for us is to follow their settings. We may
> > never know what really happens underneath.
> >
> > Disabling ASPM/AER/PME etc is a normal practice for ODMs unfortunately.
>
> OK.  So this patch actually has nothing in particular to do with AER.
> It's about making _OSC apply to *all* devices below a host bridge,
> even those below a VMD.

Right.

> This is slightly ambiguous because while "_OSC applies to the entire
> hierarchy originated by a PCI Host Bridge" (PCI Firmware spec r3.3,
> sec 4.5.1), vmd.c creates a logical view where devices below the VMD
> are in a separate hierarchy with a separate domain.

But from the HW perspective they still are in the same hierarchy below
the original host bridge.

> The interpretation that _OSC applies to devices below VMD should work,
> as long as it is possible for platform firmware to manage services
> (AER, pciehp, etc) for things below VMD without getting in the way of
> vmd.c.

vmd.c actually exposes things hidden by the firmware and the point of
the patch is to still let the firmware control them if it wants/needs
to IIUC.

> But I think one implication of this is that we cannot support
> hot-added VMDs.  For example, firmware that wants to manage AER will
> use _OSC to retain AER control.  But if the firmware doesn't know how
> VMDs work, it will not be able to handle AER for devices below the
> VMD.

Well, the firmware needs to know how stuff works to hide it in the
first place ...

> > > If an error occurs below a VMD, who notices and reports it?  If we
> > > disable native AER below VMD because of _OSC, as this patch does, I
> > > guess we're assuming the platform will handle AER events below VMD.
> > > Is that really true?  Does the platform know how to find AER log
> > > registers of devices below VMD?
> > >
> > > > The platform firmware does that through ACPI _OSC under the host
> > > > bridge device (not under the VMD device) which it is very well aware
> > > > of.
Bjorn Helgaas Feb. 16, 2022, 1:53 a.m. UTC | #10
On Tue, Feb 15, 2022 at 06:09:15PM +0100, Rafael J. Wysocki wrote:
> On Tue, Feb 15, 2022 at 4:09 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > On Mon, Feb 14, 2022 at 08:23:05AM +0800, Kai-Heng Feng wrote:
> > > On Thu, Feb 10, 2022 at 5:36 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > On Tue, Dec 07, 2021 at 02:15:04PM +0100, Rafael J. Wysocki wrote:
> > > > > On Tue, Dec 7, 2021 at 12:12 AM Keith Busch <kbusch@kernel.org> wrote:
> > > > > > On Fri, Dec 03, 2021 at 11:15:41AM +0800, Kai-Heng Feng wrote:
> > > > > > > When Samsung PCIe Gen4 NVMe is connected to Intel ADL VMD, the
> > > > > > > combination causes AER message flood and drags the system performance
> > > > > > > down.
> > > > > > >
> > > > > > > The issue doesn't happen when VMD mode is disabled in BIOS, since AER
> > > > > > > isn't enabled by acpi_pci_root_create() . When VMD mode is enabled, AER
> > > > > > > is enabled regardless of _OSC:
> > > > > > > [    0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
> > > > > > > ...
> > > > > > > [    1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146
> > > > > > >
> > > > > > > Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC to
> > > > > > > disable PCIe features accordingly to resolve the issue.
> > > > > >
> > > > > > At least for some versions of this hardware, I recall ACPI is unaware of
> > > > > > any devices in the VMD domain; the platform cannot see past the VMD
> > > > > > endpoint, so I thought the driver was supposed to always let the VMD
> > > > > > domain use OS native support regardless of the parent's ACPI _OSC.
> > > > >
> > > > > This is orthogonal to whether or not ACPI is aware of the VMD domain
> > > > > or the devices in it.
> > > > >
> > > > > If the platform firmware does not allow the OS to control specific
> > > > > PCIe features at the physical host bridge level, that extends to the
> > > > > VMD "bus", because it is just a way to expose a hidden part of the
> > > > > PCIe hierarchy.
> > > >
> > > > I don't understand what's going on here.  Do we understand the AER
> > > > message flood?  Are we just papering over it by disabling AER?
> > >
> > > To be more precise, AER is disabled by the platform vendor in BIOS to
> > > paper over the issue.
> > > The only viable solution for us is to follow their settings. We may
> > > never know what really happens underneath.
> > >
> > > Disabling ASPM/AER/PME etc is a normal practice for ODMs unfortunately.
> >
> > OK.  So this patch actually has nothing in particular to do with AER.
> > It's about making _OSC apply to *all* devices below a host bridge,
> > even those below a VMD.
> 
> Right.
> 
> > This is slightly ambiguous because while "_OSC applies to the entire
> > hierarchy originated by a PCI Host Bridge" (PCI Firmware spec r3.3,
> > sec 4.5.1), vmd.c creates a logical view where devices below the VMD
> > are in a separate hierarchy with a separate domain.
> 
> But from the HW perspective they still are in the same hierarchy below
> the original host bridge.

I suppose in some sense it's the same hierarchy because the electrical
connection all goes through a root port in the original host bridge,
but it's a little muddy because according to [1], a VMD spawns a new
hierarchy that can use an entirely new [bus 00-ff] space, and the
hierarchy below VMD uses a new config access mechanism independent of
ECAM or whatever the original host bridge uses.

> > The interpretation that _OSC applies to devices below VMD should work,
> > as long as it is possible for platform firmware to manage services
> > (AER, pciehp, etc) for things below VMD without getting in the way of
> > vmd.c.
> 
> vmd.c actually exposes things hidden by the firmware and the point of
> the patch is to still let the firmware control them if it wants/needs
> to IIUC.

My mental picture is that without vmd.c, Linux would enumerate the VMD
RCiEP itself, but none of the devices below the VMD would be visible.
With vmd.c, devices below the VMD RCiEP are visible.  Maybe this
picture is incorrect or too simple?

Apparently there's a firmware toggle, but I don't know exactly what it
does.  Maybe if the toggle is set to disable VMD, the VMD device looks
like a regular Root Port and the devices below are enumerated
normally even without any vmd.c?

> > But I think one implication of this is that we cannot support
> > hot-added VMDs.  For example, firmware that wants to manage AER will
> > use _OSC to retain AER control.  But if the firmware doesn't know how
> > VMDs work, it will not be able to handle AER for devices below the
> > VMD.
> 
> Well, the firmware needs to know how stuff works to hide it in the
> first place ...

[1] does also say that VMD is a Root Complex *Integrated* Endpoint,
which could not be hotplugged.  But I don't see anything in the code
that actually enforces or requires that, so I don't know what to make
of it.

If it's possible to hot-add a VMD device, firmware wouldn't be
involved in configuring it (assuming pciehp hotplug).  I assume the
new VMD would look like an Endpoint, and if vmd.c is present, maybe it
could construct a new hierarchy below that Endpoint?  In that case, we
have to assume firmware doesn't know how to operate VMD, so even if
firmware manages AER in general, it wouldn't be able to do it for
things below the VMD.

> > > > If an error occurs below a VMD, who notices and reports it?  If we
> > > > disable native AER below VMD because of _OSC, as this patch does, I
> > > > guess we're assuming the platform will handle AER events below VMD.
> > > > Is that really true?  Does the platform know how to find AER log
> > > > registers of devices below VMD?
> > > >
> > > > > The platform firmware does that through ACPI _OSC under the host
> > > > > bridge device (not under the VMD device) which it is very well aware
> > > > > of.

[1] https://git.kernel.org/linus/185a383ada2e
Christoph Hellwig Feb. 16, 2022, 8:14 a.m. UTC | #11
On Tue, Feb 15, 2022 at 07:53:03PM -0600, Bjorn Helgaas wrote:
> Apparently there's a firmware toggle, but I don't know exactly what it
> does.  Maybe if the toggle is set to disable VMD, the VMD device looks
> like a regular Root Port and the devices below are enumerated
> normally even without any vmd.c?

Yes.  VMD is just an Intel invention to make the OS's life incredibly
painful (and to allow Intel to force binding their NVMe driver instead
of the Microsoft one on Windows).
Rafael J. Wysocki Feb. 16, 2022, 12:37 p.m. UTC | #12
On Wed, Feb 16, 2022 at 2:53 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
>
> On Tue, Feb 15, 2022 at 06:09:15PM +0100, Rafael J. Wysocki wrote:
> > On Tue, Feb 15, 2022 at 4:09 PM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > On Mon, Feb 14, 2022 at 08:23:05AM +0800, Kai-Heng Feng wrote:
> > > > On Thu, Feb 10, 2022 at 5:36 AM Bjorn Helgaas <helgaas@kernel.org> wrote:
> > > > > On Tue, Dec 07, 2021 at 02:15:04PM +0100, Rafael J. Wysocki wrote:
> > > > > > On Tue, Dec 7, 2021 at 12:12 AM Keith Busch <kbusch@kernel.org> wrote:
> > > > > > > On Fri, Dec 03, 2021 at 11:15:41AM +0800, Kai-Heng Feng wrote:
> > > > > > > > When Samsung PCIe Gen4 NVMe is connected to Intel ADL VMD, the
> > > > > > > > combination causes AER message flood and drags the system performance
> > > > > > > > down.
> > > > > > > >
> > > > > > > > The issue doesn't happen when VMD mode is disabled in BIOS, since AER
> > > > > > > > isn't enabled by acpi_pci_root_create() . When VMD mode is enabled, AER
> > > > > > > > is enabled regardless of _OSC:
> > > > > > > > [    0.410076] acpi PNP0A08:00: _OSC: platform does not support [AER]
> > > > > > > > ...
> > > > > > > > [    1.486704] pcieport 10000:e0:06.0: AER: enabled with IRQ 146
> > > > > > > >
> > > > > > > > Since VMD is an aperture to regular PCIe root ports, honor ACPI _OSC to
> > > > > > > > disable PCIe features accordingly to resolve the issue.
> > > > > > >
> > > > > > > At least for some versions of this hardware, I recall ACPI is unaware of
> > > > > > > any devices in the VMD domain; the platform can not see past the VMD
> > > > > > > endpoint, so I thought the driver was supposed to always let the VMD
> > > > > > > domain use OS native support regardless of the parent's ACPI _OSC.
> > > > > >
> > > > > > This is orthogonal to whether or not ACPI is aware of the VMD domain
> > > > > > or the devices in it.
> > > > > >
> > > > > > If the platform firmware does not allow the OS to control specific
> > > > > > PCIe features at the physical host bridge level, that extends to the
> > > > > > VMD "bus", because it is just a way to expose a hidden part of the
> > > > > > PCIe hierarchy.
> > > > >
> > > > > I don't understand what's going on here.  Do we understand the AER
> > > > > message flood?  Are we just papering over it by disabling AER?
> > > >
> > > > To be more precise, AER is disabled by the platform vendor in BIOS to
> > > > paper over the issue.
> > > > The only viable solution for us is to follow their settings. We may
> > > > never know what really happens underneath.
> > > >
> > > > Disabling ASPM/AER/PME etc is a normal practice for ODMs unfortunately.
> > >
> > > OK.  So this patch actually has nothing in particular to do with AER.
> > > It's about making _OSC apply to *all* devices below a host bridge,
> > > even those below a VMD.
> >
> > Right.
> >
> > > This is slightly ambiguous because while "_OSC applies to the entire
> > > hierarchy originated by a PCI Host Bridge" (PCI Firmware spec r3.3,
> > > sec 4.5.1), vmd.c creates a logical view where devices below the VMD
> > > are in a separate hierarchy with a separate domain.
> >
> > But from the HW perspective they still are in the same hierarchy below
> > the original host bridge.
>
> I suppose in some sense it's the same hierarchy because the electrical
> connection all goes through a root port in the original host bridge,
> but it's a little muddy because according to [1], a VMD spawns a new
> hierarchy that can use an entirely new [bus 00-ff] space, and the
> hierarchy below VMD uses a new config access mechanism independent of
> ECAM or whatever the original host bridge uses.

IIUC, that's part of the hiding mechanism.  See below.

> > > The interpretation that _OSC applies to devices below VMD should work,
> > > as long as it is possible for platform firmware to manage services
> > > (AER, pciehp, etc) for things below VMD without getting in the way of
> > > vmd.c.
> >
> > vmd.c actually exposes things hidden by the firmware and the point of
> > the patch is to still let the firmware control them if it wants/needs
> > to IIUC.
>
> My mental picture is that without vmd.c, Linux would enumerate the VMD
> RCiEP itself, but none of the devices below the VMD would be visible.
> With vmd.c, devices below the VMD RCiEP are visible.  Maybe this
> picture is incorrect or too simple?

It doesn't reflect what really happens AFAICS.  The devices that
appear to be located below the VMD RCiEP are not there physically.

> Apparently there's a firmware toggle, but I don't know exactly what it
> does.  Maybe if the toggle is set to disable VMD, the VMD device looks
> like a regular Root Port and the devices below are enumerated
> normally even without any vmd.c?

If the toggle is set to disable VMD, all of the devices that would be
hidden by the firmware and only visible through the VMD mechanisms
show up in their proper locations in the original PCIe hierarchy that
they belong to physically.

> > > But I think one implication of this is that we cannot support
> > > hot-added VMDs.  For example, firmware that wants to manage AER will
> > > use _OSC to retain AER control.  But if the firmware doesn't know how
> > > VMDs work, it will not be able to handle AER for devices below the
> > > VMD.
> >
> > Well, the firmware needs to know how stuff works to hide it in the
> > first place ...
>
> [1] does also say that VMD is a Root Complex *Integrated* Endpoint,
> which could not be hotplugged.  But I don't see anything in the code
> that actually enforces or requires that, so I don't know what to make
> of it.

[1] is correct.

> If it's possible to hot-add a VMD device, firmware wouldn't be
> involved in configuring it (assuming pciehp hotplug).  I assume the
> new VMD would look like an Endpoint, and if vmd.c is present, maybe it
> could construct a new hierarchy below that Endpoint?  In that case, we
> have to assume firmware doesn't know how to operate VMD, so even if
> firmware manages AER in general, it wouldn't be able to do it for
> things below the VMD.

No, this really is only about re-exposing some of the existing PCIe
hierarchy that was hidden by the firmware from the OS.  Physically, it
is still the same hierarchy all the time.

> > > > > If an error occurs below a VMD, who notices and reports it?  If we
> > > > > disable native AER below VMD because of _OSC, as this patch does, I
> > > > > guess we're assuming the platform will handle AER events below VMD.
> > > > > Is that really true?  Does the platform know how to find AER log
> > > > > registers of devices below VMD?
> > > > >
> > > > > > The platform firmware does that through ACPI _OSC under the host
> > > > > > bridge device (not under the VMD device) which it is very well aware
> > > > > > of.
>
> [1] https://git.kernel.org/linus/185a383ada2e

Patch

diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index a45e8e59d3d48..691765e6c12aa 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -661,6 +661,21 @@  static int vmd_alloc_irqs(struct vmd_dev *vmd)
 	return 0;
 }
 
+/*
+ * Since VMD is an aperture to regular PCIe root ports, only allow it to
+ * control features that the OS is allowed to control on the physical PCI bus.
+ */
+static void vmd_copy_host_bridge_flags(struct pci_host_bridge *root_bridge,
+				       struct pci_host_bridge *vmd_bridge)
+{
+	vmd_bridge->native_pcie_hotplug = root_bridge->native_pcie_hotplug;
+	vmd_bridge->native_shpc_hotplug = root_bridge->native_shpc_hotplug;
+	vmd_bridge->native_aer = root_bridge->native_aer;
+	vmd_bridge->native_pme = root_bridge->native_pme;
+	vmd_bridge->native_ltr = root_bridge->native_ltr;
+	vmd_bridge->native_dpc = root_bridge->native_dpc;
+}
+
 static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 {
 	struct pci_sysdata *sd = &vmd->sysdata;
@@ -798,6 +813,9 @@  static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 		return -ENODEV;
 	}
 
+	vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
+				   to_pci_host_bridge(vmd->bus->bridge));
+
 	vmd_attach_resources(vmd);
 	if (vmd->irq_domain)
 		dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);