Message ID: 1692120000-46900-1-git-send-email-lizhi.hou@amd.com
Series: Generate device tree node for pci devices
Hi Geert,

Thanks for reviewing the patch. I added my comments in-line.

On 8/24/23 01:31, Geert Uytterhoeven wrote:
> Hi Lizhi,
>
> On Tue, 15 Aug 2023, Lizhi Hou wrote:
>> Currently, an overlay fdt fragment needs to specify the exact
>> location in the base DT. In other words, when the fdt fragment is
>> generated, the base DT location for the fragment is already known.
>>
>> There is a new use case where the base DT location is unknown when the
>> fdt fragment is generated. For example, an add-on device provides an
>> fdt overlay with its firmware to describe its downstream devices.
>> Because the add-on device can be plugged into different systems, its
>> firmware will not be able to know the overlay location in the base DT.
>> Instead, the device driver will load the overlay fdt and apply it to
>> the base DT at runtime.
>>
>> In this case, of_overlay_fdt_apply() needs to be extended to specify
>> the target node for the device driver to apply the overlay fdt:
>>
>>   int of_overlay_fdt_apply(..., struct device_node *base);
>>
>> Signed-off-by: Lizhi Hou <lizhi.hou@amd.com>
>
> Thanks for your patch, which is now commit 47284862bfc7fd56 ("of:
> overlay: Extend of_overlay_fdt_apply()") in dt-rh/for-next.
>
>> --- a/drivers/of/overlay.c
>> +++ b/drivers/of/overlay.c
>> @@ -715,6 +730,7 @@ static struct device_node *find_target(struct device_node *info_node)
>>  /**
>>   * init_overlay_changeset() - initialize overlay changeset from overlay tree
>>   * @ovcs: Overlay changeset to build
>> + * @target_base: Point to the target node to apply overlay
>>   *
>>   * Initialize @ovcs. Populate @ovcs->fragments with node information from
>>   * the top level of @overlay_root. The relevant top level nodes are the
>
> As an overlay can contain one or more fragments, this means the
> base (when specified) will be applied to all fragments, and will thus
> override the target-path properties in all fragments.
>
> However, for the use case of an overlay that you can plug into
> a random location (and of which there can be multiple instances),
> there can really be only a single fragment. Even nodes that typically
> live at the root level (e.g. gpio-leds or gpio-keys) must be inserted
> below the specified location, to avoid conflicts.
>
> Hence:
> 1. Should init_overlay_changeset() return -EINVAL if target_base is
>    specified, and there is more than one fragment?

Maybe allowing more than one fragment makes the interface more generic?
For example, it could support the use case where multiple fragments share
the same base node.

Currently, the fragment overlay path is "base node path" + "fragment
target path". Thus, for the structure:

  /a/b/c/fragment0
  /a/b/d/fragment1

it can be two fragments in one fdt by using:

  base node path = /a/b
  fragment0 target path = /c
  fragment1 target path = /d

I am not sure whether this kind of use case will exist or not, and I
think it would not hurt to allow it.

> 2. Should there be a convention about the target-path property's
>    contents in the original overlay?
>    drivers/of/unittest-data/overlay_pci_node.dtso in "[PATCH V13 5/5]
>    of: unittest: Add pci_dt_testdrv pci driver" uses
>
>        target-path="";
>
>    which cannot be represented when using sugar syntax.
>    "/" should work fine, though.

Because the fragment overlay path is "base node path" + "fragment target
path", I may add code to check whether the fragment target path is "/"
and ignore it. I think that would support sugar syntax with only "/"
specified.

Thanks,
Lizhi

> Thanks!
>
> Gr{oetje,eeting}s,
>
>                         Geert
>
> --
> Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org
>
> In personal conversations with technical people, I call myself a hacker. But
> when I'm talking to journalists I just say "programmer" or something like that.
>                                 -- Linus Torvalds
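Lizhi's /a/b example above could be written as a two-fragment overlay along the following lines. This is a hypothetical sketch: the node and compatible names are made up, and under the proposed semantics the target-path values are interpreted relative to the base node passed to of_overlay_fdt_apply() at runtime.

```dts
/dts-v1/;
/plugin/;

/* Hypothetical overlay: with of_overlay_fdt_apply(..., base) pointing at
 * /a/b, fragment@0 lands at /a/b/c and fragment@1 at /a/b/d. */
/ {
	fragment@0 {
		target-path = "/c";	/* relative to the supplied base node */
		__overlay__ {
			fragment0-node {
				compatible = "vendor,example-c";
			};
		};
	};

	fragment@1 {
		target-path = "/d";
		__overlay__ {
			fragment1-node {
				compatible = "vendor,example-d";
			};
		};
	};
};
```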
On Thu, Aug 24, 2023 at 1:40 PM Lizhi Hou <lizhi.hou@amd.com> wrote:
>
> Hi Geert,
>
> Thanks for reviewing the patch. I added my comments in-line.
>
> On 8/24/23 01:31, Geert Uytterhoeven wrote:
> > Hi Lizhi,
> >
> > On Tue, 15 Aug 2023, Lizhi Hou wrote:
> >> Currently, an overlay fdt fragment needs to specify the exact
> >> location in the base DT. In other words, when the fdt fragment is
> >> generated, the base DT location for the fragment is already known.
> >>
> >> There is a new use case where the base DT location is unknown when the
> >> fdt fragment is generated. For example, an add-on device provides an
> >> fdt overlay with its firmware to describe its downstream devices.
> >> Because the add-on device can be plugged into different systems, its
> >> firmware will not be able to know the overlay location in the base DT.
> >> Instead, the device driver will load the overlay fdt and apply it to
> >> the base DT at runtime.
> >>
> >> In this case, of_overlay_fdt_apply() needs to be extended to specify
> >> the target node for the device driver to apply the overlay fdt:
> >>
> >>   int of_overlay_fdt_apply(..., struct device_node *base);
> >>
> >> Signed-off-by: Lizhi Hou <lizhi.hou@amd.com>
> >
> > Thanks for your patch, which is now commit 47284862bfc7fd56 ("of:
> > overlay: Extend of_overlay_fdt_apply()") in dt-rh/for-next.
> >
> >> --- a/drivers/of/overlay.c
> >> +++ b/drivers/of/overlay.c
> >> @@ -715,6 +730,7 @@ static struct device_node *find_target(struct device_node *info_node)
> >>  /**
> >>   * init_overlay_changeset() - initialize overlay changeset from overlay tree
> >>   * @ovcs: Overlay changeset to build
> >> + * @target_base: Point to the target node to apply overlay
> >>   *
> >>   * Initialize @ovcs. Populate @ovcs->fragments with node information from
> >>   * the top level of @overlay_root. The relevant top level nodes are the
> >
> > As an overlay can contain one or more fragments, this means the
> > base (when specified) will be applied to all fragments, and will thus
> > override the target-path properties in all fragments.
> >
> > However, for the use case of an overlay that you can plug into
> > a random location (and of which there can be multiple instances),
> > there can really be only a single fragment. Even nodes that typically
> > live at the root level (e.g. gpio-leds or gpio-keys) must be inserted
> > below the specified location, to avoid conflicts.

It's not a random location, but a location where the full path and/or
unit-address are not known. What we should know is the node's base name
and compatible. I think we can assume for this kind of use case that
adding nodes only under a defined base node is allowed. This is also
just the restriction I've asked for every time more general support of
applying overlays by the kernel is requested. The add-on card, hat,
cape, etc. use cases should all be applied downstream of some node.

> > Hence:
> > 1. Should init_overlay_changeset() return -EINVAL if target_base is
> >    specified, and there is more than one fragment?
>
> Maybe allowing more than one fragment makes the interface more generic?
> For example, it could support the use case where multiple fragments share
> the same base node.
>
> Currently, the fragment overlay path is "base node path" + "fragment
> target path". Thus, for the structure:
>
>   /a/b/c/fragment0
>   /a/b/d/fragment1
>
> it can be two fragments in one fdt by using:
>
>   base node path = /a/b
>   fragment0 target path = /c
>   fragment1 target path = /d
>
> I am not sure whether this kind of use case will exist or not, and I
> think it would not hurt to allow it.
>
> > 2. Should there be a convention about the target-path property's
> >    contents in the original overlay?
> >    drivers/of/unittest-data/overlay_pci_node.dtso in "[PATCH V13 5/5]
> >    of: unittest: Add pci_dt_testdrv pci driver" uses
> >
> >        target-path="";
> >
> >    which cannot be represented when using sugar syntax.
> >    "/" should work fine, though.
>
> Because the fragment overlay path is "base node path" + "fragment target
> path", I may add code to check whether the fragment target path is "/"
> and ignore it. I think that would support sugar syntax with only "/"
> specified.

Note that "/" is also a valid target path. I think it would be better to
have a form that's obviously not a fixed path. I think what's needed is
to be able to specify just the nodename, with or without the
unit-address. I don't know if dtc will accept that.

As labels are part of the ABI with overlays, a target label could also
work. Though the kernel would have to learn to add new labels or get a
label path from another source, as a label doesn't exist on a generated
node.

Rob
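Geert's sugar-syntax point can be illustrated with a sketch: dtc's `&{/}` overlay sugar compiles to a fragment whose target-path is "/", whereas no sugar form produces target-path = "" — only the explicit fragment/__overlay__ form can express an empty target path. The node and compatible names below are made up for illustration.

```dts
/dts-v1/;
/plugin/;

/* dtc turns this into fragment@0 { target-path = "/"; __overlay__ {...}; },
 * which the proposed "/"-ignoring rule would place directly under the base
 * node handed to of_overlay_fdt_apply(). */
&{/} {
	example-device {
		compatible = "vendor,example";
	};
};
```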
Hi Lizhi,

On Thu, Aug 24, 2023 at 8:40 PM Lizhi Hou <lizhi.hou@amd.com> wrote:
> On 8/24/23 01:31, Geert Uytterhoeven wrote:
> > On Tue, 15 Aug 2023, Lizhi Hou wrote:
> >> Currently, an overlay fdt fragment needs to specify the exact
> >> location in the base DT. In other words, when the fdt fragment is
> >> generated, the base DT location for the fragment is already known.
> >>
> >> There is a new use case where the base DT location is unknown when the
> >> fdt fragment is generated. For example, an add-on device provides an
> >> fdt overlay with its firmware to describe its downstream devices.
> >> Because the add-on device can be plugged into different systems, its
> >> firmware will not be able to know the overlay location in the base DT.
> >> Instead, the device driver will load the overlay fdt and apply it to
> >> the base DT at runtime.
> >>
> >> In this case, of_overlay_fdt_apply() needs to be extended to specify
> >> the target node for the device driver to apply the overlay fdt:
> >>
> >>   int of_overlay_fdt_apply(..., struct device_node *base);
> >>
> >> Signed-off-by: Lizhi Hou <lizhi.hou@amd.com>
> >
> > Thanks for your patch, which is now commit 47284862bfc7fd56 ("of:
> > overlay: Extend of_overlay_fdt_apply()") in dt-rh/for-next.
> >
> >> --- a/drivers/of/overlay.c
> >> +++ b/drivers/of/overlay.c
> >> @@ -715,6 +730,7 @@ static struct device_node *find_target(struct device_node *info_node)
> >>  /**
> >>   * init_overlay_changeset() - initialize overlay changeset from overlay tree
> >>   * @ovcs: Overlay changeset to build
> >> + * @target_base: Point to the target node to apply overlay
> >>   *
> >>   * Initialize @ovcs. Populate @ovcs->fragments with node information from
> >>   * the top level of @overlay_root. The relevant top level nodes are the
> >
> > As an overlay can contain one or more fragments, this means the
> > base (when specified) will be applied to all fragments, and will thus
> > override the target-path properties in all fragments.
> >
> > However, for the use case of an overlay that you can plug into
> > a random location (and of which there can be multiple instances),
> > there can really be only a single fragment. Even nodes that typically
> > live at the root level (e.g. gpio-leds or gpio-keys) must be inserted
> > below the specified location, to avoid conflicts.
> >
> > Hence:
> > 1. Should init_overlay_changeset() return -EINVAL if target_base is
> >    specified, and there is more than one fragment?
>
> Maybe allowing more than one fragment makes the interface more generic?
> For example, it could support the use case where multiple fragments share
> the same base node.
>
> Currently, the fragment overlay path is "base node path" + "fragment
> target path". Thus, for the structure:

Oh, I had missed that the "fragment target path" is appended, and
thought it was just overridden.

>   /a/b/c/fragment0
>   /a/b/d/fragment1
>
> It can be two fragments in one fdt by using:
>
>   base node path = /a/b
>   fragment0 target path = /c
>   fragment1 target path = /d
>
> I am not sure whether this kind of use case will exist or not, and I
> think it would not hurt to allow it.

Is there a need for that? Both c and d can be handled as subnodes in a
single fragment if the target path is empty (and see below).

> > 2. Should there be a convention about the target-path property's
> >    contents in the original overlay?
> >    drivers/of/unittest-data/overlay_pci_node.dtso in "[PATCH V13 5/5]
> >    of: unittest: Add pci_dt_testdrv pci driver" uses
> >
> >        target-path="";
> >
> >    which cannot be represented when using sugar syntax.
> >    "/" should work fine, though.
>
> Because the fragment overlay path is "base node path" + "fragment target
> path", I may add code to check whether the fragment target path is "/"
> and ignore it. I think that would support sugar syntax with only "/"
> specified.

That makes sense.

Thanks!

Gr{oetje,eeting}s,

                        Geert
On Tue, Aug 15, 2023 at 10:19:55AM -0700, Lizhi Hou wrote:
> This patch series introduces OF overlay support for PCI devices, which
> primarily addresses two use cases. First, it provides a data-driven method
> to describe hardware peripherals that are present in a PCI endpoint and
> hence can be accessed by the PCI host. Second, it allows reuse of an OF
> compatible driver -- often used in SoC platforms -- in a PCI host based
> system.
>
> There are 2 series of devices that rely on this patch:
>
> 1) Xilinx Alveo Accelerator cards (FPGA based device)
> 2) Microchip LAN9662 Ethernet Controller
>
> Please see: https://lore.kernel.org/lkml/20220427094502.456111-1-clement.leger@bootlin.com/
>
> Normally, the PCI core discovers PCI devices and their BARs using the
> PCI enumeration process. However, the process does not provide a way to
> discover the hardware peripherals that are present in a PCI device, and
> which can be accessed through the PCI BARs. Also, the enumeration process
> does not provide a way to associate MSI-X vectors of a PCI device with the
> hardware peripherals that are present in the device. PCI device drivers
> often use header files to describe the hardware peripherals and their
> resources, as there is no standard data-driven way to do so. This patch
> series proposes to use a flattened device tree blob to describe the
> peripherals in a data-driven way. Based on previous discussion, using
> a device tree overlay is the best way to unflatten the blob and populate
> platform devices. To use a device tree overlay, there are three obvious
> problems that need to be resolved.
>
> First, we need to create a base tree for non-DT systems such as x86_64. A
> patch series has been submitted for this:
> https://lore.kernel.org/lkml/20220624034327.2542112-1-frowand.list@gmail.com/
> https://lore.kernel.org/lkml/20220216050056.311496-1-lizhi.hou@xilinx.com/
>
> Second, a device tree node corresponding to the PCI endpoint is required
> for overlaying the flattened device tree blob for that PCI endpoint.
> Because PCI is a self-discoverable bus, a device tree node is usually not
> created for PCI devices. This series adds support to generate a device
> tree node for a PCI device which advertises itself using the PCI quirks
> infrastructure.
>
> Third, we need to generate device tree nodes for PCI bridges, since a child
> PCI endpoint may choose to have a device tree node created.
>
> This patch series is made up of three patches.
>
> The first patch adds an OF interface to create or destroy OF nodes
> dynamically.
>
> The second patch introduces a kernel option, CONFIG_PCI_DYNAMIC_OF_NODES.
> When the option is turned on, the kernel will generate device tree nodes
> for all PCI bridges unconditionally. The patch also shows how to use the
> PCI quirks infrastructure, DECLARE_PCI_FIXUP_FINAL, to generate a device
> tree node for a device. Specifically, the patch generates a device tree
> node for the Xilinx Alveo U50 PCIe accelerator device. The generated
> device tree nodes do not have any properties.
>
> The third patch adds basic properties ('reg', 'compatible' and
> 'device_type') to the dynamically generated device tree nodes. More
> properties can be added in the future.

In my opinion this series needs much more work (esp. the cleanup one) to
not look like NIH here and there.
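With CONFIG_PCI_DYNAMIC_OF_NODES enabled, a generated bridge-plus-endpoint subtree carrying the third patch's basic properties might look roughly like the following sketch. All addresses, node names, and the compatible string are illustrative assumptions (the "pciVVVV,DDDD" form is the generic PCI compatible convention), not values taken from the patches:

```dts
pci@0,0 {		/* PCI-PCI bridge node generated during enumeration */
	device_type = "pci";
	#address-cells = <3>;
	#size-cells = <2>;
	reg = <0x0 0x0 0x0 0x0 0x0>;

	dev@0,0 {	/* endpoint node requested via a DECLARE_PCI_FIXUP_FINAL quirk */
		compatible = "pci10ee,0000";	/* hypothetical Xilinx vendor/device pair */
		reg = <0x10000 0x0 0x0 0x0 0x0>;
	};
};
```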
Hi Andy,

On Wed, 13 Sep 2023 14:17:30 +0300
Andy Shevchenko <andriy.shevchenko@intel.com> wrote:

> On Tue, Sep 12, 2023 at 02:12:04PM -0500, Rob Herring wrote:
> > On Mon, Sep 11, 2023 at 3:37 PM Andy Shevchenko
> > <andriy.shevchenko@intel.com> wrote:
> > > On Tue, Aug 15, 2023 at 10:19:55AM -0700, Lizhi Hou wrote:
>
> ...
>
> > > Can you point out to the ACPI excerpt(s) of the description of anything
> > > related to the device(s) in question?
> >
> > I don't understand what you are asking for.
>
> Through the email thread it was mentioned that this series was tested on an
> ACPI enabled platform. Jonathan (IIRC) asked why we need to have a shadow
> DT for something that ACPI already describes. That's why I'm trying to
> understand if it's the case, and if so, how we can improve the approach.

Patches from Frank Rowand's series [1] are needed to create an of_root node
if a DT was not provided by the firmware, bootloader, etc. that runs the
kernel.

[1]: https://lore.kernel.org/lkml/20220624034327.2542112-1-frowand.list@gmail.com/

Lizhi's current series creates nodes from the PCI host node during the PCI
enumeration. It creates PCI-PCI bridge and PCI device nodes.

I use these series on an ACPI system.

I need one more missing component: the node related to the PCI host bridge.
This was the purpose of Clement's work. This work has not been sent upstream
yet, and I am working on it in order to have a full tree from the of_root to
the PCI device, i.e.:

  of_root                <-- Frank Rowand's series
  + of_host_pci_bridge   <-- Clement's work
    + pci_bridge         <-- Current Lizhi series
      + pci_bridge       <-- Current Lizhi series
      ...
      + pci_dev          <-- Current Lizhi series

Hope that this status helped.

Regards,
Hervé
On Fri, Sep 15, 2023 at 07:30:08PM +0200, Herve Codina wrote:
> On Wed, 13 Sep 2023 14:17:30 +0300
> Andy Shevchenko <andriy.shevchenko@intel.com> wrote:
> > On Tue, Sep 12, 2023 at 02:12:04PM -0500, Rob Herring wrote:
> > > On Mon, Sep 11, 2023 at 3:37 PM Andy Shevchenko
> > > <andriy.shevchenko@intel.com> wrote:
> > > > On Tue, Aug 15, 2023 at 10:19:55AM -0700, Lizhi Hou wrote:

...

> > > > Can you point out to the ACPI excerpt(s) of the description of anything
> > > > related to the device(s) in question?
> > >
> > > I don't understand what you are asking for.
> >
> > Through the email thread it was mentioned that this series was tested on an
> > ACPI enabled platform. Jonathan (IIRC) asked why we need to have a shadow
> > DT for something that ACPI already describes. That's why I'm trying to
> > understand if it's the case, and if so, how we can improve the approach.
>
> Patches from Frank Rowand's series [1] are needed to create an of_root node
> if a DT was not provided by the firmware, bootloader, etc. that runs the
> kernel.
>
> [1]: https://lore.kernel.org/lkml/20220624034327.2542112-1-frowand.list@gmail.com/
>
> Lizhi's current series creates nodes from the PCI host node during the PCI
> enumeration. It creates PCI-PCI bridge and PCI device nodes.
>
> I use these series on an ACPI system.
>
> I need one more missing component: the node related to the PCI host bridge.
> This was the purpose of Clement's work. This work has not been sent upstream
> yet, and I am working on it in order to have a full tree from the of_root to
> the PCI device, i.e.:
>
>   of_root                <-- Frank Rowand's series
>   + of_host_pci_bridge   <-- Clement's work
>     + pci_bridge         <-- Current Lizhi series
>       + pci_bridge       <-- Current Lizhi series
>       ...
>       + pci_dev          <-- Current Lizhi series
>
> Hope that this status helped.

Thanks for the explanation! I suppose it's better to have the three series
combined into one and sent with a better cover letter to explain all this.

Also it might make sense (in my opinion) to Cc Jonathan (I did it here).
Sorry, Jonathan, if you did not want this.
On Mon, Sep 18, 2023 at 2:17 AM Andy Shevchenko
<andriy.shevchenko@intel.com> wrote:
>
> On Fri, Sep 15, 2023 at 07:30:08PM +0200, Herve Codina wrote:
> > On Wed, 13 Sep 2023 14:17:30 +0300
> > Andy Shevchenko <andriy.shevchenko@intel.com> wrote:
> > > On Tue, Sep 12, 2023 at 02:12:04PM -0500, Rob Herring wrote:
> > > > On Mon, Sep 11, 2023 at 3:37 PM Andy Shevchenko
> > > > <andriy.shevchenko@intel.com> wrote:
> > > > > On Tue, Aug 15, 2023 at 10:19:55AM -0700, Lizhi Hou wrote:
>
> ...
>
> > > > > Can you point out to the ACPI excerpt(s) of the description of
> > > > > anything related to the device(s) in question?
> > > >
> > > > I don't understand what you are asking for.
> > >
> > > Through the email thread it was mentioned that this series was tested on
> > > an ACPI enabled platform. Jonathan (IIRC) asked why we need to have a
> > > shadow DT for something that ACPI already describes. That's why I'm
> > > trying to understand if it's the case, and if so, how we can improve
> > > the approach.
> >
> > Patches from Frank Rowand's series [1] are needed to create an of_root
> > node if a DT was not provided by the firmware, bootloader, etc. that runs
> > the kernel.
> >
> > [1]: https://lore.kernel.org/lkml/20220624034327.2542112-1-frowand.list@gmail.com/
> >
> > Lizhi's current series creates nodes from the PCI host node during the
> > PCI enumeration. It creates PCI-PCI bridge and PCI device nodes.
> >
> > I use these series on an ACPI system.
> >
> > I need one more missing component: the node related to the PCI host
> > bridge. This was the purpose of Clement's work. This work has not been
> > sent upstream yet, and I am working on it in order to have a full tree
> > from the of_root to the PCI device, i.e.:
> >
> >   of_root                <-- Frank Rowand's series
> >   + of_host_pci_bridge   <-- Clement's work
> >     + pci_bridge         <-- Current Lizhi series
> >       + pci_bridge       <-- Current Lizhi series
> >       ...
> >       + pci_dev          <-- Current Lizhi series
> >
> > Hope that this status helped.
>
> Thanks for the explanation! I suppose it's better to have the three series
> combined into one and sent with a better cover letter to explain all this.

You can go back (years now) and see that. I asked for this to be split up
into manageable chunks and not solve multiple problems at once. No point in
trying to do DT on top of ACPI if DT on top of DT doesn't work first.

Rob