Message ID: cover.1703229766.git.zhoubinbin@loongson.cn
Series: LoongArch: Add built-in dtb support
Hi Conor:

Sorry for the late reply.

On Fri, Dec 22, 2023 at 9:39 PM Conor Dooley <conor@kernel.org> wrote:
>
> Hey Binbin,
>
> On Fri, Dec 22, 2023 at 04:00:43PM +0800, Binbin Zhou wrote:
> > Hi all:
> >
> > This patchset introduces LoongArch's built-in dtb support.
> >
> > During the upstream progress of those DT-based drivers, DT properties
> > changed a lot, so they are very different from those in existing
> > bootloaders. It is inevitable that some existing systems do not
> > provide a standard, canonical device tree to the kernel at boot time.
> > So let's provide a device tree table in the kernel, keyed by the dts
> > filename, containing the relevant DTBs.
> >
> > We can use the built-in dts files as references. Each SoC has only one
> > built-in dts file, which describes all possible device information of
> > that SoC, so the dts files are good examples during development.
> >
> > And as a reference, our built-in dts file only enables the most basic
> > bootable combinations (so it is generic enough), and acts as an
> > alternative in case the dts in the bootloader is unexpected.
> >
> > Recently we resolved the DTC_CHK warnings for the v4 patchset, and the
> > relevant patches have either been applied or had Reviewed-by tags
> > added.
>
> I notice you dropped the topology information from all patches in the
> series, not only the 2k0500 patch that only has one CPU. I didn't see a
> response to my comments about the kernel being able to assemble the
> topology, based on the second-level caches, using the generic topology
> code for the systems that have more than one CPU. With the cpu-map
> information dropped, do the multi-CPU systems have their topologies
> assembled correctly by the kernel?

As we saw previously, our DT-based systems only support a single CPU
cluster, and multi-cluster CPUs are not in our plans.
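For readers unfamiliar with the mechanism the cover letter describes, the idea of a kernel-side table of built-in DTBs keyed by dts filename can be sketched roughly as below. The struct, table entries, and lookup helper are illustrative only, not the actual in-tree implementation; the blobs here are placeholders standing in for compiled DTBs.

```c
#include <stddef.h>
#include <string.h>

/* One entry per built-in DTB: the dts filename it was built from,
 * plus a pointer to the compiled blob linked into the kernel image. */
struct builtin_dtb {
	const char *name;  /* dts filename used as the lookup key */
	const void *blob;  /* compiled DTB (placeholder bytes here) */
};

/* Placeholder blobs; a real DTB starts with the 0xd00dfeed magic. */
static const char ls2k0500_dtb[] = "\xd0\x0d\xfe\xed";
static const char ls2k1000_dtb[] = "\xd0\x0d\xfe\xed";

static const struct builtin_dtb dtb_table[] = {
	{ "loongson-2k0500-ref.dts", ls2k0500_dtb },
	{ "loongson-2k1000-ref.dts", ls2k1000_dtb },
};

/* Return the built-in DTB matching the given dts filename, or NULL so
 * the caller can fall back to whatever the bootloader provided. */
static const void *find_builtin_dtb(const char *name)
{
	for (size_t i = 0; i < sizeof(dtb_table) / sizeof(dtb_table[0]); i++)
		if (strcmp(dtb_table[i].name, name) == 0)
			return dtb_table[i].blob;
	return NULL;
}
```

This matches the cover letter's framing: the built-in dts only covers the basic bootable configuration, so the table acts as a safety net when the bootloader's device tree is unusable, not as a replacement for a proper bootloader-supplied one.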
> You mentioned that there is an instruction that allows you to get
> information about i and d caches etc, so adding them to the DT is not
> required, but does it also cover the next-level caches?

Firstly, sorry for my previous mistake about the cache reads.

`cpucfg` is actually a set of registers that describes the features of
the CPU, including the CPU caches [1]. `populate_cache_properties()`
reads all levels of cache information [2], including, of course, the
next-level cache if it exists.

[1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/loongarch/include/asm/loongarch.h#n765
[2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/loongarch/mm/cache.c#n94

> The program that I am familiar with for displaying this information
> is hwloc: https://github.com/open-mpi/hwloc

Ah, yes, I tried looking at the `hwloc-ls` output before committing,
and it's below (LS2K1000):

[root@fedora ~]# hwloc-ls
Machine (7730MB total)
  Package L#0
    NUMANode L#0 (P#0 7730MB)
    L2 L#0 (1024KB)
      L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
      L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#1)

It's the same as what we actually have.

Thanks.
Binbin

> Cheers,
> Conor.
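As background for the `cpucfg` discussion above, a cache descriptor of this kind typically encodes a ways-minus-one count, log2 of the set count, and log2 of the line size, from which the total size falls out. The real field layout lives in arch/loongarch/include/asm/loongarch.h; the helper below is a sketch of the arithmetic only, and the exact bit positions are deliberately left out as they should be taken from that header.

```c
#include <stdint.h>

/* Compute a cache's total size from cpucfg-style descriptor fields:
 * (ways - 1), log2(number of sets), log2(line size in bytes).
 * size = ways * sets * line_size */
static uint32_t cache_size_bytes(uint32_t ways_m1, uint32_t log2_sets,
				 uint32_t log2_linesize)
{
	return (ways_m1 + 1) * (1u << log2_sets) * (1u << log2_linesize);
}
```

For example, a 16-way, 1024-set, 64-byte-line L2 works out to 1024 KB, and a 4-way, 128-set, 64-byte-line L1 to 32 KB, consistent with the `hwloc-ls` output shown for the LS2K1000 (the way/set splits here are illustrative, not read from the hardware).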
On Wed, Dec 27, 2023 at 12:04:59PM +0600, Binbin Zhou wrote:
>
> Ah, yes, I tried looking at the `hwloc-ls` output before committing,
> and it's below (LS2K1000):
>
> [root@fedora ~]# hwloc-ls
> Machine (7730MB total)
>   Package L#0
>     NUMANode L#0 (P#0 7730MB)
>     L2 L#0 (1024KB)
>       L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
>       L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#1)
>
> It's the same as what we actually have.

Yeah, that looks to be about what I would expect, thanks.
This series is good enough for me; I will apply it to the loongarch
tree after [1] is merged.

[1] https://lore.kernel.org/loongarch/cover.1701933946.git.zhoubinbin@loongson.cn/T/#t

Huacai

On Thu, Dec 28, 2023 at 10:09 PM Conor Dooley <conor@kernel.org> wrote:
>
> On Wed, Dec 27, 2023 at 12:04:59PM +0600, Binbin Zhou wrote:
> >
> > Ah, yes, I tried looking at the `hwloc-ls` output before committing,
> > and it's below (LS2K1000):
> >
> > [root@fedora ~]# hwloc-ls
> > Machine (7730MB total)
> >   Package L#0
> >     NUMANode L#0 (P#0 7730MB)
> >     L2 L#0 (1024KB)
> >       L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
> >       L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#1)
> >
> > It's the same as what we actually have.
>
> Yeah, that looks to be about what I would expect, thanks.
Applied to loongarch-next, thanks.

Huacai

On Fri, Dec 29, 2023 at 11:10 PM Huacai Chen <chenhuacai@kernel.org> wrote:
>
> This series is good enough for me, I will apply it to the loongarch
> tree after [1] is merged.
>
> [1] https://lore.kernel.org/loongarch/cover.1701933946.git.zhoubinbin@loongson.cn/T/#t
>
> Huacai
>
> On Thu, Dec 28, 2023 at 10:09 PM Conor Dooley <conor@kernel.org> wrote:
> >
> > On Wed, Dec 27, 2023 at 12:04:59PM +0600, Binbin Zhou wrote:
> > >
> > > Ah, yes, I tried looking at the `hwloc-ls` output before committing,
> > > and it's below (LS2K1000):
> > >
> > > [root@fedora ~]# hwloc-ls
> > > Machine (7730MB total)
> > >   Package L#0
> > >     NUMANode L#0 (P#0 7730MB)
> > >     L2 L#0 (1024KB)
> > >       L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
> > >       L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#1)
> > >
> > > It's the same as what we actually have.
> >
> > Yeah, that looks to be about what I would expect, thanks.
Hi, Krzysztof,

On Tue, Jan 9, 2024 at 7:14 PM Krzysztof Kozlowski
<krzysztof.kozlowski@linaro.org> wrote:
>
> On 09/01/2024 10:57, Huacai Chen wrote:
> > Applied to loongarch-next, thanks.
>
> It's merge window, why do you apply patches? For which cycle?

I'm sorry I forgot to reply to the email when I applied the patches;
they had already been pulled into linux-next some days before.

Huacai

> Best regards,
> Krzysztof
On 09/01/2024 13:13, Huacai Chen wrote:
> Hi, Krzysztof,
>
> On Tue, Jan 9, 2024 at 7:14 PM Krzysztof Kozlowski
> <krzysztof.kozlowski@linaro.org> wrote:
>>
>> On 09/01/2024 10:57, Huacai Chen wrote:
>>> Applied to loongarch-next, thanks.
>>
>> It's merge window, why do you apply patches? For which cycle?
> I'm sorry I forgot to reply to the email when I applied the patches;
> they had already been pulled into linux-next some days before.

Really? I cannot find them in next-20240108, so what happened?

Are you aware that patches should be in next for a few next cycles
minimum (which means a few days or even a week)?

Best regards,
Krzysztof
Hi, Krzysztof,

On Tue, Jan 9, 2024 at 9:33 PM Krzysztof Kozlowski
<krzysztof.kozlowski@linaro.org> wrote:
>
> On 09/01/2024 13:13, Huacai Chen wrote:
> > Hi, Krzysztof,
> >
> > On Tue, Jan 9, 2024 at 7:14 PM Krzysztof Kozlowski
> > <krzysztof.kozlowski@linaro.org> wrote:
> >>
> >> On 09/01/2024 10:57, Huacai Chen wrote:
> >>> Applied to loongarch-next, thanks.
> >>
> >> It's merge window, why do you apply patches? For which cycle?
> > I'm sorry I forgot to reply to the email when I applied the patches;
> > they had already been pulled into linux-next some days before.
>
> Really? I cannot find them in next-20240108, so what happened?

Hmm, I applied the patches two days ago, and they were only pulled into
next-20240109.

> Are you aware that patches should be in next for a few next cycles
> minimum (which means a few days or even a week)?

Thank you for the reminder; my mentor already told me this when I sent
my first PR. So I will wait until next week to send the PR for this
series.

Huacai

> Best regards,
> Krzysztof