[2/6] cacheinfo: Set cache 'id' based on DT data

Message ID 20211216233125.1130793-3-robh@kernel.org
State New
Series cacheinfo: CPU affinity and Devicetree 'id' support

Commit Message

Rob Herring Dec. 16, 2021, 11:31 p.m. UTC
Use the minimum CPU h/w id of the CPUs associated with the cache for the
cache 'id'. This will provide a stable id value for a given system. As
we need to check all possible CPUs, we can't use the shared_cpu_map
which is just online CPUs. There's not a cache to CPUs mapping in DT, so
we have to walk all CPU nodes and then walk cache levels.
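
As an illustration (hypothetical topology, not from any real system): on a
two-cluster system where CPUs with h/w ids 0x0 and 0x1 share one L2 and CPUs
0x100 and 0x101 share another, the two L2 caches get ids 0x0 and 0x100. The
value lands in the existing cacheinfo sysfs 'id' attribute; a minimal
userspace sketch for reading it back:

#include <stdio.h>

/* Read the cacheinfo 'id' attribute for a given CPU and cache index.
 * Returns the id, or -1 if the attribute is absent (CACHE_ID not set). */
static long read_cache_id(int cpu, int index)
{
	char path[96];
	long id = -1;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/cache/index%d/id", cpu, index);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &id) != 1)
		id = -1;
	fclose(f);
	return id;
}

int main(void)
{
	/* indexN enumeration order is not fixed; index2 is often the L2 */
	printf("cpu0/index2 id: %ld\n", read_cache_id(0, 2));
	return 0;
}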

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Signed-off-by: Rob Herring <robh@kernel.org>
---
v2:
 - Loop with for_each_possible_cpu instead of for_each_of_cpu_node as
   we will need the logical cpu numbers.
---
 drivers/base/cacheinfo.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

Comments

Robin Murphy Dec. 17, 2021, 4:57 p.m. UTC | #1
Hi Rob,

On 2021-12-16 23:31, Rob Herring wrote:
> Use the minimum CPU h/w id of the CPUs associated with the cache for the
> cache 'id'. This will provide a stable id value for a given system. As
> we need to check all possible CPUs, we can't use the shared_cpu_map
> which is just online CPUs. There's not a cache to CPUs mapping in DT, so
> we have to walk all CPU nodes and then walk cache levels.

I believe another expected use of the cache ID exposed in sysfs is to 
program steering tags for cache stashing (typically in VFIO-based 
userspace drivers like DPDK so we can't realistically mediate it any 
other way). There were plans afoot last year to ensure that ACPI PPTT 
could provide the necessary ID values for arm64 systems which will 
typically be fairly arbitrary (but unique) due to reflecting underlying 
interconnect routing IDs. Assuming that there will eventually be some 
interest in cache stashing on DT-based systems too, we probably want to 
allow for an explicit ID property on DT cache nodes in a similar manner.

That said, I think it does make sense to have some kind of 
auto-generated fallback scheme *as well*, since I'm sure there will be 
plenty of systems which care about MPAM but don't support stashing, and 
therefore wouldn't have a meaningful set of IDs to populate their DT 
with. Conversely I think that might also matter for ACPI too - one point 
I remember from previous discussions is that PPTT may use a compact 
representation where a single entry represents all equivalent caches at 
that level, so I'm not sure we can necessarily rely on IDs out of that 
path being unique either.
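
A rough sketch of how the two could combine ("cache-id" here is a
hypothetical property name, not a defined binding, and cache_of_set_id() is
the fallback helper this patch adds):

static void cache_of_set_id_with_override(struct cacheinfo *this_leaf,
					  struct device_node *np)
{
	u32 id;

	/* Prefer an explicit, firmware-provided ID when the node has one */
	if (!of_property_read_u32(np, "cache-id", &id)) {
		this_leaf->id = id;
		this_leaf->attributes |= CACHE_ID;
		return;
	}

	/* Otherwise fall back to the generated minimum-CPU-hwid value */
	cache_of_set_id(this_leaf, np);
}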

Robin.

Rob Herring Dec. 17, 2021, 6:14 p.m. UTC | #2
On Fri, Dec 17, 2021 at 10:57 AM Robin Murphy <robin.murphy@arm.com> wrote:
>
> Hi Rob,
>
> On 2021-12-16 23:31, Rob Herring wrote:
> > Use the minimum CPU h/w id of the CPUs associated with the cache for the
> > cache 'id'. This will provide a stable id value for a given system. As
> > we need to check all possible CPUs, we can't use the shared_cpu_map
> > which is just online CPUs. There's not a cache to CPUs mapping in DT, so
> > we have to walk all CPU nodes and then walk cache levels.
>
> I believe another expected use of the cache ID exposed in sysfs is to
> program steering tags for cache stashing (typically in VFIO-based
> userspace drivers like DPDK so we can't realistically mediate it any
> other way). There were plans afoot last year to ensure that ACPI PPTT
> could provide the necessary ID values for arm64 systems which will
> typically be fairly arbitrary (but unique) due to reflecting underlying
> interconnect routing IDs. Assuming that there will eventually be some
> interest in cache stashing on DT-based systems too, we probably want to
> allow for an explicit ID property on DT cache nodes in a similar manner.

If you have a suggestion for ID values that correspond to the h/w,
then we can add them. I'd like a bit more than just trusting that ID
is something real.

While the ACPI folks may be willing to take an arbitrary index, it's
something we (mostly) avoid for DT.

> That said, I think it does make sense to have some kind of
> auto-generated fallback scheme *as well*, since I'm sure there will be
> plenty of systems which care about MPAM but don't support stashing, and
> therefore wouldn't have a meaningful set of IDs to populate their DT
> with. Conversely I think that might also matter for ACPI too - one point
> I remember from previous discussions is that PPTT may use a compact
> representation where a single entry represents all equivalent caches at
> that level, so I'm not sure we can necessarily rely on IDs out of that
> path being unique either.

AIUI, cache ids break the compact representation.

Rob
Sudeep Holla Dec. 17, 2021, 7:03 p.m. UTC | #3
On Fri, Dec 17, 2021 at 12:14:22PM -0600, Rob Herring wrote:
> On Fri, Dec 17, 2021 at 10:57 AM Robin Murphy <robin.murphy@arm.com> wrote:
> >
> > Hi Rob,
> >
> > On 2021-12-16 23:31, Rob Herring wrote:
> > > Use the minimum CPU h/w id of the CPUs associated with the cache for the
> > > cache 'id'. This will provide a stable id value for a given system. As

I am trying to follow the code. IIUC, the level one (I$ and D$) caches are
skipped in this logic and the private unified cache, if any, will get the cpu
hwid as the cache id, which is all fine. But what happens if there are 2
levels of unified private cache? I am assuming we only care about shared
caches for MPAM and ignore private caches, which sounds OK, but I just wanted
to confirm.

> > > we need to check all possible CPUs, we can't use the shared_cpu_map
> > > which is just online CPUs. There's not a cache to CPUs mapping in DT, so
> > > we have to walk all CPU nodes and then walk cache levels.

I would have preferred to add the cache IDs in DT similar to ACPI but I see
you have certain concerns with that which are valid as well.

> >
> > I believe another expected use of the cache ID exposed in sysfs is to
> > program steering tags for cache stashing (typically in VFIO-based
> > userspace drivers like DPDK so we can't realistically mediate it any
> > other way). There were plans afoot last year to ensure that ACPI PPTT
> > could provide the necessary ID values for arm64 systems which will
> > typically be fairly arbitrary (but unique) due to reflecting underlying
> > interconnect routing IDs. Assuming that there will eventually be some
> > interest in cache stashing on DT-based systems too, we probably want to
> > allow for an explicit ID property on DT cache nodes in a similar manner.
> 
> If you have a suggestion for ID values that correspond to the h/w,
> then we can add them. I'd like a bit more than just trusting that ID
> is something real.
>

I agree, probably the architecture must do a better job at defining these. But
generated IDs IMO might cause issues, especially if we have to change the
logic without breaking backward compatibility.

> While the ACPI folks may be willing to take an arbitrary index, it's
> something we (mostly) avoid for DT.
>

Not sure if we can call that *arbitrary* 😄; in that case we could say the
same about several other places in the firmware.

> > That said, I think it does make sense to have some kind of
> > auto-generated fallback scheme *as well*, since I'm sure there will be
> > plenty of systems which care about MPAM but don't support stashing, and
> > therefore wouldn't have a meaningful set of IDs to populate their DT
> > with. Conversely I think that might also matter for ACPI too - one point
> > I remember from previous discussions is that PPTT may use a compact
> > representation where a single entry represents all equivalent caches at
> > that level, so I'm not sure we can necessarily rely on IDs out of that
> > path being unique either.
> 
> AIUI, cache ids break the compact representation.
>

IIRC, a note was added to avoid compaction if an implementation requires
any cache instance to be referenced uniquely.
Sudeep Holla Dec. 17, 2021, 7:08 p.m. UTC | #4
On Fri, Dec 17, 2021 at 07:03:45PM +0000, Sudeep Holla wrote:
> On Fri, Dec 17, 2021 at 12:14:22PM -0600, Rob Herring wrote:
> > On Fri, Dec 17, 2021 at 10:57 AM Robin Murphy <robin.murphy@arm.com> wrote:
> > >
> > > Hi Rob,
> > >
> > > On 2021-12-16 23:31, Rob Herring wrote:
> > > > Use the minimum CPU h/w id of the CPUs associated with the cache for the
> > > > cache 'id'. This will provide a stable id value for a given system. As
> 
> I am trying to follow the code. IIUC, the level one (I$ and D$) caches are
> skipped in this logic and the private unified cache, if any, will get the cpu
> hwid as the cache id, which is all fine. But what happens if there are 2
> levels of unified private cache? I am assuming we only care about shared
> caches for MPAM and ignore private caches, which sounds OK, but I just wanted
> to confirm.
> 
> > > > we need to check all possible CPUs, we can't use the shared_cpu_map
> > > > which is just online CPUs. There's not a cache to CPUs mapping in DT, so
> > > > we have to walk all CPU nodes and then walk cache levels.
> 
> I would have preferred to add the cache IDs in DT similar to ACPI but I see
> you have certain concerns with that which are valid as well.

One thing I forgot to add: for some weird reason, some platforms support both
DT and ACPI, and this will force the ID generated here to be used in ACPI as
well to ensure the same userspace scripts can be used to manage both. That
doesn't sound so great to me.
Robin Murphy Dec. 17, 2021, 7:08 p.m. UTC | #5
On 2021-12-17 18:14, Rob Herring wrote:
> On Fri, Dec 17, 2021 at 10:57 AM Robin Murphy <robin.murphy@arm.com> wrote:
>>
>> Hi Rob,
>>
>> On 2021-12-16 23:31, Rob Herring wrote:
>>> Use the minimum CPU h/w id of the CPUs associated with the cache for the
>>> cache 'id'. This will provide a stable id value for a given system. As
>>> we need to check all possible CPUs, we can't use the shared_cpu_map
>>> which is just online CPUs. There's not a cache to CPUs mapping in DT, so
>>> we have to walk all CPU nodes and then walk cache levels.
>>
>> I believe another expected use of the cache ID exposed in sysfs is to
>> program steering tags for cache stashing (typically in VFIO-based
>> userspace drivers like DPDK so we can't realistically mediate it any
>> other way). There were plans afoot last year to ensure that ACPI PPTT
>> could provide the necessary ID values for arm64 systems which will
>> typically be fairly arbitrary (but unique) due to reflecting underlying
>> interconnect routing IDs. Assuming that there will eventually be some
>> interest in cache stashing on DT-based systems too, we probably want to
>> allow for an explicit ID property on DT cache nodes in a similar manner.
> 
> If you have a suggestion for ID values that correspond to the h/w,
> then we can add them. I'd like a bit more than just trusting that ID
> is something real.
> 
> While the ACPI folks may be willing to take an arbitrary index, it's
> something we (mostly) avoid for DT.

Not really. On the CHI side there are two fields - StashNID, which could 
be any node ID value depending on the interconnect layout, plus 
(optionally) StashLPID to address a specific cache within that node if 
it's something like a CPU cluster. However, how a PCIe TLP steering tag 
translates to those fields in the resulting CHI flit is largely up to 
the root complex.

I think it's going to be more like a "reg" property than a nice 
validatable index.
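
To make that concrete, a purely illustrative sketch of the two fields (the
widths are assumptions; the real encodings depend on the interconnect
configuration):

/* Illustrative only: the CHI stash-target fields described above */
struct chi_stash_target {
	u16	stash_nid;	/* StashNID: interconnect node ID */
	u8	stash_lpid;	/* StashLPID: a specific cache within the
				 * node, e.g. one CPU of a cluster */
	bool	lpid_valid;	/* StashLPID is optional */
};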

>> That said, I think it does make sense to have some kind of
>> auto-generated fallback scheme *as well*, since I'm sure there will be
>> plenty of systems which care about MPAM but don't support stashing, and
>> therefore wouldn't have a meaningful set of IDs to populate their DT
>> with. Conversely I think that might also matter for ACPI too - one point
>> I remember from previous discussions is that PPTT may use a compact
>> representation where a single entry represents all equivalent caches at
>> that level, so I'm not sure we can necessarily rely on IDs out of that
>> path being unique either.
> 
> AIUI, cache ids break the compact representation.

Right, firmware authors can't use it if they do want to specify IDs, but 
that also means that if we find we *are* consuming a compact PPTT, then 
chances are we're not getting meaningful IDs out of it for MPAM to rely on.

Robin.
Rob Herring Dec. 17, 2021, 7:26 p.m. UTC | #6
On Fri, Dec 17, 2021 at 1:03 PM Sudeep Holla <sudeep.holla@arm.com> wrote:
>
> On Fri, Dec 17, 2021 at 12:14:22PM -0600, Rob Herring wrote:
> > On Fri, Dec 17, 2021 at 10:57 AM Robin Murphy <robin.murphy@arm.com> wrote:
> > >
> > > Hi Rob,
> > >
> > > On 2021-12-16 23:31, Rob Herring wrote:
> > > > Use the minimum CPU h/w id of the CPUs associated with the cache for the
> > > > cache 'id'. This will provide a stable id value for a given system. As
>
> I am trying to follow the code. IIUC, the level one (I$ and D$) caches are
> skipped in this logic and the private unified cache, if any, will get the cpu
> hwid as the cache id, which is all fine. But what happens if there are 2
> levels of unified private cache? I am assuming we only care about shared
> caches for MPAM and ignore private caches, which sounds OK, but I just wanted
> to confirm.

The cacheinfo 'id' is only unique to the level and type. It's the
type, level, and ID that gives a unique identifier:

 * struct cacheinfo - represent a cache leaf node
 * @id: This cache's id. It is unique among caches with the same (type, level).

Maybe ACPI's ID expects/allows globally unique cache IDs?
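
In other words, a minimal sketch of the matching rule (not code from the
kernel itself):

/* Two cacheinfo leaves describe the same cache only if the whole
 * (type, level, id) triple matches; 'id' alone is not globally unique. */
static bool same_cache_leaf(const struct cacheinfo *a,
			    const struct cacheinfo *b)
{
	return a->type == b->type &&
	       a->level == b->level &&
	       a->id == b->id;
}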

> > > > we need to check all possible CPUs, we can't use the shared_cpu_map
> > > > which is just online CPUs. There's not a cache to CPUs mapping in DT, so
> > > > we have to walk all CPU nodes and then walk cache levels.
>
> I would have preferred to add the cache IDs in DT similar to ACPI but I see
> you have certain concerns with that which are valid as well.
>
> > >
> > > I believe another expected use of the cache ID exposed in sysfs is to
> > > program steering tags for cache stashing (typically in VFIO-based
> > > userspace drivers like DPDK so we can't realistically mediate it any
> > > other way). There were plans afoot last year to ensure that ACPI PPTT
> > > could provide the necessary ID values for arm64 systems which will
> > > typically be fairly arbitrary (but unique) due to reflecting underlying
> > > interconnect routing IDs. Assuming that there will eventually be some
> > > interest in cache stashing on DT-based systems too, we probably want to
> > > allow for an explicit ID property on DT cache nodes in a similar manner.
> >
> > If you have a suggestion for ID values that correspond to the h/w,
> > then we can add them. I'd like a bit more than just trusting that ID
> > is something real.
> >
>
> I agree, probably the architecture must do a better job at defining these. But
> generated IDs IMO might cause issues, especially if we have to change the
> logic without breaking backward compatibility.
>
> > While the ACPI folks may be willing to take an arbitrary index, it's
> > something we (mostly) avoid for DT.
> >
>
> Not sure if we can call that *arbitrary* 😄; in that case we could say the
> same about several other places in the firmware.

By arbitrary, I mean made up by the binding/dts author or
documentation convention (UART0, UART1, etc.). Certainly things like
clock IDs are often made up number spaces, but I don't see how we
avoid that. DT had 'cell-index' which I still see attempted. But that
property traces back to h/w having a single power ctrl register and
cell-index was the bit index for the register. If only h/w was still
that simple.

Rob
Rob Herring Dec. 17, 2021, 7:35 p.m. UTC | #7
On Fri, Dec 17, 2021 at 1:08 PM Robin Murphy <robin.murphy@arm.com> wrote:
>
> On 2021-12-17 18:14, Rob Herring wrote:
> > On Fri, Dec 17, 2021 at 10:57 AM Robin Murphy <robin.murphy@arm.com> wrote:
> >>
> >> Hi Rob,
> >>
> >> On 2021-12-16 23:31, Rob Herring wrote:
> >>> Use the minimum CPU h/w id of the CPUs associated with the cache for the
> >>> cache 'id'. This will provide a stable id value for a given system. As
> >>> we need to check all possible CPUs, we can't use the shared_cpu_map
> >>> which is just online CPUs. There's not a cache to CPUs mapping in DT, so
> >>> we have to walk all CPU nodes and then walk cache levels.
> >>
> >> I believe another expected use of the cache ID exposed in sysfs is to
> >> program steering tags for cache stashing (typically in VFIO-based
> >> userspace drivers like DPDK so we can't realistically mediate it any
> >> other way). There were plans afoot last year to ensure that ACPI PPTT
> >> could provide the necessary ID values for arm64 systems which will
> >> typically be fairly arbitrary (but unique) due to reflecting underlying
> >> interconnect routing IDs. Assuming that there will eventually be some
> >> interest in cache stashing on DT-based systems too, we probably want to
> >> allow for an explicit ID property on DT cache nodes in a similar manner.
> >
> > If you have a suggestion for ID values that correspond to the h/w,
> > then we can add them. I'd like a bit more than just trusting that ID
> > is something real.
> >
> > While the ACPI folks may be willing to take an arbitrary index, it's
> > something we (mostly) avoid for DT.
>
> Not really. On the CHI side there are two fields - StashNID, which could
> be any node ID value depending on the interconnect layout, plus
> (optionally) StashLPID to address a specific cache within that node if
> it's something like a CPU cluster. However, how a PCIe TLP steering tag
> translates to those fields in the resulting CHI flit is largely up to
> the root complex.

Knowing next to nothing about CHI, this means pretty much nothing to me. :(

I would guess there is a bit more to supporting CHI in DT systems than
just a cache ID.

> I think it's going to be more like a "reg" property than a nice
> validatable index.
>
> >> That said, I think it does make sense to have some kind of
> >> auto-generated fallback scheme *as well*, since I'm sure there will be
> >> plenty of systems which care about MPAM but don't support stashing, and
> >> therefore wouldn't have a meaningful set of IDs to populate their DT
> >> with. Conversely I think that might also matter for ACPI too - one point
> >> I remember from previous discussions is that PPTT may use a compact
> >> representation where a single entry represents all equivalent caches at
> >> that level, so I'm not sure we can necessarily rely on IDs out of that
> >> path being unique either.
> >
> > AIUI, cache ids break the compact representation.
>
> Right, firmware authors can't use it if they do want to specify IDs, but
> that also means that if we find we *are* consuming a compact PPTT, then
> chances are we're not getting meaningful IDs out of it for MPAM to rely on.

Sounds like broken firmware is in our future. ;) Or ACPI can default
to the same id scheme.

Rob
Jeremy Linton Dec. 17, 2021, 8:22 p.m. UTC | #8
Hi,

On 12/17/21 13:35, Rob Herring wrote:
> On Fri, Dec 17, 2021 at 1:08 PM Robin Murphy <robin.murphy@arm.com> wrote:
>>
>> On 2021-12-17 18:14, Rob Herring wrote:
>>> On Fri, Dec 17, 2021 at 10:57 AM Robin Murphy <robin.murphy@arm.com> wrote:
>>>>
>>>> Hi Rob,
>>>>
>>>> On 2021-12-16 23:31, Rob Herring wrote:
>>>>> Use the minimum CPU h/w id of the CPUs associated with the cache for the
>>>>> cache 'id'. This will provide a stable id value for a given system. As
>>>>> we need to check all possible CPUs, we can't use the shared_cpu_map
>>>>> which is just online CPUs. There's not a cache to CPUs mapping in DT, so
>>>>> we have to walk all CPU nodes and then walk cache levels.
>>>>
>>>> I believe another expected use of the cache ID exposed in sysfs is to
>>>> program steering tags for cache stashing (typically in VFIO-based
>>>> userspace drivers like DPDK so we can't realistically mediate it any
>>>> other way). There were plans afoot last year to ensure that ACPI PPTT
>>>> could provide the necessary ID values for arm64 systems which will
>>>> typically be fairly arbitrary (but unique) due to reflecting underlying
>>>> interconnect routing IDs. Assuming that there will eventually be some
>>>> interest in cache stashing on DT-based systems too, we probably want to
>>>> allow for an explicit ID property on DT cache nodes in a similar manner.
>>>
>>> If you have a suggestion for ID values that correspond to the h/w,
>>> then we can add them. I'd like a bit more than just trusting that ID
>>> is something real.
>>>
>>> While the ACPI folks may be willing to take an arbitrary index, it's
>>> something we (mostly) avoid for DT.
>>
>> Not really. On the CHI side there are two fields - StashNID, which could
>> be any node ID value depending on the interconnect layout, plus
>> (optionally) StashLPID to address a specific cache within that node if
>> it's something like a CPU cluster. However, how a PCIe TLP steering tag
>> translates to those fields in the resulting CHI flit is largely up to
>> the root complex.
> 
> Knowing next to nothing about CHI, this means pretty much nothing to me. :(
> 
> I would guess there is a bit more to supporting CHI in DT systems than
> just a cache ID.
> 
>> I think it's going to be more like a "reg" property than a nice
>> validatable index.
>>
>>>> That said, I think it does make sense to have some kind of
>>>> auto-generated fallback scheme *as well*, since I'm sure there will be
>>>> plenty of systems which care about MPAM but don't support stashing, and
>>>> therefore wouldn't have a meaningful set of IDs to populate their DT
>>>> with. Conversely I think that might also matter for ACPI too - one point
>>>> I remember from previous discussions is that PPTT may use a compact
>>>> representation where a single entry represents all equivalent caches at
>>>> that level, so I'm not sure we can necessarily rely on IDs out of that
>>>> path being unique either.
>>>
>>> AIUI, cache ids break the compact representation.
>>
>> Right, firmware authors can't use it if they do want to specify IDs, but
>> that also means that if we find we *are* consuming a compact PPTT, then
>> chances are we're not getting meaningful IDs out of it for MPAM to rely on.
> 
> Sounds like broken firmware is in our future. ;) Or ACPI can default
> to the same id scheme.
> 

Yah, that is a problem. The IDs provided by the ACPI cache ID field are 
as officially meaningless as the ones we can generate from the existing 
fw_token mechanism. Given that, they don't really add anything beyond 
what we can achieve simply by encoding the level somewhere in the 
fw_token currently in use, if we want something that is globally unique 
rather than just unique for a given cache level+I/D. Their one advantage, 
though, is that they can be more human readable, at the cost of 2-3X the 
size of the table, with the additional problem of having to worry about 
them being populated in all the cache structures in the table. It's 
almost easier to revisit some of the earlier discussion and generate a 
unique id, and then renumber them at the end.
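
As a hedged sketch of that "encode the level in the fw_token" idea (the
56/8-bit split is an arbitrary assumption, and fw_token is treated as an
integer for illustration):

/* Hypothetical: fold the cache level into a fw_token-derived value to get
 * an id that is globally unique, not just unique per (type, level). */
static u64 fw_token_global_id(u64 token, unsigned int level)
{
	return ((u64)level << 56) | (token & GENMASK_ULL(55, 0));
}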


If you want to encode some kind of routing ID in them, then that will 
have to be standardized, and I would guess it might be easier to add the 
routing IDs to the structure than retroactively add meaning to the ID 
field if anyone is actually using it. Or just create yet another lookup 
table to translate the id to something meaningful.
Jeremy Linton Dec. 17, 2021, 8:28 p.m. UTC | #9
Hi,

On 12/17/21 13:26, Rob Herring wrote:
> On Fri, Dec 17, 2021 at 1:03 PM Sudeep Holla <sudeep.holla@arm.com> wrote:
>>
>> On Fri, Dec 17, 2021 at 12:14:22PM -0600, Rob Herring wrote:
>>> On Fri, Dec 17, 2021 at 10:57 AM Robin Murphy <robin.murphy@arm.com> wrote:
>>>>
>>>> Hi Rob,
>>>>
>>>> On 2021-12-16 23:31, Rob Herring wrote:
>>>>> Use the minimum CPU h/w id of the CPUs associated with the cache for the
>>>>> cache 'id'. This will provide a stable id value for a given system. As
>>
>> I am trying to follow the code. IIUC, the level one (I$ and D$) caches are
>> skipped in this logic and the private unified cache, if any, will get the cpu
>> hwid as the cache id, which is all fine. But what happens if there are 2
>> levels of unified private cache? I am assuming we only care about shared
>> caches for MPAM and ignore private caches, which sounds OK, but I just wanted
>> to confirm.
> 
> The cacheinfo 'id' is only unique to the level and type. It's the
> type, level, and ID that gives a unique identifier:
> 
>   * struct cacheinfo - represent a cache leaf node
>   * @id: This cache's id. It is unique among caches with the same (type, level).
> 
> Maybe ACPI's ID expects/allows globally unique cache IDs?

Yes, but the spec is IMHO written in a way that they may only be unique 
for a subset of the caches! The rest might not have an ID at all, 
particularly for !arm machines.
Robin Murphy Dec. 17, 2021, 9:13 p.m. UTC | #10
On 2021-12-17 19:35, Rob Herring wrote:
> On Fri, Dec 17, 2021 at 1:08 PM Robin Murphy <robin.murphy@arm.com> wrote:
>>
>> On 2021-12-17 18:14, Rob Herring wrote:
>>> On Fri, Dec 17, 2021 at 10:57 AM Robin Murphy <robin.murphy@arm.com> wrote:
>>>>
>>>> Hi Rob,
>>>>
>>>> On 2021-12-16 23:31, Rob Herring wrote:
>>>>> Use the minimum CPU h/w id of the CPUs associated with the cache for the
>>>>> cache 'id'. This will provide a stable id value for a given system. As
>>>>> we need to check all possible CPUs, we can't use the shared_cpu_map
>>>>> which is just online CPUs. There's not a cache to CPUs mapping in DT, so
>>>>> we have to walk all CPU nodes and then walk cache levels.
>>>>
>>>> I believe another expected use of the cache ID exposed in sysfs is to
>>>> program steering tags for cache stashing (typically in VFIO-based
>>>> userspace drivers like DPDK so we can't realistically mediate it any
>>>> other way). There were plans afoot last year to ensure that ACPI PPTT
>>>> could provide the necessary ID values for arm64 systems which will
>>>> typically be fairly arbitrary (but unique) due to reflecting underlying
>>>> interconnect routing IDs. Assuming that there will eventually be some
>>>> interest in cache stashing on DT-based systems too, we probably want to
>>>> allow for an explicit ID property on DT cache nodes in a similar manner.
>>>
>>> If you have a suggestion for ID values that correspond to the h/w,
>>> then we can add them. I'd like a bit more than just trusting that ID
>>> is something real.
>>>
>>> While the ACPI folks may be willing to take an arbitrary index, it's
>>> something we (mostly) avoid for DT.
>>
>> Not really. On the CHI side there are two fields - StashNID, which could
>> be any node ID value depending on the interconnect layout, plus
>> (optionally) StashLPID to address a specific cache within that node if
>> it's something like a CPU cluster. However, how a PCIe TLP steering tag
>> translates to those fields in the resulting CHI flit is largely up to
>> the root complex.
> 
> Knowing next to nothing about CHI, this means pretty much nothing to me. :(
> 
> I would guess there is a bit more to supporting CHI in DT systems than
> just a cache ID.

I use CHI as an example because it's what I'm familiar with, and my 
involvement in cache stashing discussions has been in the context of Arm 
CMN interconnects which are CHI-based. Other folks who build their own 
interconnects may have different details of how exactly they support 
cache stashing, but the overall point is that the required IDs are 
typically going to boil down to some amount (likely around 8-16 bits or 
so) of address-like information in a system-specific format which can't 
be reasoned about beyond that.

>> I think it's going to be more like a "reg" property than a nice
>> validatable index.
>>
>>>> That said, I think it does make sense to have some kind of
>>>> auto-generated fallback scheme *as well*, since I'm sure there will be
>>>> plenty of systems which care about MPAM but don't support stashing, and
>>>> therefore wouldn't have a meaningful set of IDs to populate their DT
>>>> with. Conversely I think that might also matter for ACPI too - one point
>>>> I remember from previous discussions is that PPTT may use a compact
>>>> representation where a single entry represents all equivalent caches at
>>>> that level, so I'm not sure we can necessarily rely on IDs out of that
>>>> path being unique either.
>>>
>>> AIUI, cache ids break the compact representation.
>>
>> Right, firmware authors can't use it if they do want to specify IDs, but
>> that also means that if we find we *are* consuming a compact PPTT, then
>> chances are we're not getting meaningful IDs out of it for MPAM to rely on.
> 
> Sounds like broken firmware is in our future. ;) Or ACPI can default
> to the same id scheme.

I don't really see this being an opportunity for firmware to be any more 
broken than usual. Systems that support cache stashing will need to 
provide the correct hardware IDs for targetable caches via their 
firmware tables, which it seems that MPAM's notion of cache IDs will 
have to coexist with. Systems that do not support cache stashing may not 
even have a meaningful notion of hardware IDs for caches, and thus 
cannot be expected to provide any in firmware. Linux will need to cope 
with both situations.

Thanks,
Robin.

Patch

diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
index 66d10bdb863b..21accddf8f5f 100644
--- a/drivers/base/cacheinfo.c
+++ b/drivers/base/cacheinfo.c
@@ -136,6 +136,36 @@  static bool cache_node_is_unified(struct cacheinfo *this_leaf,
 	return of_property_read_bool(np, "cache-unified");
 }
 
+static void cache_of_set_id(struct cacheinfo *this_leaf, struct device_node *np)
+{
+	int cpu;
+	unsigned long min_id = ~0UL;
+
+	for_each_possible_cpu(cpu) {
+		u64 id;
+		struct device_node *cache_node, *cpu_node;
+
+		cache_node = cpu_node = of_get_cpu_node(cpu, NULL);
+		id = of_get_cpu_hwid(cpu_node, 0);
+		while ((cache_node = of_find_next_cache_node(cache_node))) {
+			if (cache_node == np) {
+				if (id < min_id) {
+					min_id = id;
+					of_node_put(cache_node);
+					break;
+				}
+			}
+			of_node_put(cache_node);
+		}
+		of_node_put(cpu_node);
+	}
+
+	if (min_id != ~0UL) {
+		this_leaf->id = min_id;
+		this_leaf->attributes |= CACHE_ID;
+	}
+}
+
 static void cache_of_set_props(struct cacheinfo *this_leaf,
 			       struct device_node *np)
 {
@@ -151,6 +181,7 @@  static void cache_of_set_props(struct cacheinfo *this_leaf,
 	cache_get_line_size(this_leaf, np);
 	cache_nr_sets(this_leaf, np);
 	cache_associativity(this_leaf);
+	cache_of_set_id(this_leaf, np);
 }
 
 static int cache_setup_of_node(unsigned int cpu)