Message ID: 20200909150818.313699-1-nitesh@redhat.com
Series: isolation: limit msix vectors based on housekeeping CPUs
On Wed, Sep 09, 2020 at 11:08:18AM -0400, Nitesh Narayan Lal wrote: > This patch limits the pci_alloc_irq_vectors max vectors that is passed on > by the caller based on the available housekeeping CPUs by only using the > minimum of the two. > > A minimum of the max_vecs passed and available housekeeping CPUs is > derived to ensure that we don't create excess vectors which can be > problematic specifically in an RT environment. This is because for an RT > environment unwanted IRQs are moved to the housekeeping CPUs from > isolated CPUs to keep the latency overhead to a minimum. If the number of > housekeeping CPUs are significantly lower than that of the isolated CPUs > we can run into failures while moving these IRQs to housekeeping due to > per CPU vector limit. > > Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com> > --- > include/linux/pci.h | 16 ++++++++++++++++ > 1 file changed, 16 insertions(+) > > diff --git a/include/linux/pci.h b/include/linux/pci.h > index 835530605c0d..750ba927d963 100644 > --- a/include/linux/pci.h > +++ b/include/linux/pci.h > @@ -38,6 +38,7 @@ > #include <linux/interrupt.h> > #include <linux/io.h> > #include <linux/resource_ext.h> > +#include <linux/sched/isolation.h> > #include <uapi/linux/pci.h> > > #include <linux/pci_ids.h> > @@ -1797,6 +1798,21 @@ static inline int > pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs, > unsigned int max_vecs, unsigned int flags) > { > + unsigned int num_housekeeping = num_housekeeping_cpus(); > + unsigned int num_online = num_online_cpus(); > + > + /* > + * Try to be conservative and at max only ask for the same number of > + * vectors as there are housekeeping CPUs. However, skip any > + * modification to the of max vectors in two conditions: > + * 1. If the min_vecs requested are higher than that of the > + * housekeeping CPUs as we don't want to prevent the initialization > + * of a device. > + * 2. 
If there are no isolated CPUs as in this case the driver should > + * already have taken online CPUs into consideration. > + */ > + if (min_vecs < num_housekeeping && num_housekeeping != num_online) > + max_vecs = min_t(int, max_vecs, num_housekeeping); > return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs, flags, > NULL); > } If min_vecs > num_housekeeping, for example: /* PCI MSI/MSIx support */ #define XGBE_MSI_BASE_COUNT 4 #define XGBE_MSI_MIN_COUNT (XGBE_MSI_BASE_COUNT + 1) Then the protection fails. How about reducing max_vecs down to min_vecs, if min_vecs > num_housekeeping ?
On 9/10/20 3:22 PM, Marcelo Tosatti wrote: > On Wed, Sep 09, 2020 at 11:08:18AM -0400, Nitesh Narayan Lal wrote: >> This patch limits the pci_alloc_irq_vectors max vectors that is passed on >> by the caller based on the available housekeeping CPUs by only using the >> minimum of the two. >> >> A minimum of the max_vecs passed and available housekeeping CPUs is >> derived to ensure that we don't create excess vectors which can be >> problematic specifically in an RT environment. This is because for an RT >> environment unwanted IRQs are moved to the housekeeping CPUs from >> isolated CPUs to keep the latency overhead to a minimum. If the number of >> housekeeping CPUs are significantly lower than that of the isolated CPUs >> we can run into failures while moving these IRQs to housekeeping due to >> per CPU vector limit. >> >> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com> >> --- >> include/linux/pci.h | 16 ++++++++++++++++ >> 1 file changed, 16 insertions(+) >> >> diff --git a/include/linux/pci.h b/include/linux/pci.h >> index 835530605c0d..750ba927d963 100644 >> --- a/include/linux/pci.h >> +++ b/include/linux/pci.h >> @@ -38,6 +38,7 @@ >> #include <linux/interrupt.h> >> #include <linux/io.h> >> #include <linux/resource_ext.h> >> +#include <linux/sched/isolation.h> >> #include <uapi/linux/pci.h> >> >> #include <linux/pci_ids.h> >> @@ -1797,6 +1798,21 @@ static inline int >> pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs, >> unsigned int max_vecs, unsigned int flags) >> { >> + unsigned int num_housekeeping = num_housekeeping_cpus(); >> + unsigned int num_online = num_online_cpus(); >> + >> + /* >> + * Try to be conservative and at max only ask for the same number of >> + * vectors as there are housekeeping CPUs. However, skip any >> + * modification to the of max vectors in two conditions: >> + * 1. 
If the min_vecs requested are higher than that of the >> + * housekeeping CPUs as we don't want to prevent the initialization >> + * of a device. >> + * 2. If there are no isolated CPUs as in this case the driver should >> + * already have taken online CPUs into consideration. >> + */ >> + if (min_vecs < num_housekeeping && num_housekeeping != num_online) >> + max_vecs = min_t(int, max_vecs, num_housekeeping); >> return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs, flags, >> NULL); >> } > If min_vecs > num_housekeeping, for example: > > /* PCI MSI/MSIx support */ > #define XGBE_MSI_BASE_COUNT 4 > #define XGBE_MSI_MIN_COUNT (XGBE_MSI_BASE_COUNT + 1) > > Then the protection fails. Right, I was ignoring that case. > > How about reducing max_vecs down to min_vecs, if min_vecs > > num_housekeeping ? Yes, I think this makes sense. I will wait a bit to see if anyone else has any other comment and will post the next version then. >
Nitesh Narayan Lal wrote: > Introduce a new API num_housekeeping_cpus(), that can be used to retrieve > the number of housekeeping CPUs by reading an atomic variable > __num_housekeeping_cpus. This variable is set from housekeeping_setup(). > > This API is introduced for the purpose of drivers that were previously > relying only on num_online_cpus() to determine the number of MSIX vectors > to create. In an RT environment with large isolated but a fewer > housekeeping CPUs this was leading to a situation where an attempt to > move all of the vectors corresponding to isolated CPUs to housekeeping > CPUs was failing due to per CPU vector limit. > > If there are no isolated CPUs specified then the API returns the number > of all online CPUs. > > Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com> > --- > include/linux/sched/isolation.h | 7 +++++++ > kernel/sched/isolation.c | 23 +++++++++++++++++++++++ > 2 files changed, 30 insertions(+) I'm not a scheduler expert, but a couple comments follow. 
> > diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h > index cc9f393e2a70..94c25d956d8a 100644 > --- a/include/linux/sched/isolation.h > +++ b/include/linux/sched/isolation.h > @@ -25,6 +25,7 @@ extern bool housekeeping_enabled(enum hk_flags flags); > extern void housekeeping_affine(struct task_struct *t, enum hk_flags flags); > extern bool housekeeping_test_cpu(int cpu, enum hk_flags flags); > extern void __init housekeeping_init(void); > +extern unsigned int num_housekeeping_cpus(void); > > #else > > @@ -46,6 +47,12 @@ static inline bool housekeeping_enabled(enum hk_flags flags) > static inline void housekeeping_affine(struct task_struct *t, > enum hk_flags flags) { } > static inline void housekeeping_init(void) { } > + > +static unsigned int num_housekeeping_cpus(void) > +{ > + return num_online_cpus(); > +} > + > #endif /* CONFIG_CPU_ISOLATION */ > > static inline bool housekeeping_cpu(int cpu, enum hk_flags flags) > diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c > index 5a6ea03f9882..7024298390b7 100644 > --- a/kernel/sched/isolation.c > +++ b/kernel/sched/isolation.c > @@ -13,6 +13,7 @@ DEFINE_STATIC_KEY_FALSE(housekeeping_overridden); > EXPORT_SYMBOL_GPL(housekeeping_overridden); > static cpumask_var_t housekeeping_mask; > static unsigned int housekeeping_flags; > +static atomic_t __num_housekeeping_cpus __read_mostly; > > bool housekeeping_enabled(enum hk_flags flags) > { > @@ -20,6 +21,27 @@ bool housekeeping_enabled(enum hk_flags flags) > } > EXPORT_SYMBOL_GPL(housekeeping_enabled); > > +/* use correct kdoc style, and you get free documentation from your source (you're so close!) should be (note the first line and the function title line change to remove parens: /** * num_housekeeping_cpus - Read the number of housekeeping CPUs. 
* * This function returns the number of available housekeeping CPUs * based on __num_housekeeping_cpus which is of type atomic_t * and is initialized at the time of the housekeeping setup. */ > + * num_housekeeping_cpus() - Read the number of housekeeping CPUs. > + * > + * This function returns the number of available housekeeping CPUs > + * based on __num_housekeeping_cpus which is of type atomic_t > + * and is initialized at the time of the housekeeping setup. > + */ > +unsigned int num_housekeeping_cpus(void) > +{ > + unsigned int cpus; > + > + if (static_branch_unlikely(&housekeeping_overridden)) { > + cpus = atomic_read(&__num_housekeeping_cpus); > + /* We should always have at least one housekeeping CPU */ > + BUG_ON(!cpus); you need to crash the kernel because of this? maybe a WARN_ON? How did the global even get set to the bad value? It's going to blame the poor caller for this in the trace, but the caller likely had nothing to do with setting the value incorrectly! > + return cpus; > + } > + return num_online_cpus(); > +} > +EXPORT_SYMBOL_GPL(num_housekeeping_cpus);
On 9/17/20 2:18 PM, Jesse Brandeburg wrote: > Nitesh Narayan Lal wrote: > >> Introduce a new API num_housekeeping_cpus(), that can be used to retrieve >> the number of housekeeping CPUs by reading an atomic variable >> __num_housekeeping_cpus. This variable is set from housekeeping_setup(). >> >> This API is introduced for the purpose of drivers that were previously >> relying only on num_online_cpus() to determine the number of MSIX vectors >> to create. In an RT environment with large isolated but a fewer >> housekeeping CPUs this was leading to a situation where an attempt to >> move all of the vectors corresponding to isolated CPUs to housekeeping >> CPUs was failing due to per CPU vector limit. >> >> If there are no isolated CPUs specified then the API returns the number >> of all online CPUs. >> >> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com> >> --- >> include/linux/sched/isolation.h | 7 +++++++ >> kernel/sched/isolation.c | 23 +++++++++++++++++++++++ >> 2 files changed, 30 insertions(+) > I'm not a scheduler expert, but a couple comments follow. 
> >> diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h >> index cc9f393e2a70..94c25d956d8a 100644 >> --- a/include/linux/sched/isolation.h >> +++ b/include/linux/sched/isolation.h >> @@ -25,6 +25,7 @@ extern bool housekeeping_enabled(enum hk_flags flags); >> extern void housekeeping_affine(struct task_struct *t, enum hk_flags flags); >> extern bool housekeeping_test_cpu(int cpu, enum hk_flags flags); >> extern void __init housekeeping_init(void); >> +extern unsigned int num_housekeeping_cpus(void); >> >> #else >> >> @@ -46,6 +47,12 @@ static inline bool housekeeping_enabled(enum hk_flags flags) >> static inline void housekeeping_affine(struct task_struct *t, >> enum hk_flags flags) { } >> static inline void housekeeping_init(void) { } >> + >> +static unsigned int num_housekeeping_cpus(void) >> +{ >> + return num_online_cpus(); >> +} >> + >> #endif /* CONFIG_CPU_ISOLATION */ >> >> static inline bool housekeeping_cpu(int cpu, enum hk_flags flags) >> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c >> index 5a6ea03f9882..7024298390b7 100644 >> --- a/kernel/sched/isolation.c >> +++ b/kernel/sched/isolation.c >> @@ -13,6 +13,7 @@ DEFINE_STATIC_KEY_FALSE(housekeeping_overridden); >> EXPORT_SYMBOL_GPL(housekeeping_overridden); >> static cpumask_var_t housekeeping_mask; >> static unsigned int housekeeping_flags; >> +static atomic_t __num_housekeeping_cpus __read_mostly; >> >> bool housekeeping_enabled(enum hk_flags flags) >> { >> @@ -20,6 +21,27 @@ bool housekeeping_enabled(enum hk_flags flags) >> } >> EXPORT_SYMBOL_GPL(housekeeping_enabled); >> >> +/* > use correct kdoc style, and you get free documentation from your source > (you're so close!) > > should be (note the first line and the function title line change to > remove parens: > /** > * num_housekeeping_cpus - Read the number of housekeeping CPUs. 
> * > * This function returns the number of available housekeeping CPUs > * based on __num_housekeeping_cpus which is of type atomic_t > * and is initialized at the time of the housekeeping setup. > */ My bad, I missed that. Thanks for pointing it out. > >> + * num_housekeeping_cpus() - Read the number of housekeeping CPUs. >> + * >> + * This function returns the number of available housekeeping CPUs >> + * based on __num_housekeeping_cpus which is of type atomic_t >> + * and is initialized at the time of the housekeeping setup. >> + */ >> +unsigned int num_housekeeping_cpus(void) >> +{ >> + unsigned int cpus; >> + >> + if (static_branch_unlikely(&housekeeping_overridden)) { >> + cpus = atomic_read(&__num_housekeeping_cpus); >> + /* We should always have at least one housekeeping CPU */ >> + BUG_ON(!cpus); > you need to crash the kernel because of this? maybe a WARN_ON? How did > the global even get set to the bad value? It's going to blame the poor > caller for this in the trace, but the caller likely had nothing to do > with setting the value incorrectly! Yes, ideally this should not be triggered, but if somehow it does then we have a bug and that needs to be fixed. That's probably the only reason why I chose BUG_ON. But, I am not entirely against the usage of WARN_ON either, because we get a stack trace anyways. I will see if anyone else has any other concerns on this patch and then I can post the next version. > >> + return cpus; >> + } >> + return num_online_cpus(); >> +} >> +EXPORT_SYMBOL_GPL(num_housekeeping_cpus); -- Thanks Nitesh
On Wed, Sep 09, 2020 at 11:08:16AM -0400, Nitesh Narayan Lal wrote: > +/* > + * num_housekeeping_cpus() - Read the number of housekeeping CPUs. > + * > + * This function returns the number of available housekeeping CPUs > + * based on __num_housekeeping_cpus which is of type atomic_t > + * and is initialized at the time of the housekeeping setup. > + */ > +unsigned int num_housekeeping_cpus(void) > +{ > + unsigned int cpus; > + > + if (static_branch_unlikely(&housekeeping_overridden)) { > + cpus = atomic_read(&__num_housekeeping_cpus); > + /* We should always have at least one housekeeping CPU */ > + BUG_ON(!cpus); > + return cpus; > + } > + return num_online_cpus(); > +} > +EXPORT_SYMBOL_GPL(num_housekeeping_cpus); > + > int housekeeping_any_cpu(enum hk_flags flags) > { > int cpu; > @@ -131,6 +153,7 @@ static int __init housekeeping_setup(char *str, enum hk_flags flags) > > housekeeping_flags |= flags; > > + atomic_set(&__num_housekeeping_cpus, cpumask_weight(housekeeping_mask)); So the problem here is that it takes the whole cpumask weight but you're only interested in the housekeepers who take the managed irq duties I guess (HK_FLAG_MANAGED_IRQ ?). > free_bootmem_cpumask_var(non_housekeeping_mask); > > return 1; > -- > 2.27.0 >
On 9/21/20 7:40 PM, Frederic Weisbecker wrote: > On Wed, Sep 09, 2020 at 11:08:16AM -0400, Nitesh Narayan Lal wrote: >> +/* >> + * num_housekeeping_cpus() - Read the number of housekeeping CPUs. >> + * >> + * This function returns the number of available housekeeping CPUs >> + * based on __num_housekeeping_cpus which is of type atomic_t >> + * and is initialized at the time of the housekeeping setup. >> + */ >> +unsigned int num_housekeeping_cpus(void) >> +{ >> + unsigned int cpus; >> + >> + if (static_branch_unlikely(&housekeeping_overridden)) { >> + cpus = atomic_read(&__num_housekeeping_cpus); >> + /* We should always have at least one housekeeping CPU */ >> + BUG_ON(!cpus); >> + return cpus; >> + } >> + return num_online_cpus(); >> +} >> +EXPORT_SYMBOL_GPL(num_housekeeping_cpus); >> + >> int housekeeping_any_cpu(enum hk_flags flags) >> { >> int cpu; >> @@ -131,6 +153,7 @@ static int __init housekeeping_setup(char *str, enum hk_flags flags) >> >> housekeeping_flags |= flags; >> >> + atomic_set(&__num_housekeeping_cpus, cpumask_weight(housekeeping_mask)); > So the problem here is that it takes the whole cpumask weight but you're only > interested in the housekeepers who take the managed irq duties I guess > (HK_FLAG_MANAGED_IRQ ?). IMHO we should also consider the cases where we only have nohz_full. Otherwise, we may run into the same situation on those setups, do you agree? > >> free_bootmem_cpumask_var(non_housekeeping_mask); >> >> return 1; >> -- >> 2.27.0 >>
On Mon, Sep 21, 2020 at 11:16:51PM -0400, Nitesh Narayan Lal wrote: > > On 9/21/20 7:40 PM, Frederic Weisbecker wrote: > > On Wed, Sep 09, 2020 at 11:08:16AM -0400, Nitesh Narayan Lal wrote: > >> +/* > >> + * num_housekeeping_cpus() - Read the number of housekeeping CPUs. > >> + * > >> + * This function returns the number of available housekeeping CPUs > >> + * based on __num_housekeeping_cpus which is of type atomic_t > >> + * and is initialized at the time of the housekeeping setup. > >> + */ > >> +unsigned int num_housekeeping_cpus(void) > >> +{ > >> + unsigned int cpus; > >> + > >> + if (static_branch_unlikely(&housekeeping_overridden)) { > >> + cpus = atomic_read(&__num_housekeeping_cpus); > >> + /* We should always have at least one housekeeping CPU */ > >> + BUG_ON(!cpus); > >> + return cpus; > >> + } > >> + return num_online_cpus(); > >> +} > >> +EXPORT_SYMBOL_GPL(num_housekeeping_cpus); > >> + > >> int housekeeping_any_cpu(enum hk_flags flags) > >> { > >> int cpu; > >> @@ -131,6 +153,7 @@ static int __init housekeeping_setup(char *str, enum hk_flags flags) > >> > >> housekeeping_flags |= flags; > >> > >> + atomic_set(&__num_housekeeping_cpus, cpumask_weight(housekeeping_mask)); > > So the problem here is that it takes the whole cpumask weight but you're only > > interested in the housekeepers who take the managed irq duties I guess > > (HK_FLAG_MANAGED_IRQ ?). > > IMHO we should also consider the cases where we only have nohz_full. > Otherwise, we may run into the same situation on those setups, do you agree? I guess it's up to the user to gather the tick and managed irq housekeeping together? Of course that makes the implementation more complicated. But if this is called only on drivers initialization for now, this could be just a function that does: cpumask_weight(cpu_online_mask | housekeeping_cpumask(HK_FLAG_MANAGED_IRQ)) And then can we rename it to housekeeping_num_online()? Thanks. 
> > > >> free_bootmem_cpumask_var(non_housekeeping_mask); > >> > >> return 1; > >> -- > >> 2.27.0 > >> > -- > Thanks > Nitesh >
On 9/22/20 6:08 AM, Frederic Weisbecker wrote: > On Mon, Sep 21, 2020 at 11:16:51PM -0400, Nitesh Narayan Lal wrote: >> On 9/21/20 7:40 PM, Frederic Weisbecker wrote: >>> On Wed, Sep 09, 2020 at 11:08:16AM -0400, Nitesh Narayan Lal wrote: >>>> +/* >>>> + * num_housekeeping_cpus() - Read the number of housekeeping CPUs. >>>> + * >>>> + * This function returns the number of available housekeeping CPUs >>>> + * based on __num_housekeeping_cpus which is of type atomic_t >>>> + * and is initialized at the time of the housekeeping setup. >>>> + */ >>>> +unsigned int num_housekeeping_cpus(void) >>>> +{ >>>> + unsigned int cpus; >>>> + >>>> + if (static_branch_unlikely(&housekeeping_overridden)) { >>>> + cpus = atomic_read(&__num_housekeeping_cpus); >>>> + /* We should always have at least one housekeeping CPU */ >>>> + BUG_ON(!cpus); >>>> + return cpus; >>>> + } >>>> + return num_online_cpus(); >>>> +} >>>> +EXPORT_SYMBOL_GPL(num_housekeeping_cpus); >>>> + >>>> int housekeeping_any_cpu(enum hk_flags flags) >>>> { >>>> int cpu; >>>> @@ -131,6 +153,7 @@ static int __init housekeeping_setup(char *str, enum hk_flags flags) >>>> >>>> housekeeping_flags |= flags; >>>> >>>> + atomic_set(&__num_housekeeping_cpus, cpumask_weight(housekeeping_mask)); >>> So the problem here is that it takes the whole cpumask weight but you're only >>> interested in the housekeepers who take the managed irq duties I guess >>> (HK_FLAG_MANAGED_IRQ ?). >> IMHO we should also consider the cases where we only have nohz_full. >> Otherwise, we may run into the same situation on those setups, do you agree? > I guess it's up to the user to gather the tick and managed irq housekeeping > together? TBH I don't have a very strong case here at the moment. But still, IMHO, this will force the user to have both managed irqs and nohz_full in their environments to avoid these kinds of issues. Is that how we would like to proceed? 
The reason why I want to get this clarity is that going forward for any RT related work I can form my thoughts based on this discussion. > > Of course that makes the implementation more complicated. But if this is > called only on drivers initialization for now, this could be just a function > that does: > > cpumask_weight(cpu_online_mask | housekeeping_cpumask(HK_FLAG_MANAGED_IRQ)) Ack, this makes more sense. > > And then can we rename it to housekeeping_num_online()? It could be just me, but does something like hk_num_online_cpus() makes more sense here? > > Thanks. > >>>> free_bootmem_cpumask_var(non_housekeeping_mask); >>>> >>>> return 1; >>>> -- >>>> 2.27.0 >>>> >> -- >> Thanks >> Nitesh >> > >
On 9/10/20 3:31 PM, Nitesh Narayan Lal wrote: > On 9/10/20 3:22 PM, Marcelo Tosatti wrote: >> On Wed, Sep 09, 2020 at 11:08:18AM -0400, Nitesh Narayan Lal wrote: >>> This patch limits the pci_alloc_irq_vectors max vectors that is passed on >>> by the caller based on the available housekeeping CPUs by only using the >>> minimum of the two. >>> >>> A minimum of the max_vecs passed and available housekeeping CPUs is >>> derived to ensure that we don't create excess vectors which can be >>> problematic specifically in an RT environment. This is because for an RT >>> environment unwanted IRQs are moved to the housekeeping CPUs from >>> isolated CPUs to keep the latency overhead to a minimum. If the number of >>> housekeeping CPUs are significantly lower than that of the isolated CPUs >>> we can run into failures while moving these IRQs to housekeeping due to >>> per CPU vector limit. >>> >>> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com> >>> --- >>> include/linux/pci.h | 16 ++++++++++++++++ >>> 1 file changed, 16 insertions(+) >>> >>> diff --git a/include/linux/pci.h b/include/linux/pci.h >>> index 835530605c0d..750ba927d963 100644 >>> --- a/include/linux/pci.h >>> +++ b/include/linux/pci.h >>> @@ -38,6 +38,7 @@ >>> #include <linux/interrupt.h> >>> #include <linux/io.h> >>> #include <linux/resource_ext.h> >>> +#include <linux/sched/isolation.h> >>> #include <uapi/linux/pci.h> >>> >>> #include <linux/pci_ids.h> >>> @@ -1797,6 +1798,21 @@ static inline int >>> pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs, >>> unsigned int max_vecs, unsigned int flags) >>> { >>> + unsigned int num_housekeeping = num_housekeeping_cpus(); >>> + unsigned int num_online = num_online_cpus(); >>> + >>> + /* >>> + * Try to be conservative and at max only ask for the same number of >>> + * vectors as there are housekeeping CPUs. However, skip any >>> + * modification to the of max vectors in two conditions: >>> + * 1. 
If the min_vecs requested are higher than that of the >>> + * housekeeping CPUs as we don't want to prevent the initialization >>> + * of a device. >>> + * 2. If there are no isolated CPUs as in this case the driver should >>> + * already have taken online CPUs into consideration. >>> + */ >>> + if (min_vecs < num_housekeeping && num_housekeeping != num_online) >>> + max_vecs = min_t(int, max_vecs, num_housekeeping); >>> return pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs, flags, >>> NULL); >>> } >> If min_vecs > num_housekeeping, for example: >> >> /* PCI MSI/MSIx support */ >> #define XGBE_MSI_BASE_COUNT 4 >> #define XGBE_MSI_MIN_COUNT (XGBE_MSI_BASE_COUNT + 1) >> >> Then the protection fails. > Right, I was ignoring that case. > >> How about reducing max_vecs down to min_vecs, if min_vecs > >> num_housekeeping ? > Yes, I think this makes sense. > I will wait a bit to see if anyone else has any other comment and will post > the next version then. > Are there any other comments/concerns on this patch that I need to address in the next posting? -- Nitesh
On Tue, Sep 22, 2020 at 09:50:55AM -0400, Nitesh Narayan Lal wrote: > On 9/22/20 6:08 AM, Frederic Weisbecker wrote: > TBH I don't have a very strong case here at the moment. > But still, IMHO, this will force the user to have both managed irqs and > nohz_full in their environments to avoid these kinds of issues. Is that how > we would like to proceed? Yep that sounds good to me. I never know how much we want to split each and any of the isolation features but I'd rather stay cautious to separate HK_FLAG_TICK from the rest, just in case running in nohz_full mode ever becomes interesting alone for performance and not just latency/isolation. But look what you can do as well: diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c index 5a6ea03f9882..9df9598a9e39 100644 --- a/kernel/sched/isolation.c +++ b/kernel/sched/isolation.c @@ -141,7 +141,7 @@ static int __init housekeeping_nohz_full_setup(char *str) unsigned int flags; flags = HK_FLAG_TICK | HK_FLAG_WQ | HK_FLAG_TIMER | HK_FLAG_RCU | - HK_FLAG_MISC | HK_FLAG_KTHREAD; + HK_FLAG_MISC | HK_FLAG_KTHREAD | HK_FLAG_MANAGED_IRQ; return housekeeping_setup(str, flags); } "nohz_full=" has historically gathered most wanted isolation features. It can as well isolate managed irqs. > > And then can we rename it to housekeeping_num_online()? > > It could be just me, but does something like hk_num_online_cpus() makes more > sense here? Sure, that works as well. Thanks.
On Tue, Sep 22, 2020 at 09:54:58AM -0400, Nitesh Narayan Lal wrote: > >> If min_vecs > num_housekeeping, for example: > >> > >> /* PCI MSI/MSIx support */ > >> #define XGBE_MSI_BASE_COUNT 4 > >> #define XGBE_MSI_MIN_COUNT (XGBE_MSI_BASE_COUNT + 1) > >> > >> Then the protection fails. > > Right, I was ignoring that case. > > > >> How about reducing max_vecs down to min_vecs, if min_vecs > > >> num_housekeeping ? > > Yes, I think this makes sense. > > I will wait a bit to see if anyone else has any other comment and will post > > the next version then. > > > > Are there any other comments/concerns on this patch that I need to address in > the next posting? No objection from me, I don't know much about this area anyway. > -- > Nitesh >
On 9/22/20 4:58 PM, Frederic Weisbecker wrote: > On Tue, Sep 22, 2020 at 09:50:55AM -0400, Nitesh Narayan Lal wrote: >> On 9/22/20 6:08 AM, Frederic Weisbecker wrote: >> TBH I don't have a very strong case here at the moment. >> But still, IMHO, this will force the user to have both managed irqs and >> nohz_full in their environments to avoid these kinds of issues. Is that how >> we would like to proceed? > Yep that sounds good to me. I never know how much we want to split each and any > of the isolation features but I'd rather stay cautious to separate HK_FLAG_TICK > from the rest, just in case running in nohz_full mode ever becomes interesting > alone for performance and not just latency/isolation. Fair point. > > But look what you can do as well: > > diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c > index 5a6ea03f9882..9df9598a9e39 100644 > --- a/kernel/sched/isolation.c > +++ b/kernel/sched/isolation.c > @@ -141,7 +141,7 @@ static int __init housekeeping_nohz_full_setup(char *str) > unsigned int flags; > > flags = HK_FLAG_TICK | HK_FLAG_WQ | HK_FLAG_TIMER | HK_FLAG_RCU | > - HK_FLAG_MISC | HK_FLAG_KTHREAD; > + HK_FLAG_MISC | HK_FLAG_KTHREAD | HK_FLAG_MANAGED_IRQ; > > return housekeeping_setup(str, flags); > } > > > "nohz_full=" has historically gathered most wanted isolation features. It can > as well isolate managed irqs. Nice, yeap this will work. > > >>> And then can we rename it to housekeeping_num_online()? >> It could be just me, but does something like hk_num_online_cpus() makes more >> sense here? > Sure, that works as well. Thanks a lot for all the help. > > Thanks. > -- Nitesh
> Subject: Re: [RFC][Patch v1 1/3] sched/isolation: API to get num of hosekeeping CPUs
Hosekeeping? Are these CPUs out gardening in the weeds?
Andrew
On 9/22/20 5:26 PM, Andrew Lunn wrote: >> Subject: Re: [RFC][Patch v1 1/3] sched/isolation: API to get num of hosekeeping CPUs > Hosekeeping? Are these CPUs out gardening in the weeds? Bjorn has already highlighted the typo, so I will be fixing it in the next version. Do you find the commit message and body of this patch unclear? > > Andrew > -- Nitesh