Message ID | 1605927976-232804-2-git-send-email-linyunsheng@huawei.com
---|---
State | New
Series | [net-next,v2,1/2] lockdep: Introduce in_softirq lockdep assert
On Sat, Nov 21, 2020 at 11:06:15AM +0800, Yunsheng Lin wrote:
> The current semantic for napi_consume_skb() is that the caller needs
> to provide a non-zero budget when calling from NAPI context, and
> breaking this semantic causes hard-to-debug problems, because
> __kfree_skb_defer() needs to run in atomic context in order to push
> the skb onto the particular CPU's napi_alloc_cache atomically.
>
> So add lockdep_assert_in_softirq() to assert when the running
> context is not in_softirq(); in_softirq() means a softirq is being
> served or BH is disabled. Because the softirq context can be
> interrupted by hard IRQ or NMI context, lockdep_assert_in_softirq()
> needs to assert about hard IRQ and NMI context too.
>
> Suggested-by: Jakub Kicinski <kuba@kernel.org>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
>  include/linux/lockdep.h | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> index f559487..f5e3d81 100644
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -594,6 +594,12 @@ do {						\
> 		      this_cpu_read(hardirqs_enabled)));	\
> } while (0)

Due to in_softirq() having a deprecation notice (due to it being
awfully ambiguous), could we have a nice big comment here that explains
in detail, understandable to !network people (me), why this is actually
correct?

I'm not opposed to the thing; if that is what you need, it's fine, but
please put on a comment that explains that in_softirq() is ambiguous and
when you really do need it anyway.
> +#define lockdep_assert_in_softirq()				\
> +do {								\
> +	WARN_ON_ONCE(__lockdep_enabled &&			\
> +		     (!in_softirq() || in_irq() || in_nmi()));	\
> +} while (0)
> +
>  #else
>  # define might_lock(lock) do { } while (0)
>  # define might_lock_read(lock) do { } while (0)
> @@ -605,6 +611,7 @@ do {						\
>
>  # define lockdep_assert_preemption_enabled() do { } while (0)
>  # define lockdep_assert_preemption_disabled() do { } while (0)
> +# define lockdep_assert_in_softirq() do { } while (0)
>  #endif
>
>  #ifdef CONFIG_PROVE_RAW_LOCK_NESTING
> --
> 2.8.1
>
On Mon, 23 Nov 2020 15:27:25 +0100 Peter Zijlstra wrote:
> On Sat, Nov 21, 2020 at 11:06:15AM +0800, Yunsheng Lin wrote:
> > The current semantic for napi_consume_skb() is that the caller needs
> > to provide a non-zero budget when calling from NAPI context, and
> > breaking this semantic causes hard-to-debug problems, because
> > __kfree_skb_defer() needs to run in atomic context in order to push
> > the skb onto the particular CPU's napi_alloc_cache atomically.
> >
> > So add lockdep_assert_in_softirq() to assert when the running
> > context is not in_softirq(); in_softirq() means a softirq is being
> > served or BH is disabled. Because the softirq context can be
> > interrupted by hard IRQ or NMI context, lockdep_assert_in_softirq()
> > needs to assert about hard IRQ and NMI context too.
>
> Due to in_softirq() having a deprecation notice (due to it being
> awfully ambiguous), could we have a nice big comment here that explains
> in detail, understandable to !network people (me), why this is actually
> correct?
>
> I'm not opposed to the thing; if that is what you need, it's fine, but
> please put on a comment that explains that in_softirq() is ambiguous and
> when you really do need it anyway.

One liner would be:

 * Acceptable for protecting per-CPU resources accessed from BH

We can add:

 * Much like in_softirq() - semantics are ambiguous, use carefully.
IIUC we basically want to protect the nc array and counter here:

static inline void __kfree_skb_defer(struct sk_buff *skb)
{
	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);

	/* drop skb->head and call any destructors for packet */
	skb_release_all(skb);

	/* record skb to CPU local list */
	nc->skb_cache[nc->skb_count++] = skb;

#ifdef CONFIG_SLUB
	/* SLUB writes into objects when freeing */
	prefetchw(skb);
#endif

	/* flush skb_cache if it is filled */
	if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
		kmem_cache_free_bulk(skbuff_head_cache, NAPI_SKB_CACHE_SIZE,
				     nc->skb_cache);
		nc->skb_count = 0;
	}
}

> > +#define lockdep_assert_in_softirq()				\
> > +do {								\
> > +	WARN_ON_ONCE(__lockdep_enabled &&			\
> > +		     (!in_softirq() || in_irq() || in_nmi()));	\
> > +} while (0)
On Mon, Nov 23, 2020 at 12:12:59PM -0800, Jakub Kicinski wrote:
> One liner would be:
>
>  * Acceptable for protecting per-CPU resources accessed from BH
>
> We can add:
>
>  * Much like in_softirq() - semantics are ambiguous, use carefully.
>
> IIUC we basically want to protect the nc array and counter here:

Works for me, thanks!
On 2020/11/24 16:11, Peter Zijlstra wrote:
> On Mon, Nov 23, 2020 at 12:12:59PM -0800, Jakub Kicinski wrote:
>> One liner would be:
>>
>>  * Acceptable for protecting per-CPU resources accessed from BH
>>
>> We can add:
>>
>>  * Much like in_softirq() - semantics are ambiguous, use carefully.
>>
>> IIUC we basically want to protect the nc array and counter here:
>
> Works for me, thanks!

Will add the above comment in v3. Thanks.
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index f559487..f5e3d81 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -594,6 +594,12 @@ do {						\
 		      this_cpu_read(hardirqs_enabled)));	\
 } while (0)
 
+#define lockdep_assert_in_softirq()				\
+do {								\
+	WARN_ON_ONCE(__lockdep_enabled &&			\
+		     (!in_softirq() || in_irq() || in_nmi()));	\
+} while (0)
+
 #else
 # define might_lock(lock) do { } while (0)
 # define might_lock_read(lock) do { } while (0)
@@ -605,6 +611,7 @@ do {						\
 
 # define lockdep_assert_preemption_enabled() do { } while (0)
 # define lockdep_assert_preemption_disabled() do { } while (0)
+# define lockdep_assert_in_softirq() do { } while (0)
 #endif
 
 #ifdef CONFIG_PROVE_RAW_LOCK_NESTING
The current semantic for napi_consume_skb() is that the caller needs
to provide a non-zero budget when calling from NAPI context, and
breaking this semantic causes hard-to-debug problems, because
__kfree_skb_defer() needs to run in atomic context in order to push
the skb onto the particular CPU's napi_alloc_cache atomically.

So add lockdep_assert_in_softirq() to assert when the running
context is not in_softirq(); in_softirq() means a softirq is being
served or BH is disabled. Because the softirq context can be
interrupted by hard IRQ or NMI context, lockdep_assert_in_softirq()
needs to assert about hard IRQ and NMI context too.

Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 include/linux/lockdep.h | 7 +++++++
 1 file changed, 7 insertions(+)