
[4.4] netfilter: x_tables: Use correct memory barriers.

Message ID 20210509082436.GA25504@amd
State New
Series [4.4] netfilter: x_tables: Use correct memory barriers.

Commit Message

Pavel Machek May 9, 2021, 8:24 a.m. UTC
From: Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz>

commit 175e476b8cdf2a4de7432583b49c871345e4f8a1 upstream.

When a new table value was assigned, it was followed by a write memory
barrier. This ensured that all writes before this point would complete
before any writes after this point. However, to determine whether the
rules are unused, the sequence counter is read. To ensure that all
writes have been done before these reads, a full memory barrier is
needed, not just a write memory barrier. The same argument applies when
incrementing the counter, before the rules are read.
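
To make the required ordering concrete, here is a minimal illustrative
sketch of the two sides (not the literal kernel code; read_cpu_recseq()
is a placeholder, not a real kernel helper):

        /* packet path, cf. xt_write_recseq_begin() and its caller */
        __this_cpu_add(xt_recseq.sequence, addend); /* store: this CPU entered the rules */
        smp_mb();                                   /* must order the store before the load below */
        private = table->private;                   /* load: fetch the rule blob to walk */

        /* table replacement, cf. xt_replace_table() */
        table->private = newinfo;                   /* store: publish the new rule blob */
        smp_mb();                                   /* must order the store before the loads below */
        seq = read_cpu_recseq(cpu);                 /* load: has that CPU left the old rules? */

smp_wmb() orders only stores against later stores, so it cannot keep
either of those later loads from being performed before the preceding
store is visible to other CPUs; smp_mb() can.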

Changing to smp_mb() instead of smp_wmb() fixes the kernel panic
reported in cc00bcaa5899 (which is still present), while still
maintaining the same speed of replacing tables.

The smp_mb() barriers potentially slow the packet path; however, testing
has shown no measurable change in performance on a 4-core MIPS64
platform.

Fixes: 7f5c6d4f665b ("netfilter: get rid of atomic ops in fast path")
Signed-off-by: Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
[Ported to stable, affected barrier is added by d3d40f237480abf3268956daf18cdc56edd32834 in mainline]
Signed-off-by: Pavel Machek (CIP) <pavel@denx.de>
---
 include/linux/netfilter/x_tables.h | 2 +-
 net/netfilter/x_tables.c           | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

Comments

Greg KH May 10, 2021, 7:52 a.m. UTC | #1
On Sun, May 09, 2021 at 10:24:36AM +0200, Pavel Machek wrote:
> 
> From: Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz>
> 
> commit 175e476b8cdf2a4de7432583b49c871345e4f8a1 upstream.
> 
> When a new table value was assigned, it was followed by a write memory
> barrier. This ensured that all writes before this point would complete
> before any writes after this point. However, to determine whether the
> rules are unused, the sequence counter is read. To ensure that all
> writes have been done before these reads, a full memory barrier is
> needed, not just a write memory barrier. The same argument applies when
> incrementing the counter, before the rules are read.
> 
> > Changing to smp_mb() instead of smp_wmb() fixes the kernel panic
> reported in cc00bcaa5899 (which is still present), while still
> maintaining the same speed of replacing tables.
> 
> > The smp_mb() barriers potentially slow the packet path; however, testing
> has shown no measurable change in performance on a 4-core MIPS64
> platform.
> 
> Fixes: 7f5c6d4f665b ("netfilter: get rid of atomic ops in fast path")
> Signed-off-by: Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz>
> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
> [Ported to stable, affected barrier is added by d3d40f237480abf3268956daf18cdc56edd32834 in mainline]
> Signed-off-by: Pavel Machek (CIP) <pavel@denx.de>
> ---
>  include/linux/netfilter/x_tables.h | 2 +-
>  net/netfilter/x_tables.c           | 3 +++
>  2 files changed, 4 insertions(+), 1 deletion(-)

What about 4.14 and 4.9?  I can't take patches only for 4.4 that are not
also in newer releases.

thanks,

greg k-h
Nobuhiro Iwamatsu May 20, 2021, 8:04 a.m. UTC | #2
Hi,

On Mon, May 10, 2021 at 09:52:06AM +0200, Greg KH wrote:
> On Sun, May 09, 2021 at 10:24:36AM +0200, Pavel Machek wrote:
> > 
> > From: Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz>
> > 
> > commit 175e476b8cdf2a4de7432583b49c871345e4f8a1 upstream.
> > 
> > When a new table value was assigned, it was followed by a write memory
> > barrier. This ensured that all writes before this point would complete
> > before any writes after this point. However, to determine whether the
> > rules are unused, the sequence counter is read. To ensure that all
> > writes have been done before these reads, a full memory barrier is
> > needed, not just a write memory barrier. The same argument applies when
> > incrementing the counter, before the rules are read.
> > 
> > Changing to smp_mb() instead of smp_wmb() fixes the kernel panic
> > reported in cc00bcaa5899 (which is still present), while still
> > maintaining the same speed of replacing tables.
> > 
> > The smp_mb() barriers potentially slow the packet path; however, testing
> > has shown no measurable change in performance on a 4-core MIPS64
> > platform.
> > 
> > Fixes: 7f5c6d4f665b ("netfilter: get rid of atomic ops in fast path")
> > Signed-off-by: Mark Tomlinson <mark.tomlinson@alliedtelesis.co.nz>
> > Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
> > [Ported to stable, affected barrier is added by d3d40f237480abf3268956daf18cdc56edd32834 in mainline]
> > Signed-off-by: Pavel Machek (CIP) <pavel@denx.de>
> > ---
> >  include/linux/netfilter/x_tables.h | 2 +-
> >  net/netfilter/x_tables.c           | 3 +++
> >  2 files changed, 4 insertions(+), 1 deletion(-)
> 
> What about 4.14 and 4.9?  I can't take patches only for 4.4 that are not
> also in newer releases.

I have confirmed that this patch can also be applied to 4.9 and 4.14.
Do I need to resend this patch?

> 
> thanks,
> 
> greg k-h

Best regards,
  Nobuhiro

Patch

diff --git a/include/linux/netfilter/x_tables.h b/include/linux/netfilter/x_tables.h
index 6923e4049de3..304b60b49526 100644
--- a/include/linux/netfilter/x_tables.h
+++ b/include/linux/netfilter/x_tables.h
@@ -327,7 +327,7 @@  static inline unsigned int xt_write_recseq_begin(void)
 	 * since addend is most likely 1
 	 */
 	__this_cpu_add(xt_recseq.sequence, addend);
-	smp_wmb();
+	smp_mb();
 
 	return addend;
 }
diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
index 8caae1c5d93d..8878f859c6de 100644
--- a/net/netfilter/x_tables.c
+++ b/net/netfilter/x_tables.c
@@ -1146,6 +1146,9 @@  xt_replace_table(struct xt_table *table,
 	smp_wmb();
 	table->private = newinfo;
 
+	/* make sure all cpus see new ->private value */
+	smp_mb();
+
 	/*
 	 * Even though table entries have now been swapped, other CPU's
 	 * may still be using the old entries. This is okay, because
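
For context, the barrier change in xt_write_recseq_begin() matters
because of what the packet path does immediately afterwards. A
simplified sketch of the caller (cf. ipt_do_table(); illustrative, not
the literal 4.4 source):

        unsigned int addend;
        const struct xt_table_info *private;

        local_bh_disable();
        addend = xt_write_recseq_begin();   /* per-CPU seqcount store, then smp_mb() */
        private = table->private;           /* this load must not be hoisted above the store */

        /* ... walk private->entries and evaluate the rules ... */

        xt_write_recseq_end(addend);        /* this CPU is done with the rules */
        local_bh_enable();

The loads that the new smp_mb() in xt_replace_table() pairs with are the
per-CPU sequence-counter reads on the control-path side: in this 4.4
tree those reads happen when the old counters are harvested
(get_counters() in ip_tables.c and friends), while in mainline the
equivalent reads sit in the "wait for even xt_recseq" loop restored by
d3d40f237480. A rough, illustrative sketch of that read side (not
verbatim from either tree):

        unsigned int cpu, start;

        for_each_possible_cpu(cpu) {
                seqcount_t *s = &per_cpu(xt_recseq, cpu);

                do {
                        start = read_seqcount_begin(s); /* sample that CPU's counter */
                        /* ... read that CPU's rule counters / decide whether it
                         * may still be walking the old rules ...
                         */
                } while (read_seqcount_retry(s, start));
        }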