[5.4,097/122] xfrm: policy: Read seqcount outside of rcu-read side in xfrm_policy_lookup_bytype

Message ID: 20210715182517.994942248@linuxfoundation.org
State: Superseded
Series: None

Commit Message

Greg Kroah-Hartman July 15, 2021, 6:39 p.m. UTC
From: Varad Gautam <varad.gautam@suse.com>

commit d7b0408934c749f546b01f2b33d07421a49b6f3e upstream.

xfrm_policy_lookup_bytype loops on the seqcount_mutex_t
xfrm_policy_hash_generation within an RCU read-side critical section.
Although ill-advised, this is fine as long as the loop is bounded.

xfrm_policy_hash_generation has hash_resize_mutex as its associated lock;
the mutex is used to serialize the writers (xfrm_hash_resize,
xfrm_hash_rebuild). This is fine too.

On PREEMPT_RT=y, the read_seqcount_begin call within
xfrm_policy_lookup_bytype emits a mutex lock/unlock for hash_resize_mutex.
Mutex locking is fine, since RCU read-side critical sections are allowed to
sleep under PREEMPT_RT.
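
For orientation: in mainline (after commit 77cc278f7b20, referenced in the
Fixes tag below), xfrm_policy_hash_generation is a seqcount_mutex_t with
hash_resize_mutex registered as its associated lock. A minimal sketch of
that association (illustrative only; policy_hash_seqcount_init() is a
hypothetical name, and the exact declarations in net/xfrm/xfrm_policy.c
may differ):

#include <linux/mutex.h>
#include <linux/seqlock.h>

/* Writers (xfrm_hash_resize, xfrm_hash_rebuild) serialize on this mutex. */
static DEFINE_MUTEX(hash_resize_mutex);

/*
 * Sequence counter with hash_resize_mutex as its associated lock.  On
 * PREEMPT_RT, read_seqcount_begin() on such a counter may lock/unlock the
 * associated mutex instead of spinning on the sequence value.
 */
static seqcount_mutex_t xfrm_policy_hash_generation;

static void policy_hash_seqcount_init(void)
{
	seqcount_mutex_init(&xfrm_policy_hash_generation, &hash_resize_mutex);
}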

xfrm_hash_resize can, however, block on synchronize_rcu while holding
hash_resize_mutex.

This leads to the following situation on PREEMPT_RT, where the writer is
blocked on RCU grace period expiry, while the reader is blocked on a lock held
by the writer:

Thread 1 (xfrm_hash_resize)	Thread 2 (xfrm_policy_lookup_bytype)

				rcu_read_lock();
mutex_lock(&hash_resize_mutex);
				read_seqcount_begin(&xfrm_policy_hash_generation);
				mutex_lock(&hash_resize_mutex); // block
xfrm_bydst_resize();
synchronize_rcu(); // block
		<RCU stalls in xfrm_policy_lookup_bytype>
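
In code form, the writer side of that timeline corresponds roughly to the
sketch below (hedged: writer_side_sketch is an illustrative name, and the
per-netns policy lock, the actual rehash, and error handling are elided):

static void writer_side_sketch(void)
{
	mutex_lock(&hash_resize_mutex);

	/*
	 * xfrm_bydst_resize(): publish the resized bydst tables under the
	 * sequence counter so that lockless readers can detect the switch.
	 */
	write_seqcount_begin(&xfrm_policy_hash_generation);
	/* ... allocate new tables, rehash, rcu_assign_pointer() them in ... */
	write_seqcount_end(&xfrm_policy_hash_generation);

	/*
	 * Wait for pre-existing RCU readers before freeing the old tables.
	 * This is the step that blocks while hash_resize_mutex is held.
	 */
	synchronize_rcu();
	/* ... free the old tables ... */

	mutex_unlock(&hash_resize_mutex);
}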

Move the read_seqcount_begin call outside of the RCU read-side critical
section, and do an rcu_read_unlock/retry if we got stale data within the
critical section.

On non-PREEMPT_RT, this shortens the time spent within the RCU read-side
critical section in case the seqcount needs a retry, and avoids unbounded
looping.
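
For reference, the reader path after this change, condensed from the hunks
below into one contiguous snippet (the hash chain walk and the inexact
lookup between the two sequence checks are elided):

 retry:
	sequence = read_seqcount_begin(&xfrm_policy_hash_generation);
	rcu_read_lock();

	chain = policy_hash_direct(net, daddr, saddr, family, dir);
	if (read_seqcount_retry(&xfrm_policy_hash_generation, sequence)) {
		rcu_read_unlock();
		goto retry;
	}

	/* ... walk "chain" and the inexact policies, setting "ret" ... */

skip_inexact:
	if (read_seqcount_retry(&xfrm_policy_hash_generation, sequence)) {
		rcu_read_unlock();
		goto retry;
	}

	if (ret && !xfrm_pol_hold_rcu(ret)) {
		rcu_read_unlock();
		goto retry;
	}
fail:
	rcu_read_unlock();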

Fixes: 77cc278f7b20 ("xfrm: policy: Use sequence counters with associated lock")
Signed-off-by: Varad Gautam <varad.gautam@suse.com>
Cc: linux-rt-users <linux-rt-users@vger.kernel.org>
Cc: netdev@vger.kernel.org
Cc: stable@vger.kernel.org # v4.9
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Florian Westphal <fw@strlen.de>
Cc: "Ahmed S. Darwish" <a.darwish@linutronix.de>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Acked-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/xfrm/xfrm_policy.c |   21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

Comments

Florian Westphal July 15, 2021, 6:54 p.m. UTC | #1
Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:
> From: Varad Gautam <varad.gautam@suse.com>
> 
> commit d7b0408934c749f546b01f2b33d07421a49b6f3e upstream.

This patch has been reverted in the ipsec tree; the problem was then
addressed via 2580d3f40022642452dd8422bfb8c22e54cf84bb
("xfrm: Fix RCU vs hash_resize_mutex lock inversion").

AFAICS it's not in mainline yet.
Greg Kroah-Hartman July 16, 2021, 5:47 p.m. UTC | #2
On Thu, Jul 15, 2021 at 08:54:47PM +0200, Florian Westphal wrote:
> Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:
> > From: Varad Gautam <varad.gautam@suse.com>
> > 
> > commit d7b0408934c749f546b01f2b33d07421a49b6f3e upstream.
> 
> This patch has been reverted in the ipsec tree; the problem was then
> addressed via 2580d3f40022642452dd8422bfb8c22e54cf84bb
> ("xfrm: Fix RCU vs hash_resize_mutex lock inversion").
> 
> AFAICS it's not in mainline yet.

Thank you, I have now dropped this from everywhere.

greg k-h

Patch

--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -2089,12 +2089,15 @@  static struct xfrm_policy *xfrm_policy_l
 	if (unlikely(!daddr || !saddr))
 		return NULL;
 
-	rcu_read_lock();
  retry:
-	do {
-		sequence = read_seqcount_begin(&xfrm_policy_hash_generation);
-		chain = policy_hash_direct(net, daddr, saddr, family, dir);
-	} while (read_seqcount_retry(&xfrm_policy_hash_generation, sequence));
+	sequence = read_seqcount_begin(&xfrm_policy_hash_generation);
+	rcu_read_lock();
+
+	chain = policy_hash_direct(net, daddr, saddr, family, dir);
+	if (read_seqcount_retry(&xfrm_policy_hash_generation, sequence)) {
+		rcu_read_unlock();
+		goto retry;
+	}
 
 	ret = NULL;
 	hlist_for_each_entry_rcu(pol, chain, bydst) {
@@ -2125,11 +2128,15 @@  static struct xfrm_policy *xfrm_policy_l
 	}
 
 skip_inexact:
-	if (read_seqcount_retry(&xfrm_policy_hash_generation, sequence))
+	if (read_seqcount_retry(&xfrm_policy_hash_generation, sequence)) {
+		rcu_read_unlock();
 		goto retry;
+	}
 
-	if (ret && !xfrm_pol_hold_rcu(ret))
+	if (ret && !xfrm_pol_hold_rcu(ret)) {
+		rcu_read_unlock();
 		goto retry;
+	}
 fail:
 	rcu_read_unlock();