
[net,2/2] net: ipa: prevent concurrent replenish

Message ID 20220111192150.379274-3-elder@linaro.org
State New
Series net: ipa: fix two replenish bugs

Commit Message

Alex Elder Jan. 11, 2022, 7:21 p.m. UTC
We have seen cases where an endpoint RX completion interrupt arrives
while replenishing for the endpoint is underway.  This causes another
instance of replenishing to begin as part of completing the receive
transaction.  If this occurs it can lead to transaction corruption.

Use a new atomic variable to ensure that only one replenish instance
for an endpoint executes at a time.
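
In outline, the guard works like this.  The following is a minimal
userspace sketch of the pattern using C11 atomics in place of the
kernel's atomic_t API; the names here (replenish(), dec_not_zero(),
and so on) are illustrative stand-ins for the driver's own helpers,
not its actual code:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int replenish_active;	/* 1 while a replenish loop runs */
static atomic_int replenish_backlog;	/* buffers the hardware still needs */

/* Stand-in for ipa_endpoint_replenish_one(); false would mean failure */
static bool replenish_one(void)
{
	return true;
}

/* Decrement *v unless it is already zero, like atomic_dec_not_zero() */
static bool dec_not_zero(atomic_int *v)
{
	int old = atomic_load(v);

	while (old > 0)
		if (atomic_compare_exchange_weak(v, &old, old - 1))
			return true;

	return false;
}

static void replenish(bool add_one)
{
	/* Another instance is active; just record the new request */
	if (atomic_exchange(&replenish_active, 1)) {
		if (add_one)
			atomic_fetch_add(&replenish_backlog, 1);
		return;
	}

	while (dec_not_zero(&replenish_backlog)) {
		if (!replenish_one()) {
			/* Drop the guard, then put back what we took
			 * plus any new request; the real driver
			 * schedules a delayed retry at this point
			 */
			atomic_store(&replenish_active, 0);
			atomic_fetch_add(&replenish_backlog,
					 add_one ? 2 : 1);
			return;
		}
	}

	atomic_store(&replenish_active, 0);
	if (add_one)
		atomic_fetch_add(&replenish_backlog, 1);
}

int main(void)
{
	atomic_store(&replenish_backlog, 4);
	replenish(true);	/* drains the backlog, then adds one */
	return 0;
}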

Fixes: 84f9bd12d46db ("soc: qcom: ipa: IPA endpoints")
Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_endpoint.c | 13 +++++++++++++
 drivers/net/ipa/ipa_endpoint.h |  2 ++
 2 files changed, 15 insertions(+)

Comments

Alex Elder Jan. 12, 2022, 1:16 p.m. UTC | #1
On 1/11/22 10:04 PM, Jakub Kicinski wrote:
> On Tue, 11 Jan 2022 13:21:50 -0600 Alex Elder wrote:
>> Use a new atomic variable to ensure that only one replenish instance
>> for an endpoint executes at a time.
> 
> Why atomic_t? test_and_set_bit() + clear_bit() should do nicely here?

I think it foreshadows the replenish logic improvements I'm
experimenting with.  Bit operations are probably the best way
to represent Booleans, though, so I'll send a version 2 that
adds and uses a bitmask instead.

Thanks.

					-Alex
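
For reference, a minimal sketch of the test_and_set_bit()/clear_bit()
pattern being discussed.  This is kernel-only code, and the enum, the
flag names, and the bitmap field are assumptions about what such a v2
might look like, not code from any posted patch:

#include <linux/bitops.h>

/* Flag names and layout are guesses, not the driver's definitions */
enum ipa_replenish_flag {
	IPA_REPLENISH_ENABLED,		/* replenishing is allowed */
	IPA_REPLENISH_ACTIVE,		/* replenishing is underway */
	IPA_REPLENISH_COUNT,		/* number of defined flags */
};

struct endpoint_sketch {
	DECLARE_BITMAP(replenish_flags, IPA_REPLENISH_COUNT);
};

static void replenish_guard(struct endpoint_sketch *endpoint)
{
	/* test_and_set_bit() returns the bit's previous value, so
	 * only one caller at a time gets past this check
	 */
	if (test_and_set_bit(IPA_REPLENISH_ACTIVE,
			     endpoint->replenish_flags))
		return;		/* another instance is already running */

	/* ... the replenish loop would run here ... */

	clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags);
}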

Patch

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 8b055885cf3cf..a1019f5fe1748 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1088,15 +1088,27 @@  static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
 		return;
 	}
 
+	/* If already active, just update the backlog */
+	if (atomic_xchg(&endpoint->replenish_active, 1)) {
+		if (add_one)
+			atomic_inc(&endpoint->replenish_backlog);
+		return;
+	}
+
 	while (atomic_dec_not_zero(&endpoint->replenish_backlog))
 		if (ipa_endpoint_replenish_one(endpoint))
 			goto try_again_later;
+
+	atomic_set(&endpoint->replenish_active, 0);
+
 	if (add_one)
 		atomic_inc(&endpoint->replenish_backlog);
 
 	return;
 
 try_again_later:
+	atomic_set(&endpoint->replenish_active, 0);
+
 	/* The last one didn't succeed, so fix the backlog */
 	delta = add_one ? 2 : 1;
 	backlog = atomic_add_return(delta, &endpoint->replenish_backlog);
@@ -1691,6 +1703,7 @@  static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
 		 * backlog is the same as the maximum outstanding TREs.
 		 */
 		endpoint->replenish_enabled = false;
+		atomic_set(&endpoint->replenish_active, 0);
 		atomic_set(&endpoint->replenish_saved,
 			   gsi_channel_tre_max(gsi, endpoint->channel_id));
 		atomic_set(&endpoint->replenish_backlog, 0);
diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h
index 0a859d10312dc..200f093214997 100644
--- a/drivers/net/ipa/ipa_endpoint.h
+++ b/drivers/net/ipa/ipa_endpoint.h
@@ -53,6 +53,7 @@  enum ipa_endpoint_name {
  * @netdev:		Network device pointer, if endpoint uses one
  * @replenish_enabled:	Whether receive buffer replenishing is enabled
  * @replenish_ready:	Number of replenish transactions without doorbell
+ * @replenish_active:	1 when replenishing is active, 0 otherwise
  * @replenish_saved:	Replenish requests held while disabled
  * @replenish_backlog:	Number of buffers needed to fill hardware queue
  * @replenish_work:	Work item used for repeated replenish failures
@@ -74,6 +75,7 @@  struct ipa_endpoint {
 	/* Receive buffer replenishing for RX endpoints */
 	bool replenish_enabled;
 	u32 replenish_ready;
+	atomic_t replenish_active;
 	atomic_t replenish_saved;
 	atomic_t replenish_backlog;
 	struct delayed_work replenish_work;		/* global wq */