
[net-next,7/7] net: ipa: allow arbitrary number of interconnects

Message ID 20210115125050.20555-8-elder@linaro.org
State New
Series net: ipa: interconnect improvements

Commit Message

Alex Elder Jan. 15, 2021, 12:50 p.m. UTC
Currently we assume that the IPA hardware has exactly three
interconnects.  But that won't be guaranteed for all platforms,
so allow any number of interconnects to be specified in the
configuration data.

For each platform, define an array of interconnect data entries
(still associated with the IPA clock structure), and record the
number of entries initialized in that array.

Loop over all entries in this array when initializing, enabling,
disabling, or tearing down the set of interconnects.

With this change we no longer need the ipa_interconnect_id
enumerated type, so get rid of it.

Signed-off-by: Alex Elder <elder@linaro.org>

---
 drivers/net/ipa/ipa_clock.c       | 113 +++++++++++++-----------------
 drivers/net/ipa/ipa_data-sc7180.c |  41 ++++++-----
 drivers/net/ipa/ipa_data-sdm845.c |  41 ++++++-----
 drivers/net/ipa/ipa_data.h        |  14 ++--
 4 files changed, 97 insertions(+), 112 deletions(-)

-- 
2.20.1

Comments

Jakub Kicinski Jan. 17, 2021, 3:12 a.m. UTC | #1
On Fri, 15 Jan 2021 06:50:50 -0600 Alex Elder wrote:
> Currently we assume that the IPA hardware has exactly three
> interconnects.  But that won't be guaranteed for all platforms,
> so allow any number of interconnects to be specified in the
> configuration data.
>
> For each platform, define an array of interconnect data entries
> (still associated with the IPA clock structure), and record the
> number of entries initialized in that array.
>
> Loop over all entries in this array when initializing, enabling,
> disabling, or tearing down the set of interconnects.
>
> With this change we no longer need the ipa_interconnect_id
> enumerated type, so get rid of it.

Okay, all the platforms supported as of the end of the series 
still have 3 interconnects, or there is no upstream user of 
this functionality, if you will. What's the story?
Alex Elder Jan. 17, 2021, 4:03 p.m. UTC | #2
On 1/16/21 9:12 PM, Jakub Kicinski wrote:
> On Fri, 15 Jan 2021 06:50:50 -0600 Alex Elder wrote:
>> Currently we assume that the IPA hardware has exactly three
>> interconnects.  But that won't be guaranteed for all platforms,
>> so allow any number of interconnects to be specified in the
>> configuration data.
>>
>> For each platform, define an array of interconnect data entries
>> (still associated with the IPA clock structure), and record the
>> number of entries initialized in that array.
>>
>> Loop over all entries in this array when initializing, enabling,
>> disabling, or tearing down the set of interconnects.
>>
>> With this change we no longer need the ipa_interconnect_id
>> enumerated type, so get rid of it.
>
> Okay, all the platforms supported as of the end of the series
> still have 3 interconnects, or there is no upstream user of
> this functionality, if you will. What's the story?

The short answer is that there is another version of IPA that
has four interconnects instead of three.  (A DTS file for it is
in "sdxprairie.dtsi" in the Qualcomm "downstream" code.)  I hope
to have that version supported this year, but it's not my top
priority right now.  Generalizing the interconnect definitions
as done in this series improves the driver, but you're right, it
is technically not required at this time.

And some more background:
The upstream IPA driver is derived from the Qualcomm downstream
code published at codeaurora.org.  The downstream driver is huge
(it's well over 100,000 lines of code) and it supports lots of
IPA versions and some features that are not present upstream.

In order to have any hope of getting upstream support for the
IPA hardware, the downstream driver functionality was reduced,
removing support for filtering, routing, and NAT.  I spent many
months refactoring and reworking that code to make it more
"upstreamable," and eventually posted the result for review.

Now that there is an upstream driver, a long term goal is to
add back functionality that got removed, matching the most
important features and hardware support found in the downstream
code.  So in cases like this, even though this feature is not
yet required, adding it now lays groundwork to make later work
easier.

Everything I do with the upstream IPA driver is aimed at in-tree
support for additional IPA features and hardware versions.  So
even if an improvement isn't required *now*, there is at least
a general plan to add support "soon" for something that will
need it.

Beyond even that, there are some things I intend to do that
will improve the driver, even if they aren't technically
required near-term.  For example, I'd like to dynamically
allocate endpoints based on what's needed, rather than
having the driver support any number of them up to a maximum
fixed at build time.

Probably a longer response than you needed, but I thought it
would help to provide more background.  Besides, you *did* ask
for "the story..."

					-Alex
Jakub Kicinski Jan. 18, 2021, 7:58 p.m. UTC | #3
On Sun, 17 Jan 2021 10:03:41 -0600 Alex Elder wrote:
> On 1/16/21 9:12 PM, Jakub Kicinski wrote:
> > On Fri, 15 Jan 2021 06:50:50 -0600 Alex Elder wrote:
> >> Currently we assume that the IPA hardware has exactly three
> >> interconnects.  But that won't be guaranteed for all platforms,
> >> so allow any number of interconnects to be specified in the
> >> configuration data.
> >>
> >> For each platform, define an array of interconnect data entries
> >> (still associated with the IPA clock structure), and record the
> >> number of entries initialized in that array.
> >>
> >> Loop over all entries in this array when initializing, enabling,
> >> disabling, or tearing down the set of interconnects.
> >>
> >> With this change we no longer need the ipa_interconnect_id
> >> enumerated type, so get rid of it.
> >
> > Okay, all the platforms supported as of the end of the series
> > still have 3 interconnects, or there is no upstream user of
> > this functionality, if you will. What's the story?
>
> The short answer is that there is another version of IPA that
> has four interconnects instead of three.  (A DTS file for it is
> in "sdxprairie.dtsi" in the Qualcomm "downstream" code.)  I hope
> to have that version supported this year, but it's not my top
> priority right now.  Generalizing the interconnect definitions
> as done in this series improves the driver, but you're right, it
> is technically not required at this time.
>
> And some more background:
> The upstream IPA driver is derived from the Qualcomm downstream
> code published at codeaurora.org.  The downstream driver is huge
> (it's well over 100,000 lines of code) and it supports lots of
> IPA versions and some features that are not present upstream.
>
> In order to have any hope of getting upstream support for the
> IPA hardware, the downstream driver functionality was reduced,
> removing support for filtering, routing, and NAT.  I spent many
> months refactoring and reworking that code to make it more
> "upstreamable," and eventually posted the result for review.
>
> Now that there is an upstream driver, a long term goal is to
> add back functionality that got removed, matching the most
> important features and hardware support found in the downstream
> code.  So in cases like this, even though this feature is not
> yet required, adding it now lays groundwork to make later work
> easier.
>
> Everything I do with the upstream IPA driver is aimed at in-tree
> support for additional IPA features and hardware versions.  So
> even if an improvement isn't required *now*, there is at least
> a general plan to add support "soon" for something that will
> need it.
>
> Beyond even that, there are some things I intend to do that
> will improve the driver, even if they aren't technically
> required near-term.  For example, I'd like to dynamically
> allocate endpoints based on what's needed, rather than
> having the driver support any number of them up to a maximum
> fixed at build time.
>
> Probably a longer response than you needed, but I thought it
> would help to provide more background.  Besides, you *did* ask
> for "the story..."

Thanks, I think I get it now.

But it does sound a little too much like aligning with the vendor
driver for the sake of aligning with the vendor driver. This makes 
the review for someone not familiar with the vendor driver hard, 
and raises questions like "is this really needed upstream or just
downstream / out-of-tree". Please try to reorient the work towards
implementing particular pieces of functionality upstream start to end. 

Applied, with a warning :)
Alex Elder Jan. 18, 2021, 8:27 p.m. UTC | #4
On 1/18/21 1:58 PM, Jakub Kicinski wrote:
> But it does sound a little too much like aligning with the vendor
> driver for the sake of aligning with the vendor driver. This makes
> the review for someone not familiar with the vendor driver hard,
> and raises questions like "is this really needed upstream or just
> downstream / out-of-tree". Please try to reorient the work towards
> implementing particular pieces of functionality upstream start to end.

I have lots of this kind of preparatory work "ready to go" and I'm
eager to get it upstream.  But I'll try to be more patient and wait
to send things out until something justifies the need, rather than
just doing it in support of a longer-term goal.

I would not consider what I'm doing "aligning with the vendor
driver," but maybe I misunderstand your meaning.  The upstream
driver is very different from Qualcomm's driver.  I'm basically
writing all new code now, using the Qualcomm code for reference.

Thank you very much for applying the patches.

					-Alex

Patch

diff --git a/drivers/net/ipa/ipa_clock.c b/drivers/net/ipa/ipa_clock.c
index fbe42106fc2a8..354675a643db5 100644
--- a/drivers/net/ipa/ipa_clock.c
+++ b/drivers/net/ipa/ipa_clock.c
@@ -47,13 +47,15 @@  struct ipa_interconnect {
  * @count:		Clocking reference count
  * @mutex:		Protects clock enable/disable
  * @core:		IPA core clock
+ * @interconnect_count:	Number of elements in interconnect[]
  * @interconnect:	Interconnect array
  */
 struct ipa_clock {
 	refcount_t count;
 	struct mutex mutex; /* protects clock enable/disable */
 	struct clk *core;
-	struct ipa_interconnect interconnect[IPA_INTERCONNECT_COUNT];
+	u32 interconnect_count;
+	struct ipa_interconnect *interconnect;
 };
 
 static int ipa_interconnect_init_one(struct device *dev,
@@ -90,31 +92,29 @@  static int ipa_interconnect_init(struct ipa_clock *clock, struct device *dev,
 				 const struct ipa_interconnect_data *data)
 {
 	struct ipa_interconnect *interconnect;
+	u32 count;
 	int ret;
 
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_MEMORY];
-	ret = ipa_interconnect_init_one(dev, interconnect, data++);
-	if (ret)
-		return ret;
+	count = clock->interconnect_count;
+	interconnect = kcalloc(count, sizeof(*interconnect), GFP_KERNEL);
+	if (!interconnect)
+		return -ENOMEM;
+	clock->interconnect = interconnect;
 
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_IMEM];
-	ret = ipa_interconnect_init_one(dev, interconnect, data++);
-	if (ret)
-		goto err_memory_path_put;
-
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_IMEM];
-	ret = ipa_interconnect_init_one(dev, interconnect, data++);
-	if (ret)
-		goto err_imem_path_put;
+	while (count--) {
+		ret = ipa_interconnect_init_one(dev, interconnect, data++);
+		if (ret)
+			goto out_unwind;
+		interconnect++;
+	}
 
 	return 0;
 
-err_imem_path_put:
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_IMEM];
-	ipa_interconnect_exit_one(interconnect);
-err_memory_path_put:
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_MEMORY];
-	ipa_interconnect_exit_one(interconnect);
+out_unwind:
+	while (interconnect-- > clock->interconnect)
+		ipa_interconnect_exit_one(interconnect);
+	kfree(clock->interconnect);
+	clock->interconnect = NULL;
 
 	return ret;
 }
@@ -124,12 +124,11 @@  static void ipa_interconnect_exit(struct ipa_clock *clock)
 {
 	struct ipa_interconnect *interconnect;
 
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_CONFIG];
-	ipa_interconnect_exit_one(interconnect);
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_IMEM];
-	ipa_interconnect_exit_one(interconnect);
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_MEMORY];
-	ipa_interconnect_exit_one(interconnect);
+	interconnect = clock->interconnect + clock->interconnect_count;
+	while (interconnect-- > clock->interconnect)
+		ipa_interconnect_exit_one(interconnect);
+	kfree(clock->interconnect);
+	clock->interconnect = NULL;
 }
 
 /* Currently we only use one bandwidth level, so just "enable" interconnects */
@@ -138,33 +137,23 @@  static int ipa_interconnect_enable(struct ipa *ipa)
 	struct ipa_interconnect *interconnect;
 	struct ipa_clock *clock = ipa->clock;
 	int ret;
+	u32 i;
 
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_MEMORY];
-	ret = icc_set_bw(interconnect->path, interconnect->average_bandwidth,
-			 interconnect->peak_bandwidth);
-	if (ret)
-		return ret;
-
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_IMEM];
-	ret = icc_set_bw(interconnect->path, interconnect->average_bandwidth,
-			 interconnect->peak_bandwidth);
-	if (ret)
-		goto err_memory_path_disable;
-
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_CONFIG];
-	ret = icc_set_bw(interconnect->path, interconnect->average_bandwidth,
-			 interconnect->peak_bandwidth);
-	if (ret)
-		goto err_imem_path_disable;
+	interconnect = clock->interconnect;
+	for (i = 0; i < clock->interconnect_count; i++) {
+		ret = icc_set_bw(interconnect->path,
+				 interconnect->average_bandwidth,
+				 interconnect->peak_bandwidth);
+		if (ret)
+			goto out_unwind;
+		interconnect++;
+	}
 
 	return 0;
 
-err_imem_path_disable:
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_IMEM];
-	(void)icc_set_bw(interconnect->path, 0, 0);
-err_memory_path_disable:
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_MEMORY];
-	(void)icc_set_bw(interconnect->path, 0, 0);
+out_unwind:
+	while (interconnect-- > clock->interconnect)
+		(void)icc_set_bw(interconnect->path, 0, 0);
 
 	return ret;
 }
@@ -175,22 +164,17 @@  static void ipa_interconnect_disable(struct ipa *ipa)
 	struct ipa_interconnect *interconnect;
 	struct ipa_clock *clock = ipa->clock;
 	int result = 0;
+	u32 count;
 	int ret;
 
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_MEMORY];
-	ret = icc_set_bw(interconnect->path, 0, 0);
-	if (ret)
-		result = ret;
-
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_IMEM];
-	ret = icc_set_bw(interconnect->path, 0, 0);
-	if (ret && !result)
-		result = ret;
-
-	interconnect = &clock->interconnect[IPA_INTERCONNECT_IMEM];
-	ret = icc_set_bw(interconnect->path, 0, 0);
-	if (ret && !result)
-		result = ret;
+	count = clock->interconnect_count;
+	interconnect = clock->interconnect + count;
+	while (count--) {
+		interconnect--;
+		ret = icc_set_bw(interconnect->path, 0, 0);
+		if (ret && !result)
+			result = ret;
+	}
 
 	if (result)
 		dev_err(&ipa->pdev->dev,
@@ -314,8 +298,9 @@  ipa_clock_init(struct device *dev, const struct ipa_clock_data *data)
 		goto err_clk_put;
 	}
 	clock->core = clk;
+	clock->interconnect_count = data->interconnect_count;
 
-	ret = ipa_interconnect_init(clock, dev, data->interconnect);
+	ret = ipa_interconnect_init(clock, dev, data->interconnect_data);
 	if (ret)
 		goto err_kfree;
 
diff --git a/drivers/net/ipa/ipa_data-sc7180.c b/drivers/net/ipa/ipa_data-sc7180.c
index 1936ecb4c1104..997b51ceb7d76 100644
--- a/drivers/net/ipa/ipa_data-sc7180.c
+++ b/drivers/net/ipa/ipa_data-sc7180.c
@@ -309,27 +309,30 @@  static struct ipa_mem_data ipa_mem_data = {
 	.smem_size	= 0x00002000,
 };
 
+/* Interconnect bandwidths are in 1000 byte/second units */
+static struct ipa_interconnect_data ipa_interconnect_data[] = {
+	{
+		.name			= "memory",
+		.peak_bandwidth		= 465000,	/* 465 MBps */
+		.average_bandwidth	= 80000,	/* 80 MBps */
+	},
+	/* Average bandwidth is unused for the next two interconnects */
+	{
+		.name			= "imem",
+		.peak_bandwidth		= 68570,	/* 68.570 MBps */
+		.average_bandwidth	= 0,		/* unused */
+	},
+	{
+		.name			= "config",
+		.peak_bandwidth		= 30000,	/* 30 MBps */
+		.average_bandwidth	= 0,		/* unused */
+	},
+};
+
 static struct ipa_clock_data ipa_clock_data = {
 	.core_clock_rate	= 100 * 1000 * 1000,	/* Hz */
-	/* Interconnect bandwidths are in 1000 byte/second units */
-	.interconnect = {
-		[IPA_INTERCONNECT_MEMORY] = {
-			.name			= "memory",
-			.peak_bandwidth		= 465000,	/* 465 MBps */
-			.average_bandwidth	= 80000,	/* 80 MBps */
-		},
-		/* Average bandwidth unused for the next two interconnects */
-		[IPA_INTERCONNECT_IMEM] = {
-			.name			= "imem",
-			.peak_bandwidth		= 68570,	/* 68.57 MBps */
-			.average_bandwidth	= 0,		/* unused */
-		},
-		[IPA_INTERCONNECT_CONFIG] = {
-			.name			= "config",
-			.peak_bandwidth		= 30000,	/* 30 MBps */
-			.average_bandwidth	= 0,		/* unused */
-		},
-	},
+	.interconnect_count	= ARRAY_SIZE(ipa_interconnect_data),
+	.interconnect_data	= ipa_interconnect_data,
 };
 
 /* Configuration data for the SC7180 SoC. */
diff --git a/drivers/net/ipa/ipa_data-sdm845.c b/drivers/net/ipa/ipa_data-sdm845.c
index 3b556b5a63406..88c9c3562ab79 100644
--- a/drivers/net/ipa/ipa_data-sdm845.c
+++ b/drivers/net/ipa/ipa_data-sdm845.c
@@ -329,27 +329,30 @@  static struct ipa_mem_data ipa_mem_data = {
 	.smem_size	= 0x00002000,
 };
 
+/* Interconnect bandwidths are in 1000 byte/second units */
+static struct ipa_interconnect_data ipa_interconnect_data[] = {
+	{
+		.name			= "memory",
+		.peak_bandwidth		= 600000,	/* 600 MBps */
+		.average_bandwidth	= 80000,	/* 80 MBps */
+	},
+	/* Average bandwidth is unused for the next two interconnects */
+	{
+		.name			= "imem",
+		.peak_bandwidth		= 350000,	/* 350 MBps */
+		.average_bandwidth	= 0,		/* unused */
+	},
+	{
+		.name			= "config",
+		.peak_bandwidth		= 40000,	/* 40 MBps */
+		.average_bandwidth	= 0,		/* unused */
+	},
+};
+
 static struct ipa_clock_data ipa_clock_data = {
 	.core_clock_rate	= 75 * 1000 * 1000,	/* Hz */
-	/* Interconnect bandwidths are in 1000 byte/second units */
-	.interconnect = {
-		[IPA_INTERCONNECT_MEMORY] = {
-			.name			= "memory",
-			.peak_bandwidth		= 600000,	/* 600 MBps */
-			.average_bandwidth	= 80000,	/* 80 MBps */
-		},
-		/* Average bandwidth unused for the next two interconnects */
-		[IPA_INTERCONNECT_IMEM] = {
-			.name			= "imem",
-			.peak_bandwidth		= 350000,	/* 350 MBps */
-			.average_bandwidth	= 0,		/* unused */
-		},
-		[IPA_INTERCONNECT_CONFIG] = {
-			.name			= "config",
-			.peak_bandwidth		= 40000,	/* 40 MBps */
-			.average_bandwidth	= 0,		/* unused */
-		},
-	},
+	.interconnect_count	= ARRAY_SIZE(ipa_interconnect_data),
+	.interconnect_data	= ipa_interconnect_data,
 };
 
 /* Configuration data for the SDM845 SoC. */
diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
index d8ea6266dc6a1..b476fc373f7fe 100644
--- a/drivers/net/ipa/ipa_data.h
+++ b/drivers/net/ipa/ipa_data.h
@@ -258,14 +258,6 @@  struct ipa_mem_data {
 	u32 smem_size;
 };
 
-/** enum ipa_interconnect_id - IPA interconnect identifier */
-enum ipa_interconnect_id {
-	IPA_INTERCONNECT_MEMORY,
-	IPA_INTERCONNECT_IMEM,
-	IPA_INTERCONNECT_CONFIG,
-	IPA_INTERCONNECT_COUNT,		/* Last; not an interconnect */
-};
-
 /**
  * struct ipa_interconnect_data - description of IPA interconnect bandwidths
  * @name:		Interconnect name (matches interconnect-name in DT)
@@ -281,11 +273,13 @@  struct ipa_interconnect_data {
 /**
  * struct ipa_clock_data - description of IPA clock and interconnect rates
  * @core_clock_rate:	Core clock rate (Hz)
- * @interconnect:	Array of interconnect bandwidth parameters
+ * @interconnect_count:	Number of entries in the interconnect_data array
+ * @interconnect_data:	IPA interconnect configuration data
  */
 struct ipa_clock_data {
 	u32 core_clock_rate;
-	struct ipa_interconnect_data interconnect[IPA_INTERCONNECT_COUNT];
+	u32 interconnect_count;		/* # entries in interconnect_data[] */
+	const struct ipa_interconnect_data *interconnect_data;
 };
 
 /**