[3.18-rc4,v11,2/6] irqchip: gic: Optimize locking in gic_raise_softirq

Message ID 1417119024-22844-3-git-send-email-daniel.thompson@linaro.org
State New

Commit Message

Daniel Thompson Nov. 27, 2014, 8:10 p.m. UTC
Currently gic_raise_softirq() unconditionally takes and releases a lock
whose only purpose is to synchronize with the b.L switcher.

Remove this lock if the b.L switcher is not compiled in.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marc Zyngier <marc.zyngier@arm.com>
---
 drivers/irqchip/irq-gic.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

Comments

Daniel Thompson Nov. 28, 2014, 10:14 a.m. UTC | #1
On 27/11/14 21:37, Thomas Gleixner wrote:
> On Thu, 27 Nov 2014, Daniel Thompson wrote:
> 
>> Currently gic_raise_softirq() unconditionally takes and releases a lock
>> whose only purpose is to synchronize with the b.L switcher.
>>
>> Remove this lock if the b.L switcher is not compiled in.
> 
> I think the patches are in the wrong order. We optimize for the sane
> use case first, i.e. BL=n. So you want to make the locking of
> irq_controller_lock in gic_raise_softirq() conditional in the first
> place, which should have been done when this was introduced.
>
> Once you have isolated that you can apply your split lock patch for
> the BL=y nonsense.
> 
> Adding more locks first and then optimizing them out does not make any
> sense.

You originally described the use of irq_controller_lock for its current
dual purpose as an abuse of the lock. Does it really make more sense
to optimize before we correct the abuse?

How about just squashing them together? It reduces the combined diffstat
by ~10%...


Daniel.

Patch

diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index 94d77118efa8..e875da93f24a 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -76,8 +76,23 @@  static DEFINE_RAW_SPINLOCK(irq_controller_lock);
  * This lock is used by the big.LITTLE migration code to ensure no IPIs
  * can be pended on the old core after the map has been updated.
  */
+#ifdef CONFIG_BL_SWITCHER
 static DEFINE_RAW_SPINLOCK(cpu_map_migration_lock);
 
+static inline void bl_migration_lock(unsigned long *flags)
+{
+	raw_spin_lock_irqsave(&cpu_map_migration_lock, *flags);
+}
+
+static inline void bl_migration_unlock(unsigned long flags)
+{
+	raw_spin_unlock_irqrestore(&cpu_map_migration_lock, flags);
+}
+#else
+static inline void bl_migration_lock(unsigned long *flags) {}
+static inline void bl_migration_unlock(unsigned long flags) {}
+#endif
+
 /*
  * The GIC mapping of CPU interfaces does not necessarily match
  * the logical CPU numbering.  Let's use a mapping as returned
@@ -630,7 +645,7 @@  static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 	int cpu;
 	unsigned long flags, map = 0;
 
-	raw_spin_lock_irqsave(&cpu_map_migration_lock, flags);
+	bl_migration_lock(&flags);
 
 	/* Convert our logical CPU mask into a physical one. */
 	for_each_cpu(cpu, mask)
@@ -645,7 +660,7 @@  static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
 	/* this always happens on GIC0 */
 	writel_relaxed(map << 16 | irq, gic_data_dist_base(&gic_data[0]) + GIC_DIST_SOFTINT);
 
-	raw_spin_unlock_irqrestore(&cpu_map_migration_lock, flags);
+	bl_migration_unlock(flags);
 }
 #endif
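
For reference, the pattern the patch uses -- real lock helpers when the
config option is enabled, empty inline stubs otherwise -- can be seen in
a minimal stand-alone sketch. The FEATURE_BL macro and all names below
are illustrative stand-ins, not the kernel API; a pthread mutex takes
the place of the raw spinlock.

/*
 * Minimal user-space sketch of the conditional stub-lock pattern.
 * FEATURE_BL and these function names are hypothetical; pthread_mutex
 * stands in for the kernel's raw spinlock. Build with -DFEATURE_BL to
 * get real locking; without it the stubs are empty inline functions.
 */
#include <pthread.h>
#include <stdio.h>

#ifdef FEATURE_BL
static pthread_mutex_t migration_lock = PTHREAD_MUTEX_INITIALIZER;

static inline void migration_lock_acquire(void)
{
	pthread_mutex_lock(&migration_lock);
}

static inline void migration_lock_release(void)
{
	pthread_mutex_unlock(&migration_lock);
}
#else
/* Feature compiled out: callers pay no locking cost at all. */
static inline void migration_lock_acquire(void) {}
static inline void migration_lock_release(void) {}
#endif

static void raise_softirq(unsigned int irq)
{
	migration_lock_acquire();
	printf("raising softirq %u\n", irq);
	migration_lock_release();
}

int main(void)
{
	raise_softirq(1);
	return 0;
}

Built without -DFEATURE_BL, the empty inline stubs let the compiler
elide the locking entirely, which is the cost the patch removes from
gic_raise_softirq() on kernels where the b.L switcher is not compiled in.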