
arm64: perf: Disable PMU while processing counter overflows

Message ID 20210616192859.21708-1-apeureka@amazon.de
State New
Series arm64: perf: Disable PMU while processing counter overflows

Commit Message

Aman Priyadarshi June 16, 2021, 7:28 p.m. UTC
From: Suzuki K Poulose <suzuki.poulose@arm.com>


[ Upstream commit 3cce50dfec4a5b0414c974190940f47dd32c6dee ]

The arm64 PMU updates the event counters and reprograms the
counters in the overflow IRQ handler without disabling the
PMU. This could potentially cause skews in group counters,
where the overflowed counters may lose some event counts
while they are reprogrammed. To prevent this, disable the
PMU while we process the counter overflows and enable it
right back when we are done.

This patch also moves the PMU stop/start routines to avoid a
forward declaration.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>

Signed-off-by: Will Deacon <will.deacon@arm.com>

Signed-off-by: Aman Priyadarshi <apeureka@amazon.de>

Cc: stable@vger.kernel.org
---
 arch/arm64/kernel/perf_event.c | 50 +++++++++++++++++++---------------
 1 file changed, 28 insertions(+), 22 deletions(-)
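
For quick orientation, this is the shape of the patched overflow handler,
condensed from the full diff at the bottom of this page (the "..." comments
stand in for unchanged context, not real code):

static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
{
	/* ... read and reset the overflow status, fetch the IRQ regs ... */

	/*
	 * Stop the PMU while processing the counter overflows
	 * to prevent skews in group events.
	 */
	armv8pmu_stop(cpu_pmu);
	for (idx = 0; idx < cpu_pmu->num_events; ++idx) {
		/* ... update the period and reprogram each overflowed event ... */
	}
	armv8pmu_start(cpu_pmu);

	/* ... handle the pending perf events ... */
	return IRQ_HANDLED;
}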

-- 
2.17.1





Comments

Greg Kroah-Hartman June 17, 2021, 4:51 a.m. UTC | #1
On Wed, Jun 16, 2021 at 09:28:59PM +0200, Aman Priyadarshi wrote:
> From: Suzuki K Poulose <suzuki.poulose@arm.com>
> 
> [ Upstream commit 3cce50dfec4a5b0414c974190940f47dd32c6dee ]
> 
> The arm64 PMU updates the event counters and reprograms the
> counters in the overflow IRQ handler without disabling the
> PMU. This could potentially cause skews in group counters,
> where the overflowed counters may lose some event counts
> while they are reprogrammed. To prevent this, disable the
> PMU while we process the counter overflows and enable it
> right back when we are done.
> 
> This patch also moves the PMU stop/start routines to avoid a
> forward declaration.
> 
> Suggested-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> Signed-off-by: Aman Priyadarshi <apeureka@amazon.de>
> Cc: stable@vger.kernel.org
> ---
>  arch/arm64/kernel/perf_event.c | 50 +++++++++++++++++++---------------
>  1 file changed, 28 insertions(+), 22 deletions(-)

What stable tree(s) do you want this applied to?

thanks,

greg k-h
Marc Zyngier June 17, 2021, 7:34 a.m. UTC | #2
On Thu, 17 Jun 2021 05:51:03 +0100,
Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:
> On Wed, Jun 16, 2021 at 09:28:59PM +0200, Aman Priyadarshi wrote:
> > From: Suzuki K Poulose <suzuki.poulose@arm.com>
> > 
> > [ Upstream commit 3cce50dfec4a5b0414c974190940f47dd32c6dee ]
> > [...]
> 
> What stable tree(s) do you want this applied to?

I guess that'd be 4.14 and previous stables if the patch actually
applies.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
Aman Priyadarshi June 17, 2021, 8:57 a.m. UTC | #3
On Thu, 2021-06-17 at 08:34 +0100, Marc Zyngier wrote:
> On Thu, 17 Jun 2021 05:51:03 +0100,
> Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:
> > On Wed, Jun 16, 2021 at 09:28:59PM +0200, Aman Priyadarshi wrote:
> > > From: Suzuki K Poulose <suzuki.poulose@arm.com>
> > > 
> > > [ Upstream commit 3cce50dfec4a5b0414c974190940f47dd32c6dee ]
> > > [...]
> > 
> > What stable tree(s) do you want this applied to?
> 
> I guess that'd be 4.14 and previous stables if the patch actually
> applies.
> 

Correct. I have tested the patch on 4.14.y and can confirm that it
applies cleanly on 4.9.y as well.

Thanks,
Aman Priyadarshi




Greg Kroah-Hartman June 25, 2021, 10:22 a.m. UTC | #4
On Thu, Jun 17, 2021 at 10:57:00AM +0200, Aman Priyadarshi wrote:
> On Thu, 2021-06-17 at 08:34 +0100, Marc Zyngier wrote:
> > On Thu, 17 Jun 2021 05:51:03 +0100,
> > Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:
> > > [...]
> > > 
> > > What stable tree(s) do you want this applied to?
> > 
> > I guess that'd be 4.14 and previous stables if the patch actually
> > applies.
> 
> Correct. I have tested the patch on 4.14.y and can confirm that it
> applies cleanly on 4.9.y as well.

Now queued up, thanks.

greg k-h

Patch

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 53df84b2a07f..4ee1228d29eb 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -670,6 +670,28 @@ static void armv8pmu_disable_event(struct perf_event *event)
 	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
+static void armv8pmu_start(struct arm_pmu *cpu_pmu)
+{
+	unsigned long flags;
+	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+
+	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	/* Enable all counters */
+	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
+	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+}
+
+static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
+{
+	unsigned long flags;
+	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
+
+	raw_spin_lock_irqsave(&events->pmu_lock, flags);
+	/* Disable all counters */
+	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
+	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
+}
+
 static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
 {
 	u32 pmovsr;
@@ -695,6 +717,11 @@ static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
 	 */
 	regs = get_irq_regs();
 
+	/*
+	 * Stop the PMU while processing the counter overflows
+	 * to prevent skews in group events.
+	 */
+	armv8pmu_stop(cpu_pmu);
 	for (idx = 0; idx < cpu_pmu->num_events; ++idx) {
 		struct perf_event *event = cpuc->events[idx];
 		struct hw_perf_event *hwc;
@@ -719,6 +746,7 @@ static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
 		if (perf_event_overflow(event, &data, regs))
 			cpu_pmu->disable(event);
 	}
+	armv8pmu_start(cpu_pmu);
 
 	/*
 	 * Handle the pending perf events.
@@ -732,28 +760,6 @@ static irqreturn_t armv8pmu_handle_irq(int irq_num, void *dev)
 	return IRQ_HANDLED;
 }
 
-static void armv8pmu_start(struct arm_pmu *cpu_pmu)
-{
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-	/* Enable all counters */
-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
-}
-
-static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
-{
-	unsigned long flags;
-	struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);
-
-	raw_spin_lock_irqsave(&events->pmu_lock, flags);
-	/* Disable all counters */
-	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
-	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
-}
-
 static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
 				  struct perf_event *event)
 {
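
The skew this patch prevents is between members of one event group: group
members are read back atomically via PERF_FORMAT_GROUP, and on unpatched
kernels the members could drift relative to one another while an overflowed
counter was being reprogrammed. A minimal, hypothetical user-space harness
for reading such a group follows; it is not part of this patch, the event
pair is only illustrative, and error handling is elided:

#include <linux/perf_event.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static int perf_open(struct perf_event_attr *attr, int group_fd)
{
	/* pid = 0 (this task), cpu = -1 (any CPU), flags = 0 */
	return syscall(__NR_perf_event_open, attr, 0, -1, group_fd, 0);
}

int main(void)
{
	struct perf_event_attr attr;
	uint64_t buf[3];	/* { nr, leader value, member value } */

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.exclude_kernel = 1;
	attr.disabled = 1;	/* group starts stopped, enabled via leader */
	attr.read_format = PERF_FORMAT_GROUP;

	int leader = perf_open(&attr, -1);

	attr.config = PERF_COUNT_HW_INSTRUCTIONS;
	attr.disabled = 0;	/* members follow the leader's enable state */
	perf_open(&attr, leader);

	ioctl(leader, PERF_EVENT_IOC_ENABLE, 0);
	for (volatile long i = 0; i < 100000000; i++)
		;		/* some work to count */
	ioctl(leader, PERF_EVENT_IOC_DISABLE, 0);

	/* One read returns all group members together. */
	if (read(leader, buf, sizeof(buf)) > 0)
		printf("cycles=%llu instructions=%llu\n",
		       (unsigned long long)buf[1],
		       (unsigned long long)buf[2]);
	return 0;
}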