From patchwork Wed Feb 26 13:29:42 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ionela Voinescu
X-Patchwork-Id: 212719
From: Ionela Voinescu
To: catalin.marinas@arm.com, will@kernel.org, mark.rutland@arm.com,
    maz@kernel.org, suzuki.poulose@arm.com, sudeep.holla@arm.com,
    lukasz.luba@arm.com, valentin.schneider@arm.com,
    dietmar.eggemann@arm.com, rjw@rjwysocki.net, pkondeti@codeaurora.org,
    ionela.voinescu@arm.com
Cc: peterz@infradead.org, mingo@redhat.com, vincent.guittot@linaro.org,
    viresh.kumar@linaro.org, linux-arm-kernel@lists.infradead.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-pm@vger.kernel.org, Steve Capper
Subject: [PATCH v5 2/7] arm64: trap to EL1 accesses to AMU counters from EL0
Date: Wed, 26 Feb 2020 13:29:42 +0000
Message-Id: <20200226132947.29738-3-ionela.voinescu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200226132947.29738-1-ionela.voinescu@arm.com>
References: <20200226132947.29738-1-ionela.voinescu@arm.com>
X-Mailing-List: linux-pm@vger.kernel.org

The activity monitors extension is an optional extension introduced by
the ARMv8.4 CPU architecture. In order to access the activity monitors
counters safely, if desired, the kernel should detect the presence of
the extension through the feature register, and mediate the access.

Therefore, disable direct accesses to activity monitors counters from
EL0 (userspace) and trap them to EL1 (kernel).

Note that the ARM64_AMU_EXTN kernel config and the disable_amu kernel
parameter have no effect on this code. Given that AMUSERENR_EL0 resets
to an UNKNOWN value, the trap of EL0 accesses to EL1 is always set up,
for safety and security reasons. Firmware should therefore still ensure
that accesses to AMU registers are not trapped in EL2/EL3, as this code
cannot be bypassed if the CPU implements the Activity Monitors Unit.
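To illustrate the userspace-visible effect, below is a minimal test
sketch (illustrative only, not part of the patch). It assumes the
architectural encoding S3_3_C13_C4_0 for AMEVCNTR00_EL0 and assumes the
trapped EL0 access is delivered to the process as SIGILL; compile and
run it natively on an arm64 system to check that the counter read no
longer succeeds from EL0.

/*
 * Illustrative sketch: attempt a direct EL0 read of AMEVCNTR00_EL0
 * (encoded as S3_3_C13_C4_0) and report whether it traps as SIGILL.
 */
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>

static sigjmp_buf env;

static void sigill_handler(int sig)
{
	(void)sig;
	siglongjmp(env, 1);		/* unwind out of the faulting mrs */
}

int main(void)
{
	struct sigaction sa = { 0 };
	uint64_t val = 0;

	sa.sa_handler = sigill_handler;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGILL, &sa, NULL);

	if (sigsetjmp(env, 1)) {
		printf("AMEVCNTR00_EL0 read from EL0 trapped (SIGILL), as expected\n");
		return 0;
	}

	/* Direct EL0 access to an activity monitors event counter. */
	asm volatile("mrs %0, S3_3_C13_C4_0" : "=r" (val));

	printf("unexpected: AMEVCNTR00_EL0 read from EL0 = %llu\n",
	       (unsigned long long)val);
	return 1;
}

On a CPU without AMUv1 the encoding is simply unallocated at EL0, so a
SIGILL is still the expected outcome there.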
Signed-off-by: Ionela Voinescu
Reviewed-by: Suzuki K Poulose
Reviewed-by: Valentin Schneider
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Rutland
Cc: Steve Capper
---
 arch/arm64/include/asm/assembler.h | 10 ++++++++++
 arch/arm64/mm/proc.S               |  3 +++
 2 files changed, 13 insertions(+)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index aca337d79d12..c5487806273f 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -430,6 +430,16 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
 9000:
 	.endm
 
+/*
+ * reset_amuserenr_el0 - reset AMUSERENR_EL0 if AMUv1 present
+ */
+	.macro	reset_amuserenr_el0, tmpreg
+	mrs	\tmpreg, id_aa64pfr0_el1	// Check ID_AA64PFR0_EL1
+	ubfx	\tmpreg, \tmpreg, #ID_AA64PFR0_AMU_SHIFT, #4
+	cbz	\tmpreg, .Lskip_\@		// Skip if no AMU present
+	msr_s	SYS_AMUSERENR_EL0, xzr		// Disable AMU access from EL0
+.Lskip_\@:
+	.endm
 /*
  * copy_page - copy src to dest using temp registers t1-t8
  */
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index aafed6902411..7103027b4e64 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -131,6 +131,7 @@ alternative_endif
 	ubfx	x11, x11, #1, #1
 	msr	oslar_el1, x11
 	reset_pmuserenr_el0 x0			// Disable PMU access from EL0
+	reset_amuserenr_el0 x0			// Disable AMU access from EL0
 
 alternative_if ARM64_HAS_RAS_EXTN
 	msr_s	SYS_DISR_EL1, xzr
@@ -423,6 +424,8 @@ SYM_FUNC_START(__cpu_setup)
 	isb					// Unmask debug exceptions now,
 	enable_dbg				// since this is per-cpu
 	reset_pmuserenr_el0 x0			// Disable PMU access from EL0
+	reset_amuserenr_el0 x0			// Disable AMU access from EL0
+
 	/*
 	 * Memory region attributes
 	 */
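For readability, here is a hedged C-level sketch of what the
reset_amuserenr_el0 macro above does, written with the kernel's
existing sysreg/cpufeature helpers (read_sysreg,
cpuid_feature_extract_unsigned_field, write_sysreg_s). The function
name is illustrative and not part of the patch; the real change is
assembly so that it can run from the early, pre-C CPU setup and resume
paths touched in proc.S.

/* Illustrative only -- the patch implements this in assembly, not C. */
#include <linux/types.h>

#include <asm/cpufeature.h>
#include <asm/sysreg.h>

static inline void reset_amuserenr_el0_sketch(void)
{
	u64 pfr0 = read_sysreg(id_aa64pfr0_el1);

	/* Leave AMUSERENR_EL0 alone if this CPU does not implement AMUv1. */
	if (!cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_AMU_SHIFT))
		return;

	/* Clear AMUSERENR_EL0 so EL0 accesses to AMU registers trap to EL1. */
	write_sysreg_s(0, SYS_AMUSERENR_EL0);
}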