From patchwork Thu Apr 12 11:11:14 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 133213
From: Mark Rutland
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com, will.deacon@arm.com, catalin.marinas@arm.com, ghackmann@google.com, shankerd@codeaurora.org
Subject: [PATCH v4.9.y 18/42] arm64: Move BP hardening to check_and_switch_context
Date: Thu, 12 Apr 2018 12:11:14 +0100
Message-Id: <20180412111138.40990-19-mark.rutland@arm.com>
In-Reply-To: <20180412111138.40990-1-mark.rutland@arm.com>
References: <20180412111138.40990-1-mark.rutland@arm.com>
X-Mailing-List: stable@vger.kernel.org

From: Marc Zyngier

commit a8e4c0a919ae310944ed2c9ace11cf3ccd8a609b upstream.
We call arm64_apply_bp_hardening() from post_ttbr_update_workaround, which has the unexpected consequence of being triggered on every exception return to userspace when ARM64_SW_TTBR0_PAN is selected, even if no context switch actually occurred. This is a bit suboptimal, and it would be more logical to only invalidate the branch predictor when we actually switch to a different mm.

In order to solve this, move the call to arm64_apply_bp_hardening() into check_and_switch_context(), where we're guaranteed to pick a different mm context.

Acked-by: Will Deacon
Signed-off-by: Marc Zyngier
Signed-off-by: Catalin Marinas
Signed-off-by: Mark Rutland [v4.9 backport]
---
 arch/arm64/mm/context.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index accf7ead3945..62d976e843fc 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -230,6 +230,9 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
 
 switch_mm_fastpath:
+
+	arm64_apply_bp_hardening();
+
 	cpu_switch_mm(mm->pgd, mm);
 }
 
@@ -240,8 +243,6 @@ asmlinkage void post_ttbr_update_workaround(void)
 			"ic iallu; dsb nsh; isb",
 			ARM64_WORKAROUND_CAVIUM_27456,
 			CONFIG_CAVIUM_ERRATUM_27456));
-
-	arm64_apply_bp_hardening();
 }
 
 static int asids_init(void)