From patchwork Tue Apr 3 11:08:59 2018
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 132714
From: Mark Rutland
To: stable@vger.kernel.org
Cc: mark.brown@linaro.org, ard.biesheuvel@linaro.org, marc.zyngier@arm.com,
	will.deacon@arm.com
Subject: [PATCH v4.9.y 03/27] arm64: mm: Allocate ASIDs in pairs
Date: Tue, 3 Apr 2018 12:08:59 +0100
Message-Id: <20180403110923.43575-4-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180403110923.43575-1-mark.rutland@arm.com>
References: <20180403110923.43575-1-mark.rutland@arm.com>
X-Mailing-List: stable@vger.kernel.org

From: Will Deacon

commit 0c8ea531b774 upstream.

In preparation for separate kernel/user ASIDs, allocate them in pairs
for each mm_struct. The bottom bit distinguishes the two: if it is set,
then the ASID will map only userspace.
Reviewed-by: Mark Rutland
Tested-by: Laura Abbott
Tested-by: Shanker Donthineni
Signed-off-by: Will Deacon
Signed-off-by: Alex Shi [v4.9 backport]
Signed-off-by: Mark Rutland [v4.9 backport]
---
 arch/arm64/include/asm/mmu.h |  2 ++
 arch/arm64/mm/context.c      | 25 +++++++++++++++++--------
 2 files changed, 19 insertions(+), 8 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 8d9fce037b2f..49924e56048e 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -16,6 +16,8 @@
 #ifndef __ASM_MMU_H
 #define __ASM_MMU_H
 
+#define USER_ASID_FLAG	(UL(1) << 48)
+
 typedef struct {
 	atomic64_t	id;
 	void		*vdso;
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index efcf1f7ef1e4..f00f5eeb556f 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -39,7 +39,16 @@ static cpumask_t tlb_flush_pending;
 
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
 #define ASID_FIRST_VERSION	(1UL << asid_bits)
-#define NUM_USER_ASIDS		ASID_FIRST_VERSION
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define NUM_USER_ASIDS		(ASID_FIRST_VERSION >> 1)
+#define asid2idx(asid)		(((asid) & ~ASID_MASK) >> 1)
+#define idx2asid(idx)		(((idx) << 1) & ~ASID_MASK)
+#else
+#define NUM_USER_ASIDS		(ASID_FIRST_VERSION)
+#define asid2idx(asid)		((asid) & ~ASID_MASK)
+#define idx2asid(idx)		asid2idx(idx)
+#endif
 
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
@@ -104,7 +113,7 @@ static void flush_context(unsigned int cpu)
 		 */
 		if (asid == 0)
 			asid = per_cpu(reserved_asids, i);
-		__set_bit(asid & ~ASID_MASK, asid_map);
+		__set_bit(asid2idx(asid), asid_map);
 		per_cpu(reserved_asids, i) = asid;
 	}
 
@@ -159,16 +168,16 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
 		 */
-		asid &= ~ASID_MASK;
-		if (!__test_and_set_bit(asid, asid_map))
+		if (!__test_and_set_bit(asid2idx(asid), asid_map))
 			return newasid;
 	}
 
 	/*
 	 * Allocate a free ASID. If we can't find one, take a note of the
-	 * currently active ASIDs and mark the TLBs as requiring flushes.
-	 * We always count from ASID #1, as we use ASID #0 when setting a
-	 * reserved TTBR0 for the init_mm.
+	 * currently active ASIDs and mark the TLBs as requiring flushes. We
+	 * always count from ASID #2 (index 1), as we use ASID #0 when setting
+	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
+	 * pairs.
 	 */
 	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
 	if (asid != NUM_USER_ASIDS)
@@ -185,7 +194,7 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 set_asid:
 	__set_bit(asid, asid_map);
 	cur_idx = asid;
-	return asid | generation;
+	return idx2asid(asid) | generation;
 }
 
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
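
As a quick illustration of the even/odd scheme (not part of the patch):
a minimal userspace sketch of the asid2idx()/idx2asid() mapping above,
assuming asid_bits = 16 and the CONFIG_UNMAP_KERNEL_AT_EL0 definitions,
with GENMASK reimplemented so the program stands alone:

#include <stdio.h>

/* Stand-ins for the kernel helpers; assumes 64-bit unsigned long. */
#define ASID_BITS		16
#define GENMASK(h, l)		(((~0UL) << (l)) & (~0UL >> (63 - (h))))
#define ASID_MASK		(~GENMASK(ASID_BITS - 1, 0))
#define ASID_FIRST_VERSION	(1UL << ASID_BITS)

/* CONFIG_UNMAP_KERNEL_AT_EL0 variant: half the ASID space, in pairs. */
#define NUM_USER_ASIDS		(ASID_FIRST_VERSION >> 1)
#define asid2idx(asid)		(((asid) & ~ASID_MASK) >> 1)
#define idx2asid(idx)		(((idx) << 1) & ~ASID_MASK)

int main(void)
{
	/*
	 * Each allocator index maps to an even hardware ASID; its odd
	 * sibling (asid | 1) is the one with the bottom bit set, i.e.
	 * the ASID that maps only userspace.
	 */
	for (unsigned long idx = 1; idx <= 3; idx++) {
		unsigned long asid = idx2asid(idx);

		printf("idx %lu -> kernel ASID %lu, user ASID %lu\n",
		       idx, asid, asid | 1UL);

		/* Round trip: asid2idx() recovers the allocator index. */
		if (asid2idx(asid) != idx)
			return 1;
	}
	printf("NUM_USER_ASIDS = %lu\n", NUM_USER_ASIDS);
	return 0;
}

Index 1 prints "kernel ASID 2, user ASID 3", which matches the updated
comment that allocation now counts from ASID #2 (index 1).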