From patchwork Wed Dec  6 12:35:26 2017
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 120838
From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, catalin.marinas@arm.com,
 mark.rutland@arm.com, ard.biesheuvel@linaro.org, sboyd@codeaurora.org,
 dave.hansen@linux.intel.com, keescook@chromium.org, msalter@redhat.com,
 labbott@redhat.com, tglx@linutronix.de, Will Deacon
Subject: [PATCH v3 07/20] arm64: mm: Allocate ASIDs in pairs
Date: Wed,  6 Dec 2017 12:35:26 +0000
Message-Id: <1512563739-25239-8-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1512563739-25239-1-git-send-email-will.deacon@arm.com>
References:
 <1512563739-25239-1-git-send-email-will.deacon@arm.com>

In preparation for separate kernel/user ASIDs, allocate them in pairs
for each mm_struct. The bottom bit distinguishes the two: if it is set,
then the ASID will map only userspace.

Reviewed-by: Mark Rutland
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/mmu.h |  1 +
 arch/arm64/mm/context.c      | 25 +++++++++++++++++--------
 2 files changed, 18 insertions(+), 8 deletions(-)

-- 
2.1.4

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 0d34bf0a89c7..01bfb184f2a8 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -17,6 +17,7 @@
 #define __ASM_MMU_H
 
 #define MMCF_AARCH32	0x1	/* mm context flag for AArch32 executables */
+#define USER_ASID_FLAG	(UL(1) << 48)
 
 typedef struct {
 	atomic64_t	id;
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 78a2dc596fee..1cb3bc92ae5c 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -39,7 +39,16 @@ static cpumask_t tlb_flush_pending;
 
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
 #define ASID_FIRST_VERSION	(1UL << asid_bits)
-#define NUM_USER_ASIDS		ASID_FIRST_VERSION
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define NUM_USER_ASIDS		(ASID_FIRST_VERSION >> 1)
+#define asid2idx(asid)		(((asid) & ~ASID_MASK) >> 1)
+#define idx2asid(idx)		(((idx) << 1) & ~ASID_MASK)
+#else
+#define NUM_USER_ASIDS		(ASID_FIRST_VERSION)
+#define asid2idx(asid)		((asid) & ~ASID_MASK)
+#define idx2asid(idx)		asid2idx(idx)
+#endif
 
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
@@ -98,7 +107,7 @@ static void flush_context(unsigned int cpu)
 		 */
 		if (asid == 0)
 			asid = per_cpu(reserved_asids, i);
-		__set_bit(asid & ~ASID_MASK, asid_map);
+		__set_bit(asid2idx(asid), asid_map);
 		per_cpu(reserved_asids, i) = asid;
 	}
 
@@ -153,16 +162,16 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
 		 */
-		asid &= ~ASID_MASK;
-		if (!__test_and_set_bit(asid, asid_map))
+		if (!__test_and_set_bit(asid2idx(asid), asid_map))
 			return newasid;
 	}
 
 	/*
 	 * Allocate a free ASID. If we can't find one, take a note of the
-	 * currently active ASIDs and mark the TLBs as requiring flushes.
-	 * We always count from ASID #1, as we use ASID #0 when setting a
-	 * reserved TTBR0 for the init_mm.
+	 * currently active ASIDs and mark the TLBs as requiring flushes. We
+	 * always count from ASID #2 (index 1), as we use ASID #0 when setting
+	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
+	 * pairs.
 	 */
 	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
 	if (asid != NUM_USER_ASIDS)
@@ -179,7 +188,7 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 set_asid:
 	__set_bit(asid, asid_map);
 	cur_idx = asid;
-	return asid | generation;
+	return idx2asid(asid) | generation;
 }
 
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)