From patchwork Thu Nov 30 16:39:35 2017
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 120256
From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com,
    ard.biesheuvel@linaro.org, sboyd@codeaurora.org, dave.hansen@linux.intel.com,
    keescook@chromium.org, msalter@redhat.com, labbott@redhat.com,
    tglx@linutronix.de, Will Deacon
Subject: [PATCH v2 07/18] arm64: mm: Allocate ASIDs in pairs
Date: Thu, 30 Nov 2017 16:39:35 +0000
Message-Id: <1512059986-21325-8-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1512059986-21325-1-git-send-email-will.deacon@arm.com>
References: <1512059986-21325-1-git-send-email-will.deacon@arm.com>

In preparation for separate kernel/user ASIDs, allocate them in pairs
for each mm_struct. The bottom bit distinguishes the two: if it is set,
then the ASID will map only userspace.
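[Editorial illustration, not part of the patch: the even/odd pairing can be
modelled in isolation as below. The helper names are made up and only mirror
the intent of the asid2idx()/idx2asid() macros added further down, assuming
16-bit ASIDs and CONFIG_UNMAP_KERNEL_AT_EL0=y; USER_ASID_FLAG in the patch is
bit 48 because it is applied to the ASID field in its TTBR position, which on
the raw ASID value is simply the bottom bit.]

    /*
     * Illustration only: each allocator index owns a pair of ASIDs.
     * The even value is used for mappings that include the kernel;
     * the odd value (bottom bit set) maps only userspace.
     */
    #include <stdio.h>

    #define PAIR_IDX_TO_ASID(idx)   ((unsigned long)(idx) << 1)  /* cf. idx2asid() */
    #define ASID_TO_PAIR_IDX(asid)  ((unsigned long)(asid) >> 1) /* cf. asid2idx() */
    #define USER_ASID(asid)         ((unsigned long)(asid) | 1)  /* cf. USER_ASID_FLAG */

    int main(void)
    {
            unsigned long idx = 1;                               /* first allocatable index */
            unsigned long kernel_asid = PAIR_IDX_TO_ASID(idx);   /* 2 */
            unsigned long user_asid = USER_ASID(kernel_asid);    /* 3 */

            printf("idx %lu -> ASIDs {%lu, %lu}, both map back to idx %lu\n",
                   idx, kernel_asid, user_asid, ASID_TO_PAIR_IDX(user_asid));
            return 0;
    }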
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/mmu.h |  1 +
 arch/arm64/mm/context.c      | 25 +++++++++++++++++--------
 2 files changed, 18 insertions(+), 8 deletions(-)

-- 
2.1.4

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 0d34bf0a89c7..01bfb184f2a8 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -17,6 +17,7 @@
 #define __ASM_MMU_H
 
 #define MMCF_AARCH32	0x1	/* mm context flag for AArch32 executables */
+#define USER_ASID_FLAG	(UL(1) << 48)
 
 typedef struct {
 	atomic64_t	id;
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 78816e476491..db28958d9e4f 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -39,7 +39,16 @@ static cpumask_t tlb_flush_pending;
 
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
 #define ASID_FIRST_VERSION	(1UL << asid_bits)
-#define NUM_USER_ASIDS		ASID_FIRST_VERSION
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+#define NUM_USER_ASIDS		(ASID_FIRST_VERSION >> 1)
+#define asid2idx(asid)		(((asid) & ~ASID_MASK) >> 1)
+#define idx2asid(idx)		(((idx) << 1) & ~ASID_MASK)
+#else
+#define NUM_USER_ASIDS		(ASID_FIRST_VERSION)
+#define asid2idx(asid)		((asid) & ~ASID_MASK)
+#define idx2asid(idx)		asid2idx(idx)
+#endif
 
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
@@ -104,7 +113,7 @@ static void flush_context(unsigned int cpu)
 		 */
 		if (asid == 0)
 			asid = per_cpu(reserved_asids, i);
-		__set_bit(asid & ~ASID_MASK, asid_map);
+		__set_bit(asid2idx(asid), asid_map);
 		per_cpu(reserved_asids, i) = asid;
 	}
 
@@ -156,16 +165,16 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
 		 */
-		asid &= ~ASID_MASK;
-		if (!__test_and_set_bit(asid, asid_map))
+		if (!__test_and_set_bit(asid2idx(asid), asid_map))
 			return newasid;
 	}
 
 	/*
 	 * Allocate a free ASID. If we can't find one, take a note of the
-	 * currently active ASIDs and mark the TLBs as requiring flushes.
-	 * We always count from ASID #1, as we use ASID #0 when setting a
-	 * reserved TTBR0 for the init_mm.
+	 * currently active ASIDs and mark the TLBs as requiring flushes. We
+	 * always count from ASID #2 (index 1), as we use ASID #0 when setting
+	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
+	 * pairs.
 	 */
 	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
 	if (asid != NUM_USER_ASIDS)
@@ -182,7 +191,7 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 set_asid:
 	__set_bit(asid, asid_map);
 	cur_idx = asid;
-	return asid | generation;
+	return idx2asid(asid) | generation;
 }
 
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
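[Editorial worked example, not part of the patch: with 16-bit ASIDs and
CONFIG_UNMAP_KERNEL_AT_EL0=y, NUM_USER_ASIDS halves from 65536 to 32768.
Allocator index 1 yields idx2asid(1) == 2 for the kernel-visible ASID and 3
(bottom bit set) for the userspace-only ASID, and asid2idx() maps both 2 and 3
back to index 1, so a single bit in asid_map covers the whole pair.]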