From patchwork Thu Mar 21 16:36:10 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 160800
From: Julien Grall <julien.grall@arm.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: christoffer.dall@arm.com, james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com
Subject: [PATCH RFC 01/14] arm64/mm: Introduce asid_info structure and move asid_generation/asid_map to it
Date: Thu, 21 Mar 2019 16:36:10 +0000
Message-Id: <20190321163623.20219-2-julien.grall@arm.com>
In-Reply-To: <20190321163623.20219-1-julien.grall@arm.com>

In an attempt to make the ASID allocator generic, create a new structure
asid_info to store all the information necessary for the allocator. For now,
move the variables asid_generation and asid_map to the new structure
asid_info. Follow-up patches will move more variables.

Note: to avoid more renaming afterwards, a local variable 'info' has been
created as a pointer to the ASID allocator structure.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 arch/arm64/mm/context.c | 46 ++++++++++++++++++++++++++--------------------
 1 file changed, 26 insertions(+), 20 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 1f0ea2facf24..34db54f1a39a 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -30,8 +30,11 @@
 static u32 asid_bits;
 static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
 
-static atomic64_t asid_generation;
-static unsigned long *asid_map;
+struct asid_info
+{
+	atomic64_t	generation;
+	unsigned long	*map;
+} asid_info;
 
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
@@ -88,13 +91,13 @@ void verify_cpu_asid_bits(void)
 	}
 }
 
-static void flush_context(void)
+static void flush_context(struct asid_info *info)
 {
 	int i;
 	u64 asid;
 
 	/* Update the list of reserved ASIDs and the ASID bitmap. */
-	bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
+	bitmap_clear(info->map, 0, NUM_USER_ASIDS);
 
 	for_each_possible_cpu(i) {
 		asid = atomic64_xchg_relaxed(&per_cpu(active_asids, i), 0);
@@ -107,7 +110,7 @@ static void flush_context(void)
 		 */
 		if (asid == 0)
 			asid = per_cpu(reserved_asids, i);
-		__set_bit(asid2idx(asid), asid_map);
+		__set_bit(asid2idx(asid), info->map);
 		per_cpu(reserved_asids, i) = asid;
 	}
 
@@ -142,11 +145,11 @@ static bool check_update_reserved_asid(u64 asid, u64 newasid)
 	return hit;
 }
 
-static u64 new_context(struct mm_struct *mm)
+static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 {
 	static u32 cur_idx = 1;
 	u64 asid = atomic64_read(&mm->context.id);
-	u64 generation = atomic64_read(&asid_generation);
+	u64 generation = atomic64_read(&info->generation);
 
 	if (asid != 0) {
 		u64 newasid = generation | (asid & ~ASID_MASK);
@@ -162,7 +165,7 @@ static u64 new_context(struct mm_struct *mm)
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
 		 */
-		if (!__test_and_set_bit(asid2idx(asid), asid_map))
+		if (!__test_and_set_bit(asid2idx(asid), info->map))
 			return newasid;
 	}
 
@@ -173,20 +176,20 @@ static u64 new_context(struct mm_struct *mm)
 	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
 	 * pairs.
 	 */
-	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
+	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, cur_idx);
 	if (asid != NUM_USER_ASIDS)
 		goto set_asid;
 
 	/* We're out of ASIDs, so increment the global generation count */
 	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION,
-						 &asid_generation);
-	flush_context();
+						 &info->generation);
+	flush_context(info);
 
 	/* We have more ASIDs than CPUs, so this will always succeed */
-	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
+	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, 1);
 
 set_asid:
-	__set_bit(asid, asid_map);
+	__set_bit(asid, info->map);
 	cur_idx = asid;
 	return idx2asid(asid) | generation;
 }
@@ -195,6 +198,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 {
 	unsigned long flags;
 	u64 asid, old_active_asid;
+	struct asid_info *info = &asid_info;
 
 	if (system_supports_cnp())
 		cpu_set_reserved_ttbr0();
@@ -217,7 +221,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	 */
 	old_active_asid = atomic64_read(&per_cpu(active_asids, cpu));
 	if (old_active_asid &&
-	    !((asid ^ atomic64_read(&asid_generation)) >> asid_bits) &&
+	    !((asid ^ atomic64_read(&info->generation)) >> asid_bits) &&
 	    atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu),
 				     old_active_asid, asid))
 		goto switch_mm_fastpath;
@@ -225,8 +229,8 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
-	if ((asid ^ atomic64_read(&asid_generation)) >> asid_bits) {
-		asid = new_context(mm);
+	if ((asid ^ atomic64_read(&info->generation)) >> asid_bits) {
+		asid = new_context(info, mm);
 		atomic64_set(&mm->context.id, asid);
 	}
 
@@ -259,16 +263,18 @@ asmlinkage void post_ttbr_update_workaround(void)
 
 static int asids_init(void)
 {
+	struct asid_info *info = &asid_info;
+
 	asid_bits = get_cpu_asid_bits();
 	/*
 	 * Expect allocation after rollover to fail if we don't have at least
 	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
 	 */
 	WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus());
-	atomic64_set(&asid_generation, ASID_FIRST_VERSION);
-	asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*asid_map),
-			   GFP_KERNEL);
-	if (!asid_map)
+	atomic64_set(&info->generation, ASID_FIRST_VERSION);
+	info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*info->map),
+			    GFP_KERNEL);
+	if (!info->map)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
 		      NUM_USER_ASIDS);
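The scheme this patch builds on packs a rollover generation into the upper
bits of mm->context.id and the hardware ASID into the lower asid_bits. As a
minimal sketch (the helper name below is illustrative, not part of the
patch), the staleness test used throughout the allocator boils down to:

	/* An ID is stale as soon as its generation tag no longer matches. */
	static inline bool asid_is_stale(u64 id, u64 generation, u32 asid_bits)
	{
		/* Any difference above the low asid_bits is an older generation. */
		return ((id ^ generation) >> asid_bits) != 0;
	}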
From patchwork Thu Mar 21 16:36:11 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 160801
From: Julien Grall <julien.grall@arm.com>
Subject: [PATCH RFC 02/14] arm64/mm: Move active_asids and reserved_asids to asid_info
Date: Thu, 21 Mar 2019 16:36:11 +0000
Message-Id: <20190321163623.20219-3-julien.grall@arm.com>

The variables active_asids and reserved_asids hold information for a given
ASID allocator, so move them to the asid_info structure.

At the same time, introduce wrappers to access the active and reserved ASIDs
to make the code clearer.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 arch/arm64/mm/context.c | 34 ++++++++++++++++++++++++------------
 1 file changed, 22 insertions(+), 12 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 34db54f1a39a..cfe4c5f7abf3 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -34,10 +34,16 @@ struct asid_info
 {
 	atomic64_t	generation;
 	unsigned long	*map;
+	atomic64_t __percpu	*active;
+	u64 __percpu		*reserved;
 } asid_info;
 
+#define active_asid(info, cpu)   *per_cpu_ptr((info)->active, cpu)
+#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu)
+
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
+
 static cpumask_t tlb_flush_pending;
@@ -100,7 +106,7 @@ static void flush_context(struct asid_info *info)
 	bitmap_clear(info->map, 0, NUM_USER_ASIDS);
 
 	for_each_possible_cpu(i) {
-		asid = atomic64_xchg_relaxed(&per_cpu(active_asids, i), 0);
+		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
 		/*
 		 * If this CPU has already been through a
 		 * rollover, but hasn't run another task in
@@ -109,9 +115,9 @@ static void flush_context(struct asid_info *info)
 		 * the process it is still running.
 		 */
 		if (asid == 0)
-			asid = per_cpu(reserved_asids, i);
+			asid = reserved_asid(info, i);
 		__set_bit(asid2idx(asid), info->map);
-		per_cpu(reserved_asids, i) = asid;
+		reserved_asid(info, i) = asid;
 	}
 
 	/*
@@ -121,7 +127,8 @@ static void flush_context(struct asid_info *info)
 	cpumask_setall(&tlb_flush_pending);
 }
 
-static bool check_update_reserved_asid(u64 asid, u64 newasid)
+static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
+				       u64 newasid)
 {
 	int cpu;
 	bool hit = false;
@@ -136,9 +143,9 @@ static bool check_update_reserved_asid(u64 asid, u64 newasid)
 	 * generation.
 	 */
 	for_each_possible_cpu(cpu) {
-		if (per_cpu(reserved_asids, cpu) == asid) {
+		if (reserved_asid(info, cpu) == asid) {
 			hit = true;
-			per_cpu(reserved_asids, cpu) = newasid;
+			reserved_asid(info, cpu) = newasid;
 		}
 	}
 
@@ -158,7 +165,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 	 * If our current ASID was active during a rollover, we
 	 * can continue to use it and this was just a false alarm.
 	 */
-	if (check_update_reserved_asid(asid, newasid))
+	if (check_update_reserved_asid(info, asid, newasid))
 		return newasid;
 
 	/*
@@ -207,8 +214,8 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 
 	/*
 	 * The memory ordering here is subtle.
-	 * If our active_asids is non-zero and the ASID matches the current
-	 * generation, then we update the active_asids entry with a relaxed
+	 * If our active_asid is non-zero and the ASID matches the current
+	 * generation, then we update the active_asid entry with a relaxed
 	 * cmpxchg. Racing with a concurrent rollover means that either:
 	 *
 	 * - We get a zero back from the cmpxchg and end up waiting on the
@@ -219,10 +226,10 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	 *   relaxed xchg in flush_context will treat us as reserved
 	 *   because atomic RmWs are totally ordered for a given location.
 	 */
-	old_active_asid = atomic64_read(&per_cpu(active_asids, cpu));
+	old_active_asid = atomic64_read(&active_asid(info, cpu));
 	if (old_active_asid &&
 	    !((asid ^ atomic64_read(&info->generation)) >> asid_bits) &&
-	    atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu),
+	    atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
 				     old_active_asid, asid))
 		goto switch_mm_fastpath;
 
@@ -237,7 +244,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
 		local_flush_tlb_all();
 
-	atomic64_set(&per_cpu(active_asids, cpu), asid);
+	atomic64_set(&active_asid(info, cpu), asid);
 	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
 
 switch_mm_fastpath:
@@ -278,6 +285,9 @@ static int asids_init(void)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
 		      NUM_USER_ASIDS);
 
+	info->active = &active_asids;
+	info->reserved = &reserved_asids;
+
 	pr_info("ASID allocator initialised with %lu entries\n", NUM_USER_ASIDS);
 	return 0;
 }
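The indirection pays off once a second allocator instance exists. A minimal
sketch, assuming a hypothetical second user (the vmid_* names below are
illustrative, not taken from this series):

	static DEFINE_PER_CPU(atomic64_t, vmid_active);
	static DEFINE_PER_CPU(u64, vmid_reserved);

	static struct asid_info vmid_info = {
		.active   = &vmid_active,
		.reserved = &vmid_reserved,
	};

	/* active_asid(&vmid_info, cpu) expands to *per_cpu_ptr(&vmid_active, cpu) */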
From patchwork Thu Mar 21 16:36:12 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 160813
From: Julien Grall <julien.grall@arm.com>
Subject: [PATCH RFC 03/14] arm64/mm: Move bits to asid_info
Date: Thu, 21 Mar 2019 16:36:12 +0000
Message-Id: <20190321163623.20219-4-julien.grall@arm.com>

The variable asid_bits holds information for a given ASID allocator, so move
it to the asid_info structure. Because most of the macros were relying on
asid_bits, they now take an extra parameter: a pointer to the asid_info
structure.
Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 arch/arm64/mm/context.c | 59 +++++++++++++++++++++++++------------------------
 1 file changed, 30 insertions(+), 29 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index cfe4c5f7abf3..da17ed6c7117 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -27,7 +27,6 @@
 #include <asm/smp.h>
 #include <asm/tlbflush.h>
 
-static u32 asid_bits;
 static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
 
 struct asid_info
@@ -36,6 +35,7 @@ struct asid_info
 	unsigned long	*map;
 	atomic64_t __percpu	*active;
 	u64 __percpu		*reserved;
+	u32			bits;
 } asid_info;
 
 #define active_asid(info, cpu)   *per_cpu_ptr((info)->active, cpu)
@@ -46,17 +46,17 @@ static DEFINE_PER_CPU(u64, reserved_asids);
 
 static cpumask_t tlb_flush_pending;
 
-#define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
-#define ASID_FIRST_VERSION	(1UL << asid_bits)
+#define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
+#define ASID_FIRST_VERSION(info)	(1UL << ((info)->bits))
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-#define NUM_USER_ASIDS		(ASID_FIRST_VERSION >> 1)
-#define asid2idx(asid)		(((asid) & ~ASID_MASK) >> 1)
-#define idx2asid(idx)		(((idx) << 1) & ~ASID_MASK)
+#define NUM_USER_ASIDS(info)	(ASID_FIRST_VERSION(info) >> 1)
+#define asid2idx(info, asid)	(((asid) & ~ASID_MASK(info)) >> 1)
+#define idx2asid(info, idx)	(((idx) << 1) & ~ASID_MASK(info))
 #else
-#define NUM_USER_ASIDS		(ASID_FIRST_VERSION)
-#define asid2idx(asid)		((asid) & ~ASID_MASK)
-#define idx2asid(idx)		asid2idx(idx)
+#define NUM_USER_ASIDS(info)	(ASID_FIRST_VERSION(info))
+#define asid2idx(info, asid)	((asid) & ~ASID_MASK(info))
+#define idx2asid(info, idx)	asid2idx(info, idx)
 #endif
 
 /* Get the ASIDBits supported by the current CPU */
@@ -86,13 +86,13 @@ void verify_cpu_asid_bits(void)
 {
 	u32 asid = get_cpu_asid_bits();
 
-	if (asid < asid_bits) {
+	if (asid < asid_info.bits) {
 		/*
 		 * We cannot decrease the ASID size at runtime, so panic if we support
 		 * fewer ASID bits than the boot CPU.
 		 */
 		pr_crit("CPU%d: smaller ASID size(%u) than boot CPU (%u)\n",
-			smp_processor_id(), asid, asid_bits);
+			smp_processor_id(), asid, asid_info.bits);
 		cpu_panic_kernel();
 	}
 }
@@ -103,7 +103,7 @@ static void flush_context(struct asid_info *info)
 	u64 asid;
 
 	/* Update the list of reserved ASIDs and the ASID bitmap. */
-	bitmap_clear(info->map, 0, NUM_USER_ASIDS);
+	bitmap_clear(info->map, 0, NUM_USER_ASIDS(info));
 
 	for_each_possible_cpu(i) {
 		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
@@ -116,7 +116,7 @@ static void flush_context(struct asid_info *info)
 		 */
 		if (asid == 0)
 			asid = reserved_asid(info, i);
-		__set_bit(asid2idx(asid), info->map);
+		__set_bit(asid2idx(info, asid), info->map);
 		reserved_asid(info, i) = asid;
 	}
 
@@ -159,7 +159,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 	u64 generation = atomic64_read(&info->generation);
 
 	if (asid != 0) {
-		u64 newasid = generation | (asid & ~ASID_MASK);
+		u64 newasid = generation | (asid & ~ASID_MASK(info));
 
 		/*
 		 * If our current ASID was active during a rollover, we
@@ -172,7 +172,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
 		 */
-		if (!__test_and_set_bit(asid2idx(asid), info->map))
+		if (!__test_and_set_bit(asid2idx(info, asid), info->map))
 			return newasid;
 	}
 
@@ -183,22 +183,22 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
 	 * pairs.
 	 */
-	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, cur_idx);
-	if (asid != NUM_USER_ASIDS)
+	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), cur_idx);
+	if (asid != NUM_USER_ASIDS(info))
 		goto set_asid;
 
 	/* We're out of ASIDs, so increment the global generation count */
-	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION,
+	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info),
 						 &info->generation);
 	flush_context(info);
 
 	/* We have more ASIDs than CPUs, so this will always succeed */
-	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, 1);
+	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), 1);
 
 set_asid:
 	__set_bit(asid, info->map);
 	cur_idx = asid;
-	return idx2asid(asid) | generation;
+	return idx2asid(info, asid) | generation;
 }
 
@@ -228,7 +228,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	 */
 	old_active_asid = atomic64_read(&active_asid(info, cpu));
 	if (old_active_asid &&
-	    !((asid ^ atomic64_read(&info->generation)) >> asid_bits) &&
+	    !((asid ^ atomic64_read(&info->generation)) >> info->bits) &&
 	    atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
 				     old_active_asid, asid))
 		goto switch_mm_fastpath;
@@ -236,7 +236,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
-	if ((asid ^ atomic64_read(&info->generation)) >> asid_bits) {
+	if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
 		asid = new_context(info, mm);
 		atomic64_set(&mm->context.id, asid);
 	}
@@ -272,23 +272,24 @@ static int asids_init(void)
 {
 	struct asid_info *info = &asid_info;
 
-	asid_bits = get_cpu_asid_bits();
+	info->bits = get_cpu_asid_bits();
 	/*
 	 * Expect allocation after rollover to fail if we don't have at least
 	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
 	 */
*/ - WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus()); - atomic64_set(&info->generation, ASID_FIRST_VERSION); - info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*info->map), - GFP_KERNEL); + WARN_ON(NUM_USER_ASIDS(info) - 1 <= num_possible_cpus()); + atomic64_set(&info->generation, ASID_FIRST_VERSION(info)); + info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)), + sizeof(*info->map), GFP_KERNEL); if (!info->map) panic("Failed to allocate bitmap for %lu ASIDs\n", - NUM_USER_ASIDS); + NUM_USER_ASIDS(info)); info->active = &active_asids; info->reserved = &reserved_asids; - pr_info("ASID allocator initialised with %lu entries\n", NUM_USER_ASIDS); + pr_info("ASID allocator initialised with %lu entries\n", + NUM_USER_ASIDS(info)); return 0; } early_initcall(asids_init); From patchwork Thu Mar 21 16:36:13 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 160802 Delivered-To: patch@linaro.org Received: by 2002:a02:c6d8:0:0:0:0:0 with SMTP id r24csp1001854jan; Thu, 21 Mar 2019 09:36:56 -0700 (PDT) X-Google-Smtp-Source: APXvYqy8Wd7xDNgmGD6HZqT3ryVdt2ImTyBV9xe1r4zOCRvkW7IKkQgcY+qDcpCecgAFVgFQalO1 X-Received: by 2002:a63:1723:: with SMTP id x35mr4069726pgl.364.1553186216819; Thu, 21 Mar 2019 09:36:56 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1553186216; cv=none; d=google.com; s=arc-20160816; b=SWjRAqdL1IdTgMVKK/p0oM7hOAGdU2nXiFG6ng5FAmeXzSQ4zblBib43c/QQDlbJLA Ot+tK26eAjNAySCGS0R6yqNbIaiEnKMpzLa4idmtWy4dRnyAEFiZeLku7mCdoTbOY0On JgViby81m8v6fOzstX+1a/nOjv/vBD1noZmx6pbVjRwENpSsubYq9dkMKFj56Yca0JNB C1ra8IAU2u4AMzKIoYwoi4XG5XWT4WPMZgF9EMSTESOPDPbhdVonCyO2S/GWK+vqmkjj C+8c5voyQ4I+BjAHNqYjQr1MWmdirCh0cNOyqp2PJeiyOYGDO4Vzh1OrH8sFUceTytob 1E4A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from; bh=AUEhiB1OdB9Pd+vTlmH/pmARc0jMnn/mnjtHblH1WWo=; b=uF7H/7ZAiFYj0t1WEJFQnYRZT6CDwa1zsXSFmxskpPW6Hmf/qSwsXq3eu6IOkEgCQd Rw3h5GilCFXg+TuMXmMSv8vy4nIg7upY54bBb128xLzAa4l+D7JD5JnlQL9ydgVReyhw 3fM5zKWZqDBm9vdRb5zxPoLDbaG1hGsGMjcLjDCLJbMd6mgkh8iSPx/F92AwlUeH/g1U 18oixHnKHeLcqNWVzrrX/aEIs0hy1CxufnV4toC6EBUafkrcDzKGF2gFfqrkraY8u5U6 aTYrWZHgYBM0twuQp7yxel3PSGahHnDPFKNRv9a9KNCtkqLFahv3dGEXYJ6N/b06495j 6PXg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
From patchwork Thu Mar 21 16:36:13 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 160802
From: Julien Grall <julien.grall@arm.com>
Subject: [PATCH RFC 04/14] arm64/mm: Move the variable lock and tlb_flush_pending to asid_info
Date: Thu, 21 Mar 2019 16:36:13 +0000
Message-Id: <20190321163623.20219-5-julien.grall@arm.com>

The variables cpu_asid_lock and tlb_flush_pending hold information for a
given ASID allocator, so move them to the asid_info structure.
Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 arch/arm64/mm/context.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index da17ed6c7117..e98ab348b9cb 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -27,8 +27,6 @@
 #include <asm/smp.h>
 #include <asm/tlbflush.h>
 
-static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
-
 struct asid_info
 {
 	atomic64_t	generation;
@@ -36,6 +34,9 @@ struct asid_info
 	atomic64_t __percpu	*active;
 	u64 __percpu		*reserved;
 	u32			bits;
+	raw_spinlock_t		lock;
+	/* Which CPU requires context flush on next call */
+	cpumask_t		flush_pending;
 } asid_info;
 
 #define active_asid(info, cpu)   *per_cpu_ptr((info)->active, cpu)
@@ -44,8 +45,6 @@ struct asid_info
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
 
-static cpumask_t tlb_flush_pending;
-
 #define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
 #define ASID_FIRST_VERSION(info)	(1UL << ((info)->bits))
 
@@ -124,7 +123,7 @@ static void flush_context(struct asid_info *info)
 	 * Queue a TLB invalidation for each CPU to perform on next
 	 * context-switch
 	 */
-	cpumask_setall(&tlb_flush_pending);
+	cpumask_setall(&info->flush_pending);
 }
 
 static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
@@ -233,7 +232,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 				     old_active_asid, asid))
 		goto switch_mm_fastpath;
 
-	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
+	raw_spin_lock_irqsave(&info->lock, flags);
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
 	if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
@@ -241,11 +240,11 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 		atomic64_set(&mm->context.id, asid);
 	}
 
-	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
+	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
 		local_flush_tlb_all();
 
 	atomic64_set(&active_asid(info, cpu), asid);
-	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
+	raw_spin_unlock_irqrestore(&info->lock, flags);
 
 switch_mm_fastpath:
 
@@ -288,6 +287,8 @@ static int asids_init(void)
 	info->active = &active_asids;
 	info->reserved = &reserved_asids;
 
+	raw_spin_lock_init(&info->lock);
+
 	pr_info("ASID allocator initialised with %lu entries\n",
 		NUM_USER_ASIDS(info));
 	return 0;
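The slow-path handshake these two fields implement, in outline (a condensed
sketch of the code above, not new logic):

	raw_spin_lock_irqsave(&info->lock, flags);
	/* new_context() may roll over and call flush_context(), which
	 * does cpumask_setall(&info->flush_pending) */
	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
		local_flush_tlb_all();	/* each CPU flushes once per rollover */
	raw_spin_unlock_irqrestore(&info->lock, flags);

With both fields inside asid_info, two allocator instances no longer share a
lock or a pending-flush mask, so they cannot contend with each other.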
From patchwork Thu Mar 21 16:36:14 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 160803
From: Julien Grall <julien.grall@arm.com>
Subject: [PATCH RFC 05/14] arm64/mm: Remove dependency on MM in new_context
Date: Thu, 21 Mar 2019 16:36:14 +0000
Message-Id: <20190321163623.20219-6-julien.grall@arm.com>

The function new_context will be part of a generic ASID allocator. At the
moment, the mm_struct is only used to fetch the ASID. To remove the
dependency on mm_struct, pass a pointer to the current ASID instead.
Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 arch/arm64/mm/context.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index e98ab348b9cb..488845c39c39 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -151,10 +151,10 @@ static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
 	return hit;
 }
 
-static u64 new_context(struct asid_info *info, struct mm_struct *mm)
+static u64 new_context(struct asid_info *info, atomic64_t *pasid)
 {
 	static u32 cur_idx = 1;
-	u64 asid = atomic64_read(&mm->context.id);
+	u64 asid = atomic64_read(pasid);
 	u64 generation = atomic64_read(&info->generation);
 
 	if (asid != 0) {
@@ -236,7 +236,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
 	if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
-		asid = new_context(info, mm);
+		asid = new_context(info, &mm->context.id);
 		atomic64_set(&mm->context.id, asid);
 	}
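With the mm_struct dependency gone, any context that owns an atomic64_t tag
can use the allocator. A minimal sketch of a hypothetical non-MM caller (the
names are illustrative only):

	struct my_hw_ctxt {		/* hypothetical context, e.g. a VM */
		atomic64_t id;		/* generation | asid, as for mm->context.id */
	};

	static u64 my_ctxt_refresh(struct asid_info *info, struct my_hw_ctxt *c)
	{
		return new_context(info, &c->id);
	}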
From patchwork Thu Mar 21 16:36:15 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 160812
From: Julien Grall <julien.grall@arm.com>
Subject: [PATCH RFC 06/14] arm64/mm: Store the number of asid allocated per context
Date: Thu, 21 Mar 2019 16:36:15 +0000
Message-Id: <20190321163623.20219-7-julien.grall@arm.com>

Currently the number of ASIDs allocated per context is determined at compile
time. As the algorithm is becoming generic, the user may want to instantiate
the ASID allocator multiple times with a different number of ASIDs per
context.

Add a field in asid_info to track the number of ASIDs allocated per context.
This is stored as a shift amount to avoid divisions in the code, which means
the number of ASIDs allocated per context must be a power of two.

At the same time, rename NUM_USER_ASIDS to NUM_CTXT_ASIDS to make the name
more generic.
Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 arch/arm64/mm/context.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 488845c39c39..5a4c2b1aac71 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -37,6 +37,8 @@ struct asid_info
 	raw_spinlock_t		lock;
 	/* Which CPU requires context flush on next call */
 	cpumask_t		flush_pending;
+	/* Number of ASID allocated by context (shift value) */
+	unsigned int		ctxt_shift;
 } asid_info;
 
 #define active_asid(info, cpu)   *per_cpu_ptr((info)->active, cpu)
@@ -49,15 +51,15 @@ static DEFINE_PER_CPU(u64, reserved_asids);
 #define ASID_FIRST_VERSION(info)	(1UL << ((info)->bits))
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-#define NUM_USER_ASIDS(info)	(ASID_FIRST_VERSION(info) >> 1)
-#define asid2idx(info, asid)	(((asid) & ~ASID_MASK(info)) >> 1)
-#define idx2asid(info, idx)	(((idx) << 1) & ~ASID_MASK(info))
+#define ASID_PER_CONTEXT	2
 #else
-#define NUM_USER_ASIDS(info)	(ASID_FIRST_VERSION(info))
-#define asid2idx(info, asid)	((asid) & ~ASID_MASK(info))
-#define idx2asid(info, idx)	asid2idx(info, idx)
+#define ASID_PER_CONTEXT	1
 #endif
 
+#define NUM_CTXT_ASIDS(info)	(ASID_FIRST_VERSION(info) >> (info)->ctxt_shift)
+#define asid2idx(info, asid)	(((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift)
+#define idx2asid(info, idx)	(((idx) << (info)->ctxt_shift) & ~ASID_MASK(info))
+
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
 {
@@ -102,7 +104,7 @@ static void flush_context(struct asid_info *info)
 	u64 asid;
 
 	/* Update the list of reserved ASIDs and the ASID bitmap. */
-	bitmap_clear(info->map, 0, NUM_USER_ASIDS(info));
+	bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info));
 
 	for_each_possible_cpu(i) {
 		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
@@ -182,8 +184,8 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid)
 	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
 	 * pairs.
 	 */
-	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), cur_idx);
-	if (asid != NUM_USER_ASIDS(info))
+	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx);
+	if (asid != NUM_CTXT_ASIDS(info))
 		goto set_asid;
 
 	/* We're out of ASIDs, so increment the global generation count */
@@ -192,7 +194,7 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid)
 	flush_context(info);
 
 	/* We have more ASIDs than CPUs, so this will always succeed */
-	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), 1);
+	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1);
 
 set_asid:
 	__set_bit(asid, info->map);
@@ -272,17 +274,18 @@ static int asids_init(void)
 	struct asid_info *info = &asid_info;
 
 	info->bits = get_cpu_asid_bits();
+	info->ctxt_shift = ilog2(ASID_PER_CONTEXT);
 	/*
 	 * Expect allocation after rollover to fail if we don't have at least
 	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
 	 */
-	WARN_ON(NUM_USER_ASIDS(info) - 1 <= num_possible_cpus());
+	WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
 	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
-	info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)),
+	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
 			    sizeof(*info->map), GFP_KERNEL);
 	if (!info->map)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
-		      NUM_USER_ASIDS(info));
+		      NUM_CTXT_ASIDS(info));
 
 	info->active = &active_asids;
 	info->reserved = &reserved_asids;
@@ -290,7 +293,7 @@ static int asids_init(void)
 	raw_spin_lock_init(&info->lock);
 
 	pr_info("ASID allocator initialised with %lu entries\n",
-		NUM_USER_ASIDS(info));
+		NUM_CTXT_ASIDS(info));
 	return 0;
 }
 early_initcall(asids_init);
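The shift trick in one worked example (assuming 16 ASID bits and kpti
enabled, so ASID_PER_CONTEXT == 2):

	info->ctxt_shift     == ilog2(2) == 1
	NUM_CTXT_ASIDS(info) == (1UL << 16) >> 1 == 0x8000
	idx2asid(info, 3)    == (3 << 1) & 0xffff == 6   /* even user ASID */

Every division or multiplication by asid_per_ctxt becomes a shift, which is
why the value must be a power of two.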
From patchwork Thu Mar 21 16:36:16 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 160804
From: Julien Grall <julien.grall@arm.com>
Subject: [PATCH RFC 07/14] arm64/mm: Introduce NUM_ASIDS
Date: Thu, 21 Mar 2019 16:36:16 +0000
Message-Id: <20190321163623.20219-8-julien.grall@arm.com>

At the moment ASID_FIRST_VERSION is used to know the number of ASIDs
supported. As we are going to move the ASID allocator to a separate file, it
would be better to use a different name for external users.

This patch adds NUM_ASIDS and implements ASID_FIRST_VERSION using it.
Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 arch/arm64/mm/context.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 5a4c2b1aac71..fb13bc249951 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -48,7 +48,9 @@ static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
 
 #define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
-#define ASID_FIRST_VERSION(info)	(1UL << ((info)->bits))
+#define NUM_ASIDS(info)			(1UL << ((info)->bits))
+
+#define ASID_FIRST_VERSION(info)	NUM_ASIDS(info)
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 #define ASID_PER_CONTEXT	2
@@ -56,7 +58,7 @@ static DEFINE_PER_CPU(u64, reserved_asids);
 #define ASID_PER_CONTEXT	1
 #endif
 
-#define NUM_CTXT_ASIDS(info)	(ASID_FIRST_VERSION(info) >> (info)->ctxt_shift)
+#define NUM_CTXT_ASIDS(info)	(NUM_ASIDS(info) >> (info)->ctxt_shift)
 #define asid2idx(info, asid)	(((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift)
 #define idx2asid(info, idx)	(((idx) << (info)->ctxt_shift) & ~ASID_MASK(info))
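The two names now separate two roles of the same value: NUM_ASIDS(info) is
the size of the ASID space, while ASID_FIRST_VERSION(info) is the step by
which the generation advances. Because the generation moves in multiples of
NUM_ASIDS(info), generation and ASID never overlap within an id:

	/* id = generation | asid, with asid < NUM_ASIDS(info) */
	/* e.g. with 16 bits: generations are 0x10000, 0x20000, 0x30000, ... */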
From patchwork Thu Mar 21 16:36:17 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 160811
From: Julien Grall <julien.grall@arm.com>
Subject: [PATCH RFC 08/14] arm64/mm: Split asid_inits in 2 parts
Date: Thu, 21 Mar 2019 16:36:17 +0000
Message-Id: <20190321163623.20219-9-julien.grall@arm.com>

Move the common initialization of the ASID allocator out into a separate
function.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 arch/arm64/mm/context.c | 43 +++++++++++++++++++++++++++++++------------
 1 file changed, 31 insertions(+), 12 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index fb13bc249951..b071a1b3469e 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -271,31 +271,50 @@ asmlinkage void post_ttbr_update_workaround(void)
 			CONFIG_CAVIUM_ERRATUM_27456));
 }
 
-static int asids_init(void)
+/*
+ * Initialize the ASID allocator
+ *
+ * @info: Pointer to the asid allocator structure
+ * @bits: Number of ASIDs available
+ * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are
+ * allocated contiguously for a given context. This value should be a power of
+ * 2.
+ */
+static int asid_allocator_init(struct asid_info *info,
+			       u32 bits, unsigned int asid_per_ctxt)
 {
-	struct asid_info *info = &asid_info;
-
-	info->bits = get_cpu_asid_bits();
-	info->ctxt_shift = ilog2(ASID_PER_CONTEXT);
+	info->bits = bits;
+	info->ctxt_shift = ilog2(asid_per_ctxt);
 	/*
 	 * Expect allocation after rollover to fail if we don't have at least
-	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
+	 * one more ASID than CPUs. ASID #0 is always reserved.
 	 */
 	 */
 	WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
 	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
 	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
 			    sizeof(*info->map), GFP_KERNEL);
 	if (!info->map)
-		panic("Failed to allocate bitmap for %lu ASIDs\n",
-		      NUM_CTXT_ASIDS(info));
-
-	info->active = &active_asids;
-	info->reserved = &reserved_asids;
+		return -ENOMEM;

 	raw_spin_lock_init(&info->lock);

+	return 0;
+}
+
+static int asids_init(void)
+{
+	u32 bits = get_cpu_asid_bits();
+
+	if (asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT))
+		panic("Unable to initialize ASID allocator for %lu ASIDs\n",
+		      1UL << bits);
+
+	asid_info.active = &active_asids;
+	asid_info.reserved = &reserved_asids;
+
 	pr_info("ASID allocator initialised with %lu entries\n",
-		NUM_CTXT_ASIDS(info));
+		NUM_CTXT_ASIDS(&asid_info));
+
 	return 0;
 }
 early_initcall(asids_init);
From patchwork Thu Mar 21 16:36:18 2019
X-Patchwork-Submitter: Julien Grall <julien.grall@arm.com>
X-Patchwork-Id: 160805
From: Julien Grall <julien.grall@arm.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
Cc: christoffer.dall@arm.com, james.morse@arm.com, marc.zyngier@arm.com,
 julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com,
 will.deacon@arm.com, Julien Grall <julien.grall@arm.com>
Subject: [PATCH RFC 09/14] arm64/mm: Split the function check_and_switch_context in 3 parts
Date: Thu, 21 Mar 2019 16:36:18 +0000
Message-Id: <20190321163623.20219-10-julien.grall@arm.com>
In-Reply-To: <20190321163623.20219-1-julien.grall@arm.com>
References: <20190321163623.20219-1-julien.grall@arm.com>

The function check_and_switch_context is used to:
    1) Check whether the ASID is still valid
    2) Generate a new one if it is not valid
    3) Switch the context

While the latter is specific to the MM subsystem, the rest could be
part of the generic ASID allocator.

After this patch, the function is split into 3 parts, corresponding to
the following functions:
    1) asid_check_context: Check if the ASID is still valid
    2) asid_new_context: Generate a new ASID for the context
    3) check_and_switch_context: Call 1) and 2) and switch the context

1) and 2) have not been merged into a single function because we want
to avoid adding a branch when the ASID is still valid. This will matter
when the code is moved into a separate file later on, as 1) will reside
in the header as a static inline function.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
Will wants to avoid adding a branch when the ASID is still valid, so
1) and 2) are kept as separate functions. The former will move to a new
header and be made static inline.
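To sketch the intent of the split (an editorial illustration only; fast_path_claimed() is a hypothetical stand-in for the relaxed-cmpxchg sequence in the diff below):

    static inline void asid_check_context(struct asid_info *info,
                                          atomic64_t *pasid, unsigned int cpu)
    {
        /* Fast path: racy reads plus one relaxed cmpxchg, no lock taken. */
        if (fast_path_claimed(info, pasid, cpu))
            return;

        /* Slow path, kept out of line: takes info->lock, may roll over. */
        asid_new_context(info, pasid, cpu);
    }

Keeping asid_new_context() as a separate out-of-line function is what lets the common case stay cheap once asid_check_context() later becomes a static inline in a header.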
---
 arch/arm64/mm/context.c | 51 +++++++++++++++++++++++++++++++------------
 1 file changed, 39 insertions(+), 12 deletions(-)

--
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index b071a1b3469e..cbf1c24cb3ee 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -204,16 +204,21 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid)
 	return idx2asid(info, asid) | generation;
 }

-void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
+static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
+			     unsigned int cpu);
+
+/*
+ * Check the ASID is still valid for the context. If not generate a new ASID.
+ *
+ * @pasid: Pointer to the current ASID batch
+ * @cpu: current CPU ID. Must have been acquired through get_cpu()
+ */
+static void asid_check_context(struct asid_info *info,
+			       atomic64_t *pasid, unsigned int cpu)
 {
-	unsigned long flags;
 	u64 asid, old_active_asid;
-	struct asid_info *info = &asid_info;

-	if (system_supports_cnp())
-		cpu_set_reserved_ttbr0();
-
-	asid = atomic64_read(&mm->context.id);
+	asid = atomic64_read(pasid);

 	/*
 	 * The memory ordering here is subtle.
@@ -234,14 +239,30 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	    !((asid ^ atomic64_read(&info->generation)) >> info->bits) &&
 	    atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
 				     old_active_asid, asid))
-		goto switch_mm_fastpath;
+		return;
+
+	asid_new_context(info, pasid, cpu);
+}
+
+/*
+ * Generate a new ASID for the context.
+ *
+ * @pasid: Pointer to the current ASID batch allocated. It will be updated
+ * with the new ASID batch.
+ * @cpu: current CPU ID. Must have been acquired through get_cpu()
+ */
+static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
+			     unsigned int cpu)
+{
+	unsigned long flags;
+	u64 asid;

 	raw_spin_lock_irqsave(&info->lock, flags);
 	/* Check that our ASID belongs to the current generation. */
-	asid = atomic64_read(&mm->context.id);
+	asid = atomic64_read(pasid);
 	if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
-		asid = new_context(info, &mm->context.id);
-		atomic64_set(&mm->context.id, asid);
+		asid = new_context(info, pasid);
+		atomic64_set(pasid, asid);
 	}

 	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
@@ -249,8 +270,14 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)

 	atomic64_set(&active_asid(info, cpu), asid);
 	raw_spin_unlock_irqrestore(&info->lock, flags);
+}
+
+void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
+{
+	if (system_supports_cnp())
+		cpu_set_reserved_ttbr0();

-switch_mm_fastpath:
+	asid_check_context(&asid_info, &mm->context.id, cpu);

 	arm64_apply_bp_hardening();
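A minimal sketch of the calling contract spelled out in the comments above (illustrative only; real callers such as the context-switch path already run with preemption disabled, so this shows the requirement rather than code the kernel is missing):

    unsigned int cpu = get_cpu();   /* pin this CPU, per the @cpu contract */
    check_and_switch_context(mm, cpu);
    put_cpu();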
From patchwork Thu Mar 21 16:36:19 2019
X-Patchwork-Submitter: Julien Grall <julien.grall@arm.com>
X-Patchwork-Id: 160810
From: Julien Grall <julien.grall@arm.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
Cc: christoffer.dall@arm.com, james.morse@arm.com, marc.zyngier@arm.com,
 julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com,
 will.deacon@arm.com, Julien Grall <julien.grall@arm.com>
Subject: [PATCH RFC 10/14] arm64/mm: Introduce a callback to flush the local context
Date: Thu, 21 Mar 2019 16:36:19 +0000
Message-Id: <20190321163623.20219-11-julien.grall@arm.com>
In-Reply-To: <20190321163623.20219-1-julien.grall@arm.com>
References: <20190321163623.20219-1-julien.grall@arm.com>

Flushing the local context will vary depending on the actual user of
the ASID allocator. Introduce a new callback to flush the local
context, and move the call that flushes the local TLB into it.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 arch/arm64/mm/context.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

--
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index cbf1c24cb3ee..678a57b77c91 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -39,6 +39,8 @@ struct asid_info
 	cpumask_t	flush_pending;
 	/* Number of ASID allocated by context (shift value) */
 	unsigned int	ctxt_shift;
+	/* Callback to locally flush the context. */
+	void		(*flush_cpu_ctxt_cb)(void);
 } asid_info;

 #define active_asid(info, cpu)	*per_cpu_ptr((info)->active, cpu)
@@ -266,7 +268,7 @@ static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
 	}

 	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
-		local_flush_tlb_all();
+		info->flush_cpu_ctxt_cb();

 	atomic64_set(&active_asid(info, cpu), asid);
 	raw_spin_unlock_irqrestore(&info->lock, flags);
@@ -298,6 +300,11 @@ asmlinkage void post_ttbr_update_workaround(void)
 			CONFIG_CAVIUM_ERRATUM_27456));
 }

+static void asid_flush_cpu_ctxt(void)
+{
+	local_flush_tlb_all();
+}
+
 /*
  * Initialize the ASID allocator
  *
@@ -308,10 +315,12 @@ asmlinkage void post_ttbr_update_workaround(void)
  * 2.
  */
 static int asid_allocator_init(struct asid_info *info,
-			       u32 bits, unsigned int asid_per_ctxt)
+			       u32 bits, unsigned int asid_per_ctxt,
+			       void (*flush_cpu_ctxt_cb)(void))
 {
 	info->bits = bits;
 	info->ctxt_shift = ilog2(asid_per_ctxt);
+	info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb;
 	/*
 	 * Expect allocation after rollover to fail if we don't have at least
 	 * one more ASID than CPUs. ASID #0 is always reserved.
@@ -332,7 +341,8 @@ static int asids_init(void)
 {
 	u32 bits = get_cpu_asid_bits();

-	if (asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT))
+	if (asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT,
+				asid_flush_cpu_ctxt))
 		panic("Unable to initialize ASID allocator for %lu ASIDs\n",
 		      1UL << bits);
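With the flush behind a callback, a different user of the allocator can plug in its own invalidation. A hypothetical sketch of what a stage-2 (VMID) user might register, in the spirit of what patch 14 later does (the function names are assumptions at this point in the series):

    static void vmid_flush_cpu_ctxt(void)
    {
        /* Flush this CPU's guest (stage-2) context via EL2, instead of
         * the host TLBs that asid_flush_cpu_ctxt() targets. */
        kvm_call_hyp(__kvm_flush_cpu_vmid_context);
    }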
From patchwork Thu Mar 21 16:36:20 2019
X-Patchwork-Submitter: Julien Grall <julien.grall@arm.com>
X-Patchwork-Id: 160806
From: Julien Grall <julien.grall@arm.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
Cc: christoffer.dall@arm.com, james.morse@arm.com, marc.zyngier@arm.com,
 julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com,
 will.deacon@arm.com, Julien Grall <julien.grall@arm.com>
Subject: [PATCH RFC 11/14] arm64: Move the ASID allocator code into a separate file
Date: Thu, 21 Mar 2019 16:36:20 +0000
Message-Id: <20190321163623.20219-12-julien.grall@arm.com>
In-Reply-To: <20190321163623.20219-1-julien.grall@arm.com>
References: <20190321163623.20219-1-julien.grall@arm.com>

We will want to re-use the ASID allocator in a separate context (e.g.
allocating VMIDs), so move the code into a new file.

The function asid_check_context has been moved into the header as a
static inline function, because we want to avoid adding a branch when
checking if the ASID is still valid.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
This code will be used in the virt code for allocating VMIDs. I am not
entirely sure where to place it. Lib could potentially be a good place,
but I am not entirely convinced the algorithm as it is could be used by
other architectures. Looking at x86, it seems that it will not be
possible to re-use it, because the number of PCIDs (aka ASIDs) could be
smaller than the number of CPUs. See commit
10af6235e0d327d42e1bad974385197817923dc1 ("x86/mm: Implement PCID based
optimization: try to preserve old TLB entries using PCID").
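For orientation, a minimal sketch of how a second user could drive the new library (illustrative only: the my_* names are placeholders, and the real arm64 user is the mm/context.c rework in the diff below):

    #include <asm/asid.h>

    static struct asid_info my_info;
    static DEFINE_PER_CPU(atomic64_t, my_active);
    static DEFINE_PER_CPU(u64, my_reserved);

    static void my_flush_cpu_ctxt(void)
    {
        local_flush_tlb_all();      /* placeholder flush policy */
    }

    static int __init my_allocator_init(void)
    {
        /* 16 bits of ID space, one ID allocated per context. */
        if (asid_allocator_init(&my_info, 16, 1, my_flush_cpu_ctxt))
            return -ENOMEM;

        my_info.active = &my_active;
        my_info.reserved = &my_reserved;
        return 0;
    }

    /* On every switch; cpu must be held via get_cpu() across the call. */
    static void my_switch(atomic64_t *pasid, unsigned int cpu)
    {
        asid_check_context(&my_info, pasid, cpu);
    }

Note the registration order mirrors asids_init(): the active/reserved per-CPU pointers are wired up by the caller after asid_allocator_init() succeeds.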
---
 arch/arm64/include/asm/asid.h |  77 ++++++++++++++
 arch/arm64/lib/Makefile       |   2 +
 arch/arm64/lib/asid.c         | 185 +++++++++++++++++++++++++++++++++
 arch/arm64/mm/context.c       | 235 +-----------------------------------------
 4 files changed, 267 insertions(+), 232 deletions(-)
 create mode 100644 arch/arm64/include/asm/asid.h
 create mode 100644 arch/arm64/lib/asid.c

--
2.11.0

diff --git a/arch/arm64/include/asm/asid.h b/arch/arm64/include/asm/asid.h
new file mode 100644
index 000000000000..bb62b587f37f
--- /dev/null
+++ b/arch/arm64/include/asm/asid.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ASM_ASID_H
+#define __ASM_ASM_ASID_H
+
+#include <linux/atomic.h>
+#include <linux/compiler.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/spinlock.h>
+
+struct asid_info
+{
+	atomic64_t	generation;
+	unsigned long	*map;
+	atomic64_t __percpu	*active;
+	u64 __percpu		*reserved;
+	u32			bits;
+	/* Lock protecting the structure */
+	raw_spinlock_t		lock;
+	/* Which CPU requires context flush on next call */
+	cpumask_t		flush_pending;
+	/* Number of ASID allocated by context (shift value) */
+	unsigned int		ctxt_shift;
+	/* Callback to locally flush the context. */
+	void			(*flush_cpu_ctxt_cb)(void);
+};
+
+#define NUM_ASIDS(info)			(1UL << ((info)->bits))
+#define NUM_CTXT_ASIDS(info)		(NUM_ASIDS(info) >> (info)->ctxt_shift)
+
+#define active_asid(info, cpu)	*per_cpu_ptr((info)->active, cpu)
+
+void asid_new_context(struct asid_info *info, atomic64_t *pasid,
+		      unsigned int cpu);
+
+/*
+ * Check the ASID is still valid for the context. If not generate a new ASID.
+ *
+ * @pasid: Pointer to the current ASID batch
+ * @cpu: current CPU ID. Must have been acquired through get_cpu()
+ */
+static inline void asid_check_context(struct asid_info *info,
+				      atomic64_t *pasid, unsigned int cpu)
+{
+	u64 asid, old_active_asid;
+
+	asid = atomic64_read(pasid);
+
+	/*
+	 * The memory ordering here is subtle.
+	 * If our active_asid is non-zero and the ASID matches the current
+	 * generation, then we update the active_asid entry with a relaxed
+	 * cmpxchg. Racing with a concurrent rollover means that either:
+	 *
+	 * - We get a zero back from the cmpxchg and end up waiting on the
+	 *   lock. Taking the lock synchronises with the rollover and so
+	 *   we are forced to see the updated generation.
+	 *
+	 * - We get a valid ASID back from the cmpxchg, which means the
+	 *   relaxed xchg in flush_context will treat us as reserved
+	 *   because atomic RmWs are totally ordered for a given location.
+	 */
+	old_active_asid = atomic64_read(&active_asid(info, cpu));
+	if (old_active_asid &&
+	    !((asid ^ atomic64_read(&info->generation)) >> info->bits) &&
+	    atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
+				     old_active_asid, asid))
+		return;
+
+	asid_new_context(info, pasid, cpu);
+}
+
+int asid_allocator_init(struct asid_info *info,
+			u32 bits, unsigned int asid_per_ctxt,
+			void (*flush_cpu_ctxt_cb)(void));
+
+#endif
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 5540a1638baf..720df5ee2aa2 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -5,6 +5,8 @@ lib-y		:= clear_user.o delay.o copy_from_user.o	\
 		   memcmp.o strcmp.o strncmp.o strlen.o strnlen.o	\
 		   strchr.o strrchr.o tishift.o

+lib-y	+= asid.o
+
 ifeq ($(CONFIG_KERNEL_MODE_NEON), y)
 obj-$(CONFIG_XOR_BLOCKS)	+= xor-neon.o
 CFLAGS_REMOVE_xor-neon.o	+= -mgeneral-regs-only
diff --git a/arch/arm64/lib/asid.c b/arch/arm64/lib/asid.c
new file mode 100644
index 000000000000..72b71bfb32be
--- /dev/null
+++ b/arch/arm64/lib/asid.c
@@ -0,0 +1,185 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Generic ASID allocator.
+ *
+ * Based on arch/arm/mm/context.c
+ *
+ * Copyright (C) 2002-2003 Deep Blue Solutions Ltd, all rights reserved.
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#include <linux/slab.h>
+
+#include <asm/asid.h>
+
+#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu)
+
+#define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
+#define ASID_FIRST_VERSION(info)	(1UL << ((info)->bits))
+
+#define asid2idx(info, asid)	(((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift)
+#define idx2asid(info, idx)	(((idx) << (info)->ctxt_shift) & ~ASID_MASK(info))
+
+static void flush_context(struct asid_info *info)
+{
+	int i;
+	u64 asid;
+
+	/* Update the list of reserved ASIDs and the ASID bitmap. */
+	bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info));
+
+	for_each_possible_cpu(i) {
+		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
+		/*
+		 * If this CPU has already been through a
+		 * rollover, but hasn't run another task in
+		 * the meantime, we must preserve its reserved
+		 * ASID, as this is the only trace we have of
+		 * the process it is still running.
+		 */
+		if (asid == 0)
+			asid = reserved_asid(info, i);
+		__set_bit(asid2idx(info, asid), info->map);
+		reserved_asid(info, i) = asid;
+	}
+
+	/*
+	 * Queue a TLB invalidation for each CPU to perform on next
+	 * context-switch
+	 */
+	cpumask_setall(&info->flush_pending);
+}
+
+static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
+				       u64 newasid)
+{
+	int cpu;
+	bool hit = false;
+
+	/*
+	 * Iterate over the set of reserved ASIDs looking for a match.
+	 * If we find one, then we can update our mm to use newasid
+	 * (i.e. the same ASID in the current generation) but we can't
+	 * exit the loop early, since we need to ensure that all copies
+	 * of the old ASID are updated to reflect the mm. Failure to do
+	 * so could result in us missing the reserved ASID in a future
+	 * generation.
+	 */
+	for_each_possible_cpu(cpu) {
+		if (reserved_asid(info, cpu) == asid) {
+			hit = true;
+			reserved_asid(info, cpu) = newasid;
+		}
+	}
+
+	return hit;
+}
+
+static u64 new_context(struct asid_info *info, atomic64_t *pasid)
+{
+	static u32 cur_idx = 1;
+	u64 asid = atomic64_read(pasid);
+	u64 generation = atomic64_read(&info->generation);
+
+	if (asid != 0) {
+		u64 newasid = generation | (asid & ~ASID_MASK(info));
+
+		/*
+		 * If our current ASID was active during a rollover, we
+		 * can continue to use it and this was just a false alarm.
+		 */
+		if (check_update_reserved_asid(info, asid, newasid))
+			return newasid;
+
+		/*
+		 * We had a valid ASID in a previous life, so try to re-use
+		 * it if possible.
+		 */
+		if (!__test_and_set_bit(asid2idx(info, asid), info->map))
+			return newasid;
+	}
+
+	/*
+	 * Allocate a free ASID. If we can't find one, take a note of the
+	 * currently active ASIDs and mark the TLBs as requiring flushes. We
+	 * always count from ASID #2 (index 1), as we use ASID #0 when setting
+	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
+	 * pairs.
+	 */
+	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx);
+	if (asid != NUM_CTXT_ASIDS(info))
+		goto set_asid;
+
+	/* We're out of ASIDs, so increment the global generation count */
+	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info),
+						 &info->generation);
+	flush_context(info);
+
+	/* We have more ASIDs than CPUs, so this will always succeed */
+	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1);
+
+set_asid:
+	__set_bit(asid, info->map);
+	cur_idx = asid;
+	return idx2asid(info, asid) | generation;
+}
+
+/*
+ * Generate a new ASID for the context.
+ *
+ * @pasid: Pointer to the current ASID batch allocated. It will be updated
+ * with the new ASID batch.
+ * @cpu: current CPU ID. Must have been acquired through get_cpu()
+ */
+void asid_new_context(struct asid_info *info, atomic64_t *pasid,
+		      unsigned int cpu)
+{
+	unsigned long flags;
+	u64 asid;
+
+	raw_spin_lock_irqsave(&info->lock, flags);
+	/* Check that our ASID belongs to the current generation. */
+	asid = atomic64_read(pasid);
+	if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
+		asid = new_context(info, pasid);
+		atomic64_set(pasid, asid);
+	}
+
+	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
+		info->flush_cpu_ctxt_cb();
+
+	atomic64_set(&active_asid(info, cpu), asid);
+	raw_spin_unlock_irqrestore(&info->lock, flags);
+}
+
+/*
+ * Initialize the ASID allocator
+ *
+ * @info: Pointer to the asid allocator structure
+ * @bits: Number of ASIDs available
+ * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are
+ * allocated contiguously for a given context. This value should be a power of
+ * 2.
+ */
+int asid_allocator_init(struct asid_info *info,
+			u32 bits, unsigned int asid_per_ctxt,
+			void (*flush_cpu_ctxt_cb)(void))
+{
+	info->bits = bits;
+	info->ctxt_shift = ilog2(asid_per_ctxt);
+	info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb;
+	/*
+	 * Expect allocation after rollover to fail if we don't have at least
+	 * one more ASID than CPUs. ASID #0 is always reserved.
+	 */
+	WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
+	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
+	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
+			    sizeof(*info->map), GFP_KERNEL);
+	if (!info->map)
+		return -ENOMEM;
+
+	raw_spin_lock_init(&info->lock);
+
+	return 0;
+}
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 678a57b77c91..95ee7711a2ef 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -22,47 +22,22 @@
 #include <linux/slab.h>
 #include <linux/mm.h>

+#include <asm/asid.h>
 #include <asm/cpufeature.h>
 #include <asm/mmu_context.h>
 #include <asm/smp.h>
 #include <asm/tlbflush.h>

-struct asid_info
-{
-	atomic64_t	generation;
-	unsigned long	*map;
-	atomic64_t __percpu	*active;
-	u64 __percpu		*reserved;
-	u32			bits;
-	raw_spinlock_t		lock;
-	/* Which CPU requires context flush on next call */
-	cpumask_t		flush_pending;
-	/* Number of ASID allocated by context (shift value) */
-	unsigned int		ctxt_shift;
-	/* Callback to locally flush the context. */
-	void		(*flush_cpu_ctxt_cb)(void);
-} asid_info;
-
-#define active_asid(info, cpu)	 *per_cpu_ptr((info)->active, cpu)
-#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu)
-
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);

-#define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
-#define NUM_ASIDS(info)			(1UL << ((info)->bits))
-
-#define ASID_FIRST_VERSION(info)	NUM_ASIDS(info)
-
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 #define ASID_PER_CONTEXT		2
 #else
 #define ASID_PER_CONTEXT		1
 #endif

-#define NUM_CTXT_ASIDS(info)	(NUM_ASIDS(info) >> (info)->ctxt_shift)
-#define asid2idx(info, asid)	(((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift)
-#define idx2asid(info, idx)	(((idx) << (info)->ctxt_shift) & ~ASID_MASK(info))
+struct asid_info asid_info;

 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
@@ -102,178 +77,6 @@ void verify_cpu_asid_bits(void)
 	}
 }

-static void flush_context(struct asid_info *info)
-{
-	int i;
-	u64 asid;
-
-	/* Update the list of reserved ASIDs and the ASID bitmap. */
-	bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info));
-
-	for_each_possible_cpu(i) {
-		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
-		/*
-		 * If this CPU has already been through a
-		 * rollover, but hasn't run another task in
-		 * the meantime, we must preserve its reserved
-		 * ASID, as this is the only trace we have of
-		 * the process it is still running.
-		 */
-		if (asid == 0)
-			asid = reserved_asid(info, i);
-		__set_bit(asid2idx(info, asid), info->map);
-		reserved_asid(info, i) = asid;
-	}
-
-	/*
-	 * Queue a TLB invalidation for each CPU to perform on next
-	 * context-switch
-	 */
-	cpumask_setall(&info->flush_pending);
-}
-
-static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
-				       u64 newasid)
-{
-	int cpu;
-	bool hit = false;
-
-	/*
-	 * Iterate over the set of reserved ASIDs looking for a match.
-	 * If we find one, then we can update our mm to use newasid
-	 * (i.e. the same ASID in the current generation) but we can't
-	 * exit the loop early, since we need to ensure that all copies
-	 * of the old ASID are updated to reflect the mm. Failure to do
-	 * so could result in us missing the reserved ASID in a future
-	 * generation.
-	 */
-	for_each_possible_cpu(cpu) {
-		if (reserved_asid(info, cpu) == asid) {
-			hit = true;
-			reserved_asid(info, cpu) = newasid;
-		}
-	}
-
-	return hit;
-}
-
-static u64 new_context(struct asid_info *info, atomic64_t *pasid)
-{
-	static u32 cur_idx = 1;
-	u64 asid = atomic64_read(pasid);
-	u64 generation = atomic64_read(&info->generation);
-
-	if (asid != 0) {
-		u64 newasid = generation | (asid & ~ASID_MASK(info));
-
-		/*
-		 * If our current ASID was active during a rollover, we
-		 * can continue to use it and this was just a false alarm.
-		 */
-		if (check_update_reserved_asid(info, asid, newasid))
-			return newasid;
-
-		/*
-		 * We had a valid ASID in a previous life, so try to re-use
-		 * it if possible.
-		 */
-		if (!__test_and_set_bit(asid2idx(info, asid), info->map))
-			return newasid;
-	}
-
-	/*
-	 * Allocate a free ASID. If we can't find one, take a note of the
-	 * currently active ASIDs and mark the TLBs as requiring flushes. We
-	 * always count from ASID #2 (index 1), as we use ASID #0 when setting
-	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
-	 * pairs.
-	 */
-	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx);
-	if (asid != NUM_CTXT_ASIDS(info))
-		goto set_asid;
-
-	/* We're out of ASIDs, so increment the global generation count */
-	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info),
-						 &info->generation);
-	flush_context(info);
-
-	/* We have more ASIDs than CPUs, so this will always succeed */
-	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1);
-
-set_asid:
-	__set_bit(asid, info->map);
-	cur_idx = asid;
-	return idx2asid(info, asid) | generation;
-}
-
-static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
-			     unsigned int cpu);
-
-/*
- * Check the ASID is still valid for the context. If not generate a new ASID.
- *
- * @pasid: Pointer to the current ASID batch
- * @cpu: current CPU ID. Must have been acquired through get_cpu()
- */
-static void asid_check_context(struct asid_info *info,
-			       atomic64_t *pasid, unsigned int cpu)
-{
-	u64 asid, old_active_asid;
-
-	asid = atomic64_read(pasid);
-
-	/*
-	 * The memory ordering here is subtle.
-	 * If our active_asid is non-zero and the ASID matches the current
-	 * generation, then we update the active_asid entry with a relaxed
-	 * cmpxchg. Racing with a concurrent rollover means that either:
-	 *
-	 * - We get a zero back from the cmpxchg and end up waiting on the
-	 *   lock. Taking the lock synchronises with the rollover and so
-	 *   we are forced to see the updated generation.
-	 *
-	 * - We get a valid ASID back from the cmpxchg, which means the
-	 *   relaxed xchg in flush_context will treat us as reserved
-	 *   because atomic RmWs are totally ordered for a given location.
-	 */
-	old_active_asid = atomic64_read(&active_asid(info, cpu));
-	if (old_active_asid &&
-	    !((asid ^ atomic64_read(&info->generation)) >> info->bits) &&
-	    atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
-				     old_active_asid, asid))
-		return;
-
-	asid_new_context(info, pasid, cpu);
-}
-
-/*
- * Generate a new ASID for the context.
- *
- * @pasid: Pointer to the current ASID batch allocated. It will be updated
- * with the new ASID batch.
- * @cpu: current CPU ID. Must have been acquired through get_cpu()
- */
-static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
-			     unsigned int cpu)
-{
-	unsigned long flags;
-	u64 asid;
-
-	raw_spin_lock_irqsave(&info->lock, flags);
-	/* Check that our ASID belongs to the current generation. */
-	asid = atomic64_read(pasid);
-	if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
-		asid = new_context(info, pasid);
-		atomic64_set(pasid, asid);
-	}
-
-	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
-		info->flush_cpu_ctxt_cb();
-
-	atomic64_set(&active_asid(info, cpu), asid);
-	raw_spin_unlock_irqrestore(&info->lock, flags);
-}
-
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 {
 	if (system_supports_cnp())
@@ -305,38 +108,6 @@ static void asid_flush_cpu_ctxt(void)
 	local_flush_tlb_all();
 }

-/*
- * Initialize the ASID allocator
- *
- * @info: Pointer to the asid allocator structure
- * @bits: Number of ASIDs available
- * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are
- * allocated contiguously for a given context. This value should be a power of
- * 2.
- */
-static int asid_allocator_init(struct asid_info *info,
-			       u32 bits, unsigned int asid_per_ctxt,
-			       void (*flush_cpu_ctxt_cb)(void))
-{
-	info->bits = bits;
-	info->ctxt_shift = ilog2(asid_per_ctxt);
-	info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb;
-	/*
-	 * Expect allocation after rollover to fail if we don't have at least
-	 * one more ASID than CPUs. ASID #0 is always reserved.
-	 */
-	WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
-	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
-	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
-			    sizeof(*info->map), GFP_KERNEL);
-	if (!info->map)
-		return -ENOMEM;
-
-	raw_spin_lock_init(&info->lock);
-
-	return 0;
-}
-
 static int asids_init(void)
 {
 	u32 bits = get_cpu_asid_bits();
@@ -344,7 +115,7 @@ static int asids_init(void)
 	if (asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT,
 				asid_flush_cpu_ctxt))
 		panic("Unable to initialize ASID allocator for %lu ASIDs\n",
-		      1UL << bits);
+		      NUM_ASIDS(&asid_info));

 	asid_info.active = &active_asids;
 	asid_info.reserved = &reserved_asids;
From patchwork Thu Mar 21 16:36:21 2019
X-Patchwork-Submitter: Julien Grall <julien.grall@arm.com>
X-Patchwork-Id: 160807
From: Julien Grall <julien.grall@arm.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
Cc: christoffer.dall@arm.com, james.morse@arm.com, marc.zyngier@arm.com,
 julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com,
 will.deacon@arm.com, Julien Grall <julien.grall@arm.com>
Subject: [PATCH RFC 12/14] arm64/lib: asid: Allow user to update the context under the lock
Date: Thu, 21 Mar 2019 16:36:21 +0000
Message-Id: <20190321163623.20219-13-julien.grall@arm.com>
In-Reply-To: <20190321163623.20219-1-julien.grall@arm.com>
References: <20190321163623.20219-1-julien.grall@arm.com>

Some users of the ASID allocator (e.g. VMID) will need to update the
context when a new ASID is generated. This has to be protected by a
lock to prevent concurrent modification.

Rather than introducing yet another lock, it is possible to re-use the
allocator lock for that purpose. This patch introduces a new callback
that will be called when updating the context.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 arch/arm64/include/asm/asid.h | 12 ++++++++----
 arch/arm64/lib/asid.c         | 10 ++++++++--
 arch/arm64/mm/context.c       | 11 ++++++++---
 3 files changed, 24 insertions(+), 9 deletions(-)

--
2.11.0

diff --git a/arch/arm64/include/asm/asid.h b/arch/arm64/include/asm/asid.h
index bb62b587f37f..d8d9dc875bec 100644
--- a/arch/arm64/include/asm/asid.h
+++ b/arch/arm64/include/asm/asid.h
@@ -23,6 +23,8 @@ struct asid_info
 	unsigned int	ctxt_shift;
 	/* Callback to locally flush the context. */
 	void		(*flush_cpu_ctxt_cb)(void);
+	/* Callback to call when a context is updated */
+	void		(*update_ctxt_cb)(void *ctxt);
 };

 #define NUM_ASIDS(info)			(1UL << ((info)->bits))
@@ -31,7 +33,7 @@ struct asid_info
 #define active_asid(info, cpu)	*per_cpu_ptr((info)->active, cpu)

 void asid_new_context(struct asid_info *info, atomic64_t *pasid,
-		      unsigned int cpu);
+		      unsigned int cpu, void *ctxt);

 /*
  * Check the ASID is still valid for the context. If not generate a new ASID.
@@ -40,7 +42,8 @@ void asid_new_context(struct asid_info *info, atomic64_t *pasid,
  * @cpu: current CPU ID. Must have been acquired through get_cpu()
  */
 static inline void asid_check_context(struct asid_info *info,
-				      atomic64_t *pasid, unsigned int cpu)
+				      atomic64_t *pasid, unsigned int cpu,
+				      void *ctxt)
 {
 	u64 asid, old_active_asid;

@@ -67,11 +70,12 @@ static inline void asid_check_context(struct asid_info *info,
 				     old_active_asid, asid))
 		return;

-	asid_new_context(info, pasid, cpu);
+	asid_new_context(info, pasid, cpu, ctxt);
 }

 int asid_allocator_init(struct asid_info *info,
 			u32 bits, unsigned int asid_per_ctxt,
-			void (*flush_cpu_ctxt_cb)(void));
+			void (*flush_cpu_ctxt_cb)(void),
+			void (*update_ctxt_cb)(void *ctxt));

 #endif
diff --git a/arch/arm64/lib/asid.c b/arch/arm64/lib/asid.c
index 72b71bfb32be..b47e6769c1bc 100644
--- a/arch/arm64/lib/asid.c
+++ b/arch/arm64/lib/asid.c
@@ -130,9 +130,10 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid)
  * @pasid: Pointer to the current ASID batch allocated. It will be updated
  * with the new ASID batch.
  * @cpu: current CPU ID. Must have been acquired through get_cpu()
+ * @ctxt: Context to update when calling update_context
  */
 void asid_new_context(struct asid_info *info, atomic64_t *pasid,
-		      unsigned int cpu)
+		      unsigned int cpu, void *ctxt)
 {
 	unsigned long flags;
 	u64 asid;
@@ -149,6 +150,9 @@ void asid_new_context(struct asid_info *info, atomic64_t *pasid,
 		info->flush_cpu_ctxt_cb();

 	atomic64_set(&active_asid(info, cpu), asid);
+
+	info->update_ctxt_cb(ctxt);
+
 	raw_spin_unlock_irqrestore(&info->lock, flags);
 }

@@ -163,11 +167,13 @@ void asid_new_context(struct asid_info *info, atomic64_t *pasid,
  */
 int asid_allocator_init(struct asid_info *info,
 			u32 bits, unsigned int asid_per_ctxt,
-			void (*flush_cpu_ctxt_cb)(void))
+			void (*flush_cpu_ctxt_cb)(void),
+			void (*update_ctxt_cb)(void *ctxt))
 {
 	info->bits = bits;
 	info->ctxt_shift = ilog2(asid_per_ctxt);
 	info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb;
+	info->update_ctxt_cb = update_ctxt_cb;
 	/*
 	 * Expect allocation after rollover to fail if we don't have at least
 	 * one more ASID than CPUs. ASID #0 is always reserved.
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 95ee7711a2ef..737b4bd7bbe7 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -82,7 +82,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	if (system_supports_cnp())
 		cpu_set_reserved_ttbr0();

-	asid_check_context(&asid_info, &mm->context.id, cpu);
+	asid_check_context(&asid_info, &mm->context.id, cpu, mm);

 	arm64_apply_bp_hardening();

@@ -108,12 +108,17 @@ static void asid_flush_cpu_ctxt(void)
 	local_flush_tlb_all();
 }

+static void asid_update_ctxt(void *ctxt)
+{
+	/* Nothing to do */
+}
+
 static int asids_init(void)
 {
 	u32 bits = get_cpu_asid_bits();

-	if (asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT,
-				asid_flush_cpu_ctxt))
+	if (asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT,
+				asid_flush_cpu_ctxt, asid_update_ctxt))
 		panic("Unable to initialize ASID allocator for %lu ASIDs\n",
 		      NUM_ASIDS(&asid_info));
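To show what update_ctxt_cb is for, here is a hedged sketch of a non-trivial callback, in the spirit of the VMID user that patch 14 introduces (the field and helper names are assumptions at this point in the series). Because it runs under info->lock, readers serialized by the allocator never observe a half-updated context:

    static void vmid_update_ctxt(void *ctxt)
    {
        struct kvm_vmid *vmid = ctxt;
        u64 asid = atomic64_read(&vmid->asid);

        /* Derive the hardware VMID field from the newly allocated ASID
         * while still holding the allocator lock. */
        vmid->vmid = asid & ((1ULL << kvm_get_vmid_bits()) - 1);
    }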
From patchwork Thu Mar 21 16:36:22 2019
X-Patchwork-Submitter: Julien Grall <julien.grall@arm.com>
X-Patchwork-Id: 160809
From: Julien Grall <julien.grall@arm.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
Cc: christoffer.dall@arm.com, james.morse@arm.com, marc.zyngier@arm.com,
 julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com,
 will.deacon@arm.com, Julien Grall <julien.grall@arm.com>
Subject: [PATCH RFC 13/14] arm/kvm: Introduce a new VMID allocator
Date: Thu, 21 Mar 2019 16:36:22 +0000
Message-Id: <20190321163623.20219-14-julien.grall@arm.com>
In-Reply-To: <20190321163623.20219-1-julien.grall@arm.com>
References: <20190321163623.20219-1-julien.grall@arm.com>

A follow-up patch will replace the KVM VMID allocator with the arm64
ASID allocator. It is not yet clear how the code can be shared between
arm and arm64, so this is a verbatim copy of arch/arm64/lib/asid.c.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 arch/arm/include/asm/kvm_asid.h | 81 +++++++++++++++++
 arch/arm/kvm/Makefile           |  1 +
 arch/arm/kvm/asid.c             | 191 ++++++++++++++++++++++++++++++++++
 3 files changed, 273 insertions(+)
 create mode 100644 arch/arm/include/asm/kvm_asid.h
 create mode 100644 arch/arm/kvm/asid.c

--
2.11.0

diff --git a/arch/arm/include/asm/kvm_asid.h b/arch/arm/include/asm/kvm_asid.h
new file mode 100644
index 000000000000..f312a6d7543c
--- /dev/null
+++ b/arch/arm/include/asm/kvm_asid.h
@@ -0,0 +1,81 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ARM_KVM_ASID_H__
+#define __ARM_KVM_ASID_H__
+
+#include <linux/atomic.h>
+#include <linux/compiler.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/spinlock.h>
+
+struct asid_info
+{
+	atomic64_t	generation;
+	unsigned long	*map;
+	atomic64_t __percpu	*active;
+	u64 __percpu		*reserved;
+	u32			bits;
+	/* Lock protecting the structure */
+	raw_spinlock_t		lock;
+	/* Which CPU requires context flush on next call */
+	cpumask_t		flush_pending;
+	/* Number of ASID allocated by context (shift value) */
+	unsigned int		ctxt_shift;
+	/* Callback to locally flush the context. */
+	void		(*flush_cpu_ctxt_cb)(void);
+	/* Callback to call when a context is updated */
+	void		(*update_ctxt_cb)(void *ctxt);
+};
+
+#define NUM_ASIDS(info)			(1UL << ((info)->bits))
+#define NUM_CTXT_ASIDS(info)		(NUM_ASIDS(info) >> (info)->ctxt_shift)
+
+#define active_asid(info, cpu)	*per_cpu_ptr((info)->active, cpu)
+
+void asid_new_context(struct asid_info *info, atomic64_t *pasid,
+		      unsigned int cpu, void *ctxt);
+
+/*
+ * Check the ASID is still valid for the context. If not generate a new ASID.
+ *
+ * @pasid: Pointer to the current ASID batch
+ * @cpu: current CPU ID. Must have been acquired through get_cpu()
+ */
+static inline void asid_check_context(struct asid_info *info,
+				      atomic64_t *pasid, unsigned int cpu,
+				      void *ctxt)
+{
+	u64 asid, old_active_asid;
+
+	asid = atomic64_read(pasid);
+
+	/*
+	 * The memory ordering here is subtle.
+	 * If our active_asid is non-zero and the ASID matches the current
+	 * generation, then we update the active_asid entry with a relaxed
+	 * cmpxchg. Racing with a concurrent rollover means that either:
+	 *
+	 * - We get a zero back from the cmpxchg and end up waiting on the
+	 *   lock. Taking the lock synchronises with the rollover and so
+	 *   we are forced to see the updated generation.
+	 *
+	 * - We get a valid ASID back from the cmpxchg, which means the
+	 *   relaxed xchg in flush_context will treat us as reserved
+	 *   because atomic RmWs are totally ordered for a given location.
+	 */
+	old_active_asid = atomic64_read(&active_asid(info, cpu));
+	if (old_active_asid &&
+	    !((asid ^ atomic64_read(&info->generation)) >> info->bits) &&
+	    atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
+				     old_active_asid, asid))
+		return;
+
+	asid_new_context(info, pasid, cpu, ctxt);
+}
+
+int asid_allocator_init(struct asid_info *info,
+			u32 bits, unsigned int asid_per_ctxt,
+			void (*flush_cpu_ctxt_cb)(void),
+			void (*update_ctxt_cb)(void *ctxt));
+
+#endif /* __ARM_KVM_ASID_H__ */
diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
index 531e59f5be9c..35d2d4c67827 100644
--- a/arch/arm/kvm/Makefile
+++ b/arch/arm/kvm/Makefile
@@ -21,6 +21,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += hyp/

 obj-y += kvm-arm.o init.o interrupts.o
 obj-y += handle_exit.o guest.o emulate.o reset.o
+obj-y += asid.o
 obj-y += coproc.o coproc_a15.o coproc_a7.o vgic-v3-coproc.o
 obj-y += $(KVM)/arm/arm.o $(KVM)/arm/mmu.o $(KVM)/arm/mmio.o
 obj-y += $(KVM)/arm/psci.o $(KVM)/arm/perf.o
diff --git a/arch/arm/kvm/asid.c b/arch/arm/kvm/asid.c
new file mode 100644
index 000000000000..60a25270163a
--- /dev/null
+++ b/arch/arm/kvm/asid.c
@@ -0,0 +1,191 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Generic ASID allocator.
+ *
+ * Based on arch/arm/mm/context.c
+ *
+ * Copyright (C) 2002-2003 Deep Blue Solutions Ltd, all rights reserved.
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#include <linux/slab.h>
+
+#include <asm/kvm_asid.h>
+
+#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu)
+
+#define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
+#define ASID_FIRST_VERSION(info)	(1UL << ((info)->bits))
+
+#define asid2idx(info, asid)	(((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift)
+#define idx2asid(info, idx)	(((idx) << (info)->ctxt_shift) & ~ASID_MASK(info))
+
+static void flush_context(struct asid_info *info)
+{
+	int i;
+	u64 asid;
+
+	/* Update the list of reserved ASIDs and the ASID bitmap. */
+	bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info));
+
+	for_each_possible_cpu(i) {
+		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
+		/*
+		 * If this CPU has already been through a
+		 * rollover, but hasn't run another task in
+		 * the meantime, we must preserve its reserved
+		 * ASID, as this is the only trace we have of
+		 * the process it is still running.
+		 */
+		if (asid == 0)
+			asid = reserved_asid(info, i);
+		__set_bit(asid2idx(info, asid), info->map);
+		reserved_asid(info, i) = asid;
+	}
+
+	/*
+	 * Queue a TLB invalidation for each CPU to perform on next
+	 * context-switch
+	 */
+	cpumask_setall(&info->flush_pending);
+}
+
+static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
+				       u64 newasid)
+{
+	int cpu;
+	bool hit = false;
+
+	/*
+	 * Iterate over the set of reserved ASIDs looking for a match.
+	 * If we find one, then we can update our mm to use newasid
+	 * (i.e. the same ASID in the current generation) but we can't
+	 * exit the loop early, since we need to ensure that all copies
+	 * of the old ASID are updated to reflect the mm. Failure to do
+	 * so could result in us missing the reserved ASID in a future
+	 * generation.
+	 */
+	for_each_possible_cpu(cpu) {
+		if (reserved_asid(info, cpu) == asid) {
+			hit = true;
+			reserved_asid(info, cpu) = newasid;
+		}
+	}
+
+	return hit;
+}
+
+static u64 new_context(struct asid_info *info, atomic64_t *pasid)
+{
+	static u32 cur_idx = 1;
+	u64 asid = atomic64_read(pasid);
+	u64 generation = atomic64_read(&info->generation);
+
+	if (asid != 0) {
+		u64 newasid = generation | (asid & ~ASID_MASK(info));
+
+		/*
+		 * If our current ASID was active during a rollover, we
+		 * can continue to use it and this was just a false alarm.
+		 */
+		if (check_update_reserved_asid(info, asid, newasid))
+			return newasid;
+
+		/*
+		 * We had a valid ASID in a previous life, so try to re-use
+		 * it if possible.
+		 */
+		if (!__test_and_set_bit(asid2idx(info, asid), info->map))
+			return newasid;
+	}
+
+	/*
+	 * Allocate a free ASID. If we can't find one, take a note of the
+	 * currently active ASIDs and mark the TLBs as requiring flushes. We
+	 * always count from ASID #2 (index 1), as we use ASID #0 when setting
+	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
+	 * pairs.
+	 */
+	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx);
+	if (asid != NUM_CTXT_ASIDS(info))
+		goto set_asid;
+
+	/* We're out of ASIDs, so increment the global generation count */
+	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info),
+						 &info->generation);
+	flush_context(info);
+
+	/* We have more ASIDs than CPUs, so this will always succeed */
+	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1);
+
+set_asid:
+	__set_bit(asid, info->map);
+	cur_idx = asid;
+	return idx2asid(info, asid) | generation;
+}
+
+/*
+ * Generate a new ASID for the context.
+ *
+ * @pasid: Pointer to the current ASID batch allocated. It will be updated
+ * with the new ASID batch.
+ * @cpu: current CPU ID. Must have been acquired through get_cpu()
+ * @ctxt: Context to update when calling update_context
+ */
+void asid_new_context(struct asid_info *info, atomic64_t *pasid,
+		      unsigned int cpu, void *ctxt)
+{
+	unsigned long flags;
+	u64 asid;
+
+	raw_spin_lock_irqsave(&info->lock, flags);
+	/* Check that our ASID belongs to the current generation. */
+	asid = atomic64_read(pasid);
+	if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
+		asid = new_context(info, pasid);
+		atomic64_set(pasid, asid);
+	}
+
+	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
+		info->flush_cpu_ctxt_cb();
+
+	atomic64_set(&active_asid(info, cpu), asid);
+
+	info->update_ctxt_cb(ctxt);
+
+	raw_spin_unlock_irqrestore(&info->lock, flags);
+}
+
+/*
+ * Initialize the ASID allocator
+ *
+ * @info: Pointer to the asid allocator structure
+ * @bits: Number of ASIDs available
+ * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are
+ * allocated contiguously for a given context. This value should be a power of
+ * 2.
+ */
+int asid_allocator_init(struct asid_info *info,
+			u32 bits, unsigned int asid_per_ctxt,
+			void (*flush_cpu_ctxt_cb)(void),
+			void (*update_ctxt_cb)(void *ctxt))
+{
+	info->bits = bits;
+	info->ctxt_shift = ilog2(asid_per_ctxt);
+	info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb;
+	info->update_ctxt_cb = update_ctxt_cb;
+	/*
+	 * Expect allocation after rollover to fail if we don't have at least
+	 * one more ASID than CPUs. ASID #0 is always reserved.
+	 */
+	WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
+	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
+	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
+			    sizeof(*info->map), GFP_KERNEL);
+	if (!info->map)
+		return -ENOMEM;
+
+	raw_spin_lock_init(&info->lock);
+
+	return 0;
+}
From patchwork Thu Mar 21 16:36:23 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 160808
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: christoffer.dall@arm.com, james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall
Subject: [PATCH RFC 14/14] kvm/arm: Align the VMID allocation with the arm64 ASID one
Date: Thu, 21 Mar 2019 16:36:23 +0000
Message-Id: <20190321163623.20219-15-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190321163623.20219-1-julien.grall@arm.com>
References: <20190321163623.20219-1-julien.grall@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

At the moment, the VMID algorithm will send an SGI to all the CPUs to force an exit and then broadcast a full TLB flush and I-Cache invalidation.

This patch re-uses the new ASID allocator. The benefits are:
    - CPUs are not forced to exit at roll-over. Instead, the VMID will be marked reserved and the context will be flushed at the next exit. This reduces IPI traffic.
    - Context invalidation is now per-CPU rather than broadcast.

With the new algorithm, the code is adapted as follows:
    - The function __kvm_flush_vm_context() has been renamed to __kvm_flush_cpu_vmid_context() and now only flushes the current CPU's context.
    - The call to update_vttbr() is now done with preemption disabled, as the new algorithm requires information to be stored per-CPU.
    - The TLBs associated with EL1 are flushed when booting a CPU, to deal with stale information. This was previously done on the allocation of the first VMID of a new generation.

The measurement was made on a Seattle-based SoC (8 CPUs), with the number of VMIDs limited to 4 bits. The test involves running 40 guests with 2 vCPUs concurrently. Each guest executes hackbench 5 times before exiting. The performance differences between the current algorithm and the new one are:
    - 2.5% fewer exits from the guest
    - 22.4% more flushes, although they are now local rather than broadcast
    - 0.11% faster (just for the record)

Signed-off-by: Julien Grall

---
Looking at __kvm_flush_vm_context(), it might be possible to reduce the overhead further by removing the I-Cache flush for caches other than VIPT. This has been left aside for now.
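To make the deferred roll-over scheme concrete, here is a worked example using the 4-bit VMID configuration from the benchmark above (the values are illustrative, not taken from the patch):

    /*
     * The allocator hands out 64-bit values: generation above, VMID below.
     * With bits = 4, ASID_FIRST_VERSION = 1 << 4 = 0x10, so the first
     * generation is 0x10 and an allocated value might be:
     *
     *   0x15  ->  generation 0x10, VMID 5
     *
     * While the global generation is still 0x10, validity is one xor+shift:
     *   (0x15 ^ 0x10) >> 4 == 0   ->  still valid, enter the guest
     *
     * After a roll-over the generation becomes 0x20:
     *   (0x15 ^ 0x20) >> 4 != 0   ->  stale; allocate a new VMID on this
     *                                 path, with no SGI to the other CPUs
     */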

---
 arch/arm/include/asm/kvm_asm.h    |   2 +-
 arch/arm/include/asm/kvm_host.h   |   5 +-
 arch/arm/include/asm/kvm_hyp.h    |   1 +
 arch/arm/kvm/hyp/tlb.c            |   8 +--
 arch/arm64/include/asm/kvm_asid.h |   8 +++
 arch/arm64/include/asm/kvm_asm.h  |   2 +-
 arch/arm64/include/asm/kvm_host.h |   5 +-
 arch/arm64/kvm/hyp/tlb.c          |  10 ++--
 virt/kvm/arm/arm.c                | 112 +++++++++++++------------------------
 9 files changed, 61 insertions(+), 92 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_asid.h

--
2.11.0

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 35491af87985..ce60a4a46fcc 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -65,7 +65,7 @@ struct kvm_vcpu;
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
 
-extern void __kvm_flush_vm_context(void);
+extern void __kvm_flush_cpu_vmid_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 770d73257ad9..e2c3a4a7b020 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -59,8 +59,8 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
 void kvm_reset_coprocs(struct kvm_vcpu *vcpu);
 
 struct kvm_vmid {
-	/* The VMID generation used for the virt. memory system */
-	u64 vmid_gen;
+	/* The ASID used for the ASID allocator */
+	atomic64_t asid;
 	u32 vmid;
 };
 
@@ -264,7 +264,6 @@ unsigned long __kvm_call_hyp(void *hypfn, ...);
 		ret;							\
 	})
 
-void force_vm_exit(const cpumask_t *mask);
 int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
 			      struct kvm_vcpu_events *events);
 
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 87bcd18df8d5..c3d1011ca1bf 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -75,6 +75,7 @@
 #define TLBIALLIS	__ACCESS_CP15(c8, 0, c3, 0)
 #define TLBIALL		__ACCESS_CP15(c8, 0, c7, 0)
 #define TLBIALLNSNHIS	__ACCESS_CP15(c8, 4, c3, 4)
+#define TLBIALLNSNH	__ACCESS_CP15(c8, 4, c7, 4)
 #define PRRR		__ACCESS_CP15(c10, 0, c2, 0)
 #define NMRR		__ACCESS_CP15(c10, 0, c2, 1)
 #define AMAIR0		__ACCESS_CP15(c10, 0, c3, 0)
diff --git a/arch/arm/kvm/hyp/tlb.c b/arch/arm/kvm/hyp/tlb.c
index 8e4afba73635..42b9ab47fc94 100644
--- a/arch/arm/kvm/hyp/tlb.c
+++ b/arch/arm/kvm/hyp/tlb.c
@@ -71,9 +71,9 @@ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
 	write_sysreg(0, VTTBR);
 }
 
-void __hyp_text __kvm_flush_vm_context(void)
+void __hyp_text __kvm_flush_cpu_vmid_context(void)
 {
-	write_sysreg(0, TLBIALLNSNHIS);
-	write_sysreg(0, ICIALLUIS);
-	dsb(ish);
+	write_sysreg(0, TLBIALLNSNH);
+	write_sysreg(0, ICIALLU);
+	dsb(nsh);
 }
diff --git a/arch/arm64/include/asm/kvm_asid.h b/arch/arm64/include/asm/kvm_asid.h
new file mode 100644
index 000000000000..8b586e43c094
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_asid.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ARM64_KVM_ASID_H__
+#define __ARM64_KVM_ASID_H__
+
+#include <asm/asid.h>
+
+#endif /* __ARM64_KVM_ASID_H__ */
+
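For orientation, the new TLBIALLNSNH encoding above differs from the existing Inner Shareable one only in CRm, which selects the local (non-broadcast) form of the operation; a commented reading, to the best of my understanding of the CP15 encoding:

    /* __ACCESS_CP15(CRn, Op1, CRm, Op2): TLB maintenance ops live in c8. */
    #define TLBIALLNSNHIS	__ACCESS_CP15(c8, 4, c3, 4)	/* CRm=c3: Inner Shareable, broadcast */
    #define TLBIALLNSNH	__ACCESS_CP15(c8, 4, c7, 4)	/* CRm=c7: this CPU only */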
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index f5b79e995f40..8d7d01ee1d03 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -62,7 +62,7 @@ extern char __kvm_hyp_init_end[];
 
 extern char __kvm_hyp_vector[];
 
-extern void __kvm_flush_vm_context(void);
+extern void __kvm_flush_cpu_vmid_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a01fe087e022..c64c9ac031df 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -60,8 +60,8 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext);
 void __extended_idmap_trampoline(phys_addr_t boot_pgd, phys_addr_t idmap_start);
 
 struct kvm_vmid {
-	/* The VMID generation used for the virt. memory system */
-	u64 vmid_gen;
+	/* The ASID used for the ASID allocator */
+	atomic64_t asid;
 	u32 vmid;
 };
 
@@ -417,7 +417,6 @@ u64 __kvm_call_hyp(void *hypfn, ...);
 		ret;							\
 	})
 
-void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 
 int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index 76c30866069e..e80e922988c1 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -200,10 +200,10 @@ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
 	__tlb_switch_to_host()(kvm, &cxt);
 }
 
-void __hyp_text __kvm_flush_vm_context(void)
+void __hyp_text __kvm_flush_cpu_vmid_context(void)
 {
-	dsb(ishst);
-	__tlbi(alle1is);
-	asm volatile("ic ialluis" : : );
-	dsb(ish);
+	dsb(nshst);
+	__tlbi(alle1);
+	asm volatile("ic iallu" : : );
+	dsb(nsh);
 }
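Read side by side, the arm64 hunk is the same local-versus-broadcast switch as the 32-bit one; an annotated reading of the shareability domains (the comments are interpretation, not part of the patch):

    /* Old: maintain every CPU in the Inner Shareable domain. */
    dsb(ishst);			/* publish prior writes domain-wide */
    __tlbi(alle1is);		/* invalidate all EL1 TLB entries, broadcast */
    asm volatile("ic ialluis");	/* invalidate I-cache, broadcast */
    dsb(ish);			/* wait for domain-wide completion */

    /* New: maintain only this CPU; others flush lazily on their next entry. */
    dsb(nshst);			/* order prior writes for this CPU */
    __tlbi(alle1);		/* invalidate all EL1 TLB entries, local */
    asm volatile("ic iallu");	/* invalidate I-cache, local */
    dsb(nsh);			/* wait for local completion */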
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 99c37384ba7b..03f95fffd672 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -46,6 +46,7 @@
 #include
 #include
 #include
+#include <asm/kvm_asid.h>
 #include
 #include
 #include
@@ -62,10 +63,10 @@ static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 /* Per-CPU variable containing the currently running vcpu. */
 static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu);
 
-/* The VMID used in the VTTBR */
-static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
-static u32 kvm_next_vmid;
-static DEFINE_SPINLOCK(kvm_vmid_lock);
+static DEFINE_PER_CPU(atomic64_t, active_vmids);
+static DEFINE_PER_CPU(u64, reserved_vmids);
+
+struct asid_info vmid_info;
 
 static bool vgic_present;
 
@@ -140,9 +141,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 
 	kvm_vgic_early_init(kvm);
 
-	/* Mark the initial VMID generation invalid */
-	kvm->arch.vmid.vmid_gen = 0;
-
 	/* The maximum number of VCPUs is limited by the host's GIC model */
 	kvm->arch.max_vcpus = vgic_present ?
 				kvm_vgic_get_max_vcpus() : KVM_MAX_VCPUS;
@@ -455,35 +453,17 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
 	return vcpu_mode_priv(vcpu);
 }
 
-/* Just ensure a guest exit from a particular CPU */
-static void exit_vm_noop(void *info)
+static void vmid_flush_cpu_ctxt(void)
 {
+	kvm_call_hyp(__kvm_flush_cpu_vmid_context);
 }
 
-void force_vm_exit(const cpumask_t *mask)
+static void vmid_update_ctxt(void *ctxt)
 {
-	preempt_disable();
-	smp_call_function_many(mask, exit_vm_noop, NULL, true);
-	preempt_enable();
-}
+	struct kvm_vmid *vmid = ctxt;
+	u64 asid = atomic64_read(&vmid->asid);
 
-/**
- * need_new_vmid_gen - check that the VMID is still valid
- * @vmid: The VMID to check
- *
- * return true if there is a new generation of VMIDs being used
- *
- * The hardware supports a limited set of values with the value zero reserved
- * for the host, so we check if an assigned value belongs to a previous
- * generation, which which requires us to assign a new value. If we're the
- * first to use a VMID for the new generation, we must flush necessary caches
- * and TLBs on all CPUs.
- */
-static bool need_new_vmid_gen(struct kvm_vmid *vmid)
-{
-	u64 current_vmid_gen = atomic64_read(&kvm_vmid_gen);
-	smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */
-	return unlikely(READ_ONCE(vmid->vmid_gen) != current_vmid_gen);
+	vmid->vmid = asid & ((1ULL << kvm_get_vmid_bits()) - 1);
 }
 
 /**
@@ -493,48 +473,11 @@ static bool need_new_vmid_gen(struct kvm_vmid *vmid)
  */
 static void update_vmid(struct kvm_vmid *vmid)
 {
-	if (!need_new_vmid_gen(vmid))
-		return;
-
-	spin_lock(&kvm_vmid_lock);
-
-	/*
-	 * We need to re-check the vmid_gen here to ensure that if another vcpu
-	 * already allocated a valid vmid for this vm, then this vcpu should
-	 * use the same vmid.
-	 */
-	if (!need_new_vmid_gen(vmid)) {
-		spin_unlock(&kvm_vmid_lock);
-		return;
-	}
-
-	/* First user of a new VMID generation? */
-	if (unlikely(kvm_next_vmid == 0)) {
-		atomic64_inc(&kvm_vmid_gen);
-		kvm_next_vmid = 1;
-
-		/*
-		 * On SMP we know no other CPUs can use this CPU's or each
-		 * other's VMID after force_vm_exit returns since the
-		 * kvm_vmid_lock blocks them from reentry to the guest.
-		 */
-		force_vm_exit(cpu_all_mask);
-		/*
-		 * Now broadcast TLB + ICACHE invalidation over the inner
-		 * shareable domain to make sure all data structures are
-		 * clean.
-		 */
-		kvm_call_hyp(__kvm_flush_vm_context);
-	}
+	int cpu = get_cpu();
 
-	vmid->vmid = kvm_next_vmid;
-	kvm_next_vmid++;
-	kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;
+	asid_check_context(&vmid_info, &vmid->asid, cpu, vmid);
 
-	smp_wmb();
-	WRITE_ONCE(vmid->vmid_gen, atomic64_read(&kvm_vmid_gen));
-
-	spin_unlock(&kvm_vmid_lock);
+	put_cpu();
 }
 
 static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
@@ -685,8 +628,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 */
 		cond_resched();
 
-		update_vmid(&vcpu->kvm->arch.vmid);
-
 		check_vcpu_requests(vcpu);
 
 		/*
@@ -696,6 +637,15 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 */
 		preempt_disable();
 
+		/*
+		 * The ASID/VMID allocator only tracks active VMIDs per
+		 * physical CPU, and therefore the VMID allocated may not be
+		 * preserved on VMID roll-over if the task was preempted,
+		 * making a thread's VMID inactive. So we need to call
+		 * update_vmid in a non-preemptible context.
+		 */
+		update_vmid(&vcpu->kvm->arch.vmid);
+
 		kvm_pmu_flush_hwstate(vcpu);
 
 		local_irq_disable();
@@ -734,8 +684,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 */
 		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
 
-		if (ret <= 0 || need_new_vmid_gen(&vcpu->kvm->arch.vmid) ||
-		    kvm_request_pending(vcpu)) {
+		if (ret <= 0 || kvm_request_pending(vcpu)) {
 			vcpu->mode = OUTSIDE_GUEST_MODE;
 			isb(); /* Ensure work in x_flush_hwstate is committed */
 			kvm_pmu_sync_hwstate(vcpu);
@@ -1305,6 +1254,8 @@ static void cpu_init_hyp_mode(void *dummy)
 	__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
 	__cpu_init_stage2();
+
+	kvm_call_hyp(__kvm_flush_cpu_vmid_context);
 }
 
 static void cpu_hyp_reset(void)
@@ -1412,6 +1363,17 @@ static inline void hyp_cpu_pm_exit(void)
 
 static int init_common_resources(void)
 {
+	/*
+	 * Initialize the ASID allocator telling it to allocate a single
+	 * VMID per VM.
+	 */
+	if (asid_allocator_init(&vmid_info, kvm_get_vmid_bits(), 1,
+				vmid_flush_cpu_ctxt, vmid_update_ctxt))
+		panic("Failed to initialize VMID allocator\n");
+
+	vmid_info.active = &active_vmids;
+	vmid_info.reserved = &reserved_vmids;
+
 	kvm_set_ipa_limit();
 
 	return 0;
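As a final worked example of the context-update callback above: vmid_update_ctxt() truncates the allocator's 64-bit value to the architectural VMID field, so with 4 VMID bits (as in the benchmark) a hypothetical allocated value of 0x25 (generation 0x20, VMID 5) yields:

    u64 asid = 0x25;				/* hypothetical allocator value */
    u32 vmid = asid & ((1ULL << 4) - 1);	/* == 5 */
    /* Only these low bits reach VTTBR.VMID; the generation stays software-only. */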