From patchwork Thu Jun 20 13:05:55 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 167339
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
 suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
 Julien Grall
Subject: [RFC v2 01/14] arm64/mm: Introduce asid_info structure and move
 asid_generation/asid_map to it
Date: Thu, 20 Jun 2019 14:05:55 +0100
Message-Id: <20190620130608.17230-2-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190620130608.17230-1-julien.grall@arm.com>
References: <20190620130608.17230-1-julien.grall@arm.com>

In an attempt to make the ASID allocator generic, create a new structure
asid_info to store all the information necessary for the allocator.

For now, move the variables asid_generation and asid_map to the new
structure asid_info. Follow-up patches will move more variables.

Note that, to avoid more renaming afterwards, a local variable 'info' has
been created and is a pointer to the ASID allocator structure.

Signed-off-by: Julien Grall

---
    Changes in v2:
        - Turn asid_info into a static variable
---
 arch/arm64/mm/context.c | 46 ++++++++++++++++++++++++++--------------------
 1 file changed, 26 insertions(+), 20 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 1f0ea2facf24..8167c369172d 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -30,8 +30,11 @@
 static u32 asid_bits;
 static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
 
-static atomic64_t asid_generation;
-static unsigned long *asid_map;
+static struct asid_info
+{
+        atomic64_t      generation;
+        unsigned long   *map;
+} asid_info;
 
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
@@ -88,13 +91,13 @@ void verify_cpu_asid_bits(void)
         }
 }
 
-static void flush_context(void)
+static void flush_context(struct asid_info *info)
 {
         int i;
         u64 asid;
 
         /* Update the list of reserved ASIDs and the ASID bitmap. */
-        bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
+        bitmap_clear(info->map, 0, NUM_USER_ASIDS);
 
         for_each_possible_cpu(i) {
                 asid = atomic64_xchg_relaxed(&per_cpu(active_asids, i), 0);
@@ -107,7 +110,7 @@ static void flush_context(void)
                  */
                 if (asid == 0)
                         asid = per_cpu(reserved_asids, i);
-                __set_bit(asid2idx(asid), asid_map);
+                __set_bit(asid2idx(asid), info->map);
                 per_cpu(reserved_asids, i) = asid;
         }
 
@@ -142,11 +145,11 @@ static bool check_update_reserved_asid(u64 asid, u64 newasid)
         return hit;
 }
 
-static u64 new_context(struct mm_struct *mm)
+static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 {
         static u32 cur_idx = 1;
         u64 asid = atomic64_read(&mm->context.id);
-        u64 generation = atomic64_read(&asid_generation);
+        u64 generation = atomic64_read(&info->generation);
 
         if (asid != 0) {
                 u64 newasid = generation | (asid & ~ASID_MASK);
@@ -162,7 +165,7 @@ static u64 new_context(struct mm_struct *mm)
                  * We had a valid ASID in a previous life, so try to re-use
                  * it if possible.
                  */
-                if (!__test_and_set_bit(asid2idx(asid), asid_map))
+                if (!__test_and_set_bit(asid2idx(asid), info->map))
                         return newasid;
         }
 
@@ -173,20 +176,20 @@ static u64 new_context(struct mm_struct *mm)
          * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
          * pairs.
          */
-        asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
+        asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, cur_idx);
         if (asid != NUM_USER_ASIDS)
                 goto set_asid;
 
         /* We're out of ASIDs, so increment the global generation count */
         generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION,
-                                                 &asid_generation);
-        flush_context();
+                                                 &info->generation);
+        flush_context(info);
 
         /* We have more ASIDs than CPUs, so this will always succeed */
-        asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
+        asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, 1);
 
 set_asid:
-        __set_bit(asid, asid_map);
+        __set_bit(asid, info->map);
         cur_idx = asid;
         return idx2asid(asid) | generation;
 }
@@ -195,6 +198,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 {
         unsigned long flags;
         u64 asid, old_active_asid;
+        struct asid_info *info = &asid_info;
 
         if (system_supports_cnp())
                 cpu_set_reserved_ttbr0();
@@ -217,7 +221,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
          */
         old_active_asid = atomic64_read(&per_cpu(active_asids, cpu));
         if (old_active_asid &&
-            !((asid ^ atomic64_read(&asid_generation)) >> asid_bits) &&
+            !((asid ^ atomic64_read(&info->generation)) >> asid_bits) &&
             atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu),
                                      old_active_asid, asid))
                 goto switch_mm_fastpath;
@@ -225,8 +229,8 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
         raw_spin_lock_irqsave(&cpu_asid_lock, flags);
         /* Check that our ASID belongs to the current generation. */
         asid = atomic64_read(&mm->context.id);
-        if ((asid ^ atomic64_read(&asid_generation)) >> asid_bits) {
-                asid = new_context(mm);
+        if ((asid ^ atomic64_read(&info->generation)) >> asid_bits) {
+                asid = new_context(info, mm);
                 atomic64_set(&mm->context.id, asid);
         }
 
@@ -259,16 +263,18 @@ asmlinkage void post_ttbr_update_workaround(void)
 
 static int asids_init(void)
 {
+        struct asid_info *info = &asid_info;
+
         asid_bits = get_cpu_asid_bits();
         /*
          * Expect allocation after rollover to fail if we don't have at least
          * one more ASID than CPUs. ASID #0 is reserved for init_mm.
          */
         WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus());
-        atomic64_set(&asid_generation, ASID_FIRST_VERSION);
-        asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*asid_map),
-                           GFP_KERNEL);
-        if (!asid_map)
+        atomic64_set(&info->generation, ASID_FIRST_VERSION);
+        info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*info->map),
+                            GFP_KERNEL);
+        if (!info->map)
                 panic("Failed to allocate bitmap for %lu ASIDs\n",
                       NUM_USER_ASIDS);
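The fast-path test that appears twice above, (asid ^ generation) >> asid_bits, deserves a gloss: the generation counter lives in the bits above the hardware ASID, so the XOR cancels the low bits and the shift leaves a non-zero value exactly when the stored generation is stale. A standalone model of that check, using an arbitrary 16-bit ASID size (illustrative constants only, not the kernel code):

#include <stdint.h>
#include <stdio.h>

#define ASID_BITS 16

static int asid_is_current(uint64_t asid, uint64_t generation)
{
        /* Non-zero high bits after the XOR mean a rollover happened. */
        return ((asid ^ generation) >> ASID_BITS) == 0;
}

int main(void)
{
        uint64_t gen = 1UL << ASID_BITS;        /* ASID_FIRST_VERSION */

        printf("%d\n", asid_is_current(gen | 42, gen)); /* 1: same generation */
        printf("%d\n", asid_is_current(42, gen));       /* 0: stale generation */
        return 0;
}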
From patchwork Thu Jun 20 13:05:56 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 167337
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
 suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
 Julien Grall
Subject: [RFC v2 02/14] arm64/mm: Move active_asids and reserved_asids to
 asid_info
Date: Thu, 20 Jun 2019 14:05:56 +0100
Message-Id: <20190620130608.17230-3-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190620130608.17230-1-julien.grall@arm.com>
References: <20190620130608.17230-1-julien.grall@arm.com>

The variables active_asids and reserved_asids hold information for a
given ASID allocator. So move them to the structure asid_info.

At the same time, introduce wrappers to access the active and reserved
ASIDs to make the code clearer.

Signed-off-by: Julien Grall

---
 arch/arm64/mm/context.c | 34 ++++++++++++++++++++++------------
 1 file changed, 22 insertions(+), 12 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 8167c369172d..6bacfc295f6e 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -34,10 +34,16 @@ static struct asid_info
 {
         atomic64_t      generation;
         unsigned long   *map;
+        atomic64_t __percpu     *active;
+        u64 __percpu            *reserved;
 } asid_info;
 
+#define active_asid(info, cpu)   *per_cpu_ptr((info)->active, cpu)
+#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu)
+
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
+
 static cpumask_t tlb_flush_pending;
 
 #define ASID_MASK               (~GENMASK(asid_bits - 1, 0))
@@ -100,7 +106,7 @@ static void flush_context(struct asid_info *info)
         bitmap_clear(info->map, 0, NUM_USER_ASIDS);
 
         for_each_possible_cpu(i) {
-                asid = atomic64_xchg_relaxed(&per_cpu(active_asids, i), 0);
+                asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
                 /*
                  * If this CPU has already been through a
                  * rollover, but hasn't run another task in
@@ -109,9 +115,9 @@ static void flush_context(struct asid_info *info)
                  * the process it is still running.
                  */
                 if (asid == 0)
-                        asid = per_cpu(reserved_asids, i);
+                        asid = reserved_asid(info, i);
                 __set_bit(asid2idx(asid), info->map);
-                per_cpu(reserved_asids, i) = asid;
+                reserved_asid(info, i) = asid;
         }
 
         /*
@@ -121,7 +127,8 @@ static void flush_context(struct asid_info *info)
         cpumask_setall(&tlb_flush_pending);
 }
 
-static bool check_update_reserved_asid(u64 asid, u64 newasid)
+static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
+                                       u64 newasid)
 {
         int cpu;
         bool hit = false;
@@ -136,9 +143,9 @@ static bool check_update_reserved_asid(u64 asid, u64 newasid)
          * generation.
          */
         for_each_possible_cpu(cpu) {
-                if (per_cpu(reserved_asids, cpu) == asid) {
+                if (reserved_asid(info, cpu) == asid) {
                         hit = true;
-                        per_cpu(reserved_asids, cpu) = newasid;
+                        reserved_asid(info, cpu) = newasid;
                 }
         }
 
@@ -158,7 +165,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
                  * If our current ASID was active during a rollover, we
                  * can continue to use it and this was just a false alarm.
                  */
-                if (check_update_reserved_asid(asid, newasid))
+                if (check_update_reserved_asid(info, asid, newasid))
                         return newasid;
 
                 /*
@@ -207,8 +214,8 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 
         /*
          * The memory ordering here is subtle.
-         * If our active_asids is non-zero and the ASID matches the current
-         * generation, then we update the active_asids entry with a relaxed
+         * If our active_asid is non-zero and the ASID matches the current
+         * generation, then we update the active_asid entry with a relaxed
          * cmpxchg. Racing with a concurrent rollover means that either:
          *
          * - We get a zero back from the cmpxchg and end up waiting on the
@@ -219,10 +226,10 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
          *   relaxed xchg in flush_context will treat us as reserved
          *   because atomic RmWs are totally ordered for a given location.
          */
-        old_active_asid = atomic64_read(&per_cpu(active_asids, cpu));
+        old_active_asid = atomic64_read(&active_asid(info, cpu));
         if (old_active_asid &&
             !((asid ^ atomic64_read(&info->generation)) >> asid_bits) &&
-            atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu),
+            atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
                                      old_active_asid, asid))
                 goto switch_mm_fastpath;
 
@@ -237,7 +244,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
         if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
                 local_flush_tlb_all();
 
-        atomic64_set(&per_cpu(active_asids, cpu), asid);
+        atomic64_set(&active_asid(info, cpu), asid);
         raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
 
 switch_mm_fastpath:
@@ -278,6 +285,9 @@ static int asids_init(void)
                 panic("Failed to allocate bitmap for %lu ASIDs\n",
                       NUM_USER_ASIDS);
 
+        info->active = &active_asids;
+        info->reserved = &reserved_asids;
+
         pr_info("ASID allocator initialised with %lu entries\n", NUM_USER_ASIDS);
         return 0;
 }
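The active_asid()/reserved_asid() wrappers dereference a pointer to per-CPU data instead of naming a fixed per-CPU variable, which is what lets later patches hang the storage off asid_info. A rough userspace stand-in, with plain arrays in place of per_cpu_ptr() (all names here are hypothetical):

#include <stdint.h>

#define NR_CPUS 4

struct asid_info_model {
        uint64_t *active;       /* stands in for atomic64_t __percpu * */
        uint64_t *reserved;     /* stands in for u64 __percpu * */
};

static uint64_t active_asids[NR_CPUS];
static uint64_t reserved_asids[NR_CPUS];

/* Like the patch's wrappers, these expand to lvalues. */
#define active_asid(info, cpu)   ((info)->active[cpu])
#define reserved_asid(info, cpu) ((info)->reserved[cpu])

int main(void)
{
        struct asid_info_model info = {
                .active = active_asids,
                .reserved = reserved_asids,
        };

        /* Usable on the left-hand side, exactly as in flush_context(). */
        reserved_asid(&info, 2) = 0x10001;
        return (int)active_asid(&info, 0);
}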
From patchwork Thu Jun 20 13:05:57 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 167326
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
 suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
 Julien Grall
Subject: [RFC v2 03/14] arm64/mm: Move bits to asid_info
Date: Thu, 20 Jun 2019 14:05:57 +0100
Message-Id: <20190620130608.17230-4-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190620130608.17230-1-julien.grall@arm.com>
References: <20190620130608.17230-1-julien.grall@arm.com>

The variable asid_bits holds information for a given ASID allocator, so
move it to the asid_info structure.

Because most of the macros were relying on asid_bits, they now take an
extra parameter that is a pointer to the asid_info structure.

Signed-off-by: Julien Grall

---
 arch/arm64/mm/context.c | 59 +++++++++++++++++++++++++------------------------
 1 file changed, 30 insertions(+), 29 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 6bacfc295f6e..7883347ece52 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -27,7 +27,6 @@
 #include <asm/smp.h>
 #include <asm/tlbflush.h>
 
-static u32 asid_bits;
 static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
 
 static struct asid_info
@@ -36,6 +35,7 @@ static struct asid_info
         unsigned long   *map;
         atomic64_t __percpu     *active;
         u64 __percpu            *reserved;
+        u32                     bits;
 } asid_info;
 
 #define active_asid(info, cpu)   *per_cpu_ptr((info)->active, cpu)
@@ -46,17 +46,17 @@ static DEFINE_PER_CPU(u64, reserved_asids);
 
 static cpumask_t tlb_flush_pending;
 
-#define ASID_MASK               (~GENMASK(asid_bits - 1, 0))
-#define ASID_FIRST_VERSION      (1UL << asid_bits)
+#define ASID_MASK(info)                 (~GENMASK((info)->bits - 1, 0))
+#define ASID_FIRST_VERSION(info)        (1UL << ((info)->bits))
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-#define NUM_USER_ASIDS          (ASID_FIRST_VERSION >> 1)
-#define asid2idx(asid)          (((asid) & ~ASID_MASK) >> 1)
-#define idx2asid(idx)           (((idx) << 1) & ~ASID_MASK)
+#define NUM_USER_ASIDS(info)            (ASID_FIRST_VERSION(info) >> 1)
+#define asid2idx(info, asid)            (((asid) & ~ASID_MASK(info)) >> 1)
+#define idx2asid(info, idx)             (((idx) << 1) & ~ASID_MASK(info))
 #else
-#define NUM_USER_ASIDS          (ASID_FIRST_VERSION)
-#define asid2idx(asid)          ((asid) & ~ASID_MASK)
-#define idx2asid(idx)           asid2idx(idx)
+#define NUM_USER_ASIDS(info)            (ASID_FIRST_VERSION(info))
+#define asid2idx(info, asid)            ((asid) & ~ASID_MASK(info))
+#define idx2asid(info, idx)             asid2idx(info, idx)
 #endif
 
 /* Get the ASIDBits supported by the current CPU */
@@ -86,13 +86,13 @@ void verify_cpu_asid_bits(void)
 {
         u32 asid = get_cpu_asid_bits();
 
-        if (asid < asid_bits) {
+        if (asid < asid_info.bits) {
                 /*
                  * We cannot decrease the ASID size at runtime, so panic if we support
                  * fewer ASID bits than the boot CPU.
                  */
                 pr_crit("CPU%d: smaller ASID size(%u) than boot CPU (%u)\n",
-                                smp_processor_id(), asid, asid_bits);
+                                smp_processor_id(), asid, asid_info.bits);
                 cpu_panic_kernel();
         }
 }
@@ -103,7 +103,7 @@ static void flush_context(struct asid_info *info)
         u64 asid;
 
         /* Update the list of reserved ASIDs and the ASID bitmap. */
-        bitmap_clear(info->map, 0, NUM_USER_ASIDS);
+        bitmap_clear(info->map, 0, NUM_USER_ASIDS(info));
 
         for_each_possible_cpu(i) {
                 asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
@@ -116,7 +116,7 @@ static void flush_context(struct asid_info *info)
                  */
                 if (asid == 0)
                         asid = reserved_asid(info, i);
-                __set_bit(asid2idx(asid), info->map);
+                __set_bit(asid2idx(info, asid), info->map);
                 reserved_asid(info, i) = asid;
         }
 
@@ -159,7 +159,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
         u64 generation = atomic64_read(&info->generation);
 
         if (asid != 0) {
-                u64 newasid = generation | (asid & ~ASID_MASK);
+                u64 newasid = generation | (asid & ~ASID_MASK(info));
 
                 /*
                  * If our current ASID was active during a rollover, we
@@ -172,7 +172,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
                  * We had a valid ASID in a previous life, so try to re-use
                  * it if possible.
                  */
-                if (!__test_and_set_bit(asid2idx(asid), info->map))
+                if (!__test_and_set_bit(asid2idx(info, asid), info->map))
                         return newasid;
         }
 
@@ -183,22 +183,22 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
          * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
          * pairs.
          */
-        asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, cur_idx);
-        if (asid != NUM_USER_ASIDS)
+        asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), cur_idx);
+        if (asid != NUM_USER_ASIDS(info))
                 goto set_asid;
 
         /* We're out of ASIDs, so increment the global generation count */
-        generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION,
+        generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info),
                                                  &info->generation);
         flush_context(info);
 
         /* We have more ASIDs than CPUs, so this will always succeed */
-        asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, 1);
+        asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), 1);
 
 set_asid:
         __set_bit(asid, info->map);
         cur_idx = asid;
-        return idx2asid(asid) | generation;
+        return idx2asid(info, asid) | generation;
 }
 
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
@@ -228,7 +228,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
          */
         old_active_asid = atomic64_read(&active_asid(info, cpu));
         if (old_active_asid &&
-            !((asid ^ atomic64_read(&info->generation)) >> asid_bits) &&
+            !((asid ^ atomic64_read(&info->generation)) >> info->bits) &&
             atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
                                      old_active_asid, asid))
                 goto switch_mm_fastpath;
@@ -236,7 +236,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
         raw_spin_lock_irqsave(&cpu_asid_lock, flags);
         /* Check that our ASID belongs to the current generation. */
         asid = atomic64_read(&mm->context.id);
-        if ((asid ^ atomic64_read(&info->generation)) >> asid_bits) {
+        if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
                 asid = new_context(info, mm);
                 atomic64_set(&mm->context.id, asid);
         }
@@ -272,23 +272,24 @@ static int asids_init(void)
 {
         struct asid_info *info = &asid_info;
 
-        asid_bits = get_cpu_asid_bits();
+        info->bits = get_cpu_asid_bits();
         /*
          * Expect allocation after rollover to fail if we don't have at least
          * one more ASID than CPUs. ASID #0 is reserved for init_mm.
          */
-        WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus());
-        atomic64_set(&info->generation, ASID_FIRST_VERSION);
-        info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*info->map),
-                            GFP_KERNEL);
+        WARN_ON(NUM_USER_ASIDS(info) - 1 <= num_possible_cpus());
+        atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
+        info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)),
+                            sizeof(*info->map), GFP_KERNEL);
         if (!info->map)
                 panic("Failed to allocate bitmap for %lu ASIDs\n",
-                      NUM_USER_ASIDS);
+                      NUM_USER_ASIDS(info));
 
         info->active = &active_asids;
         info->reserved = &reserved_asids;
 
-        pr_info("ASID allocator initialised with %lu entries\n", NUM_USER_ASIDS);
+        pr_info("ASID allocator initialised with %lu entries\n",
+                NUM_USER_ASIDS(info));
         return 0;
 }
 early_initcall(asids_init);
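To see what the parameterized macros expand to, here is a compilable sketch evaluating them for bits = 16 (an illustrative value; the kernel reads the supported width from the CPU at boot):

#include <stdint.h>
#include <stdio.h>

/* The two base macros from the patch, with 'bits' passed directly
 * instead of through the asid_info pointer (illustration only). */
#define ASID_MASK(bits)          (~(((uint64_t)1 << (bits)) - 1))
#define ASID_FIRST_VERSION(bits) ((uint64_t)1 << (bits))

int main(void)
{
        unsigned int bits = 16;

        /* 0xffffffffffff0000: everything above the 16 ASID bits */
        printf("ASID_MASK          = %#llx\n",
               (unsigned long long)ASID_MASK(bits));
        /* 0x10000: the first generation value */
        printf("ASID_FIRST_VERSION = %#llx\n",
               (unsigned long long)ASID_FIRST_VERSION(bits));
        return 0;
}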
*/ - WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus()); - atomic64_set(&info->generation, ASID_FIRST_VERSION); - info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*info->map), - GFP_KERNEL); + WARN_ON(NUM_USER_ASIDS(info) - 1 <= num_possible_cpus()); + atomic64_set(&info->generation, ASID_FIRST_VERSION(info)); + info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)), + sizeof(*info->map), GFP_KERNEL); if (!info->map) panic("Failed to allocate bitmap for %lu ASIDs\n", - NUM_USER_ASIDS); + NUM_USER_ASIDS(info)); info->active = &active_asids; info->reserved = &reserved_asids; - pr_info("ASID allocator initialised with %lu entries\n", NUM_USER_ASIDS); + pr_info("ASID allocator initialised with %lu entries\n", + NUM_USER_ASIDS(info)); return 0; } early_initcall(asids_init); From patchwork Thu Jun 20 13:05:58 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 167338 Delivered-To: patch@linaro.org Received: by 2002:a92:4782:0:0:0:0:0 with SMTP id e2csp2049585ilk; Thu, 20 Jun 2019 06:07:18 -0700 (PDT) X-Google-Smtp-Source: APXvYqxHrVMUGeISSLOsPdFT1lOdg1j8Fte32bi0L51LT4xrCJPrG8dVOEeZbxK0NRxSUbTWU2I3 X-Received: by 2002:a17:902:1101:: with SMTP id d1mr4706407pla.212.1561036038097; Thu, 20 Jun 2019 06:07:18 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1561036038; cv=none; d=google.com; s=arc-20160816; b=ZkFShL9PtToTRWyLWyxv6+ZcochNdTSgL7b7byjgKIqXoCVMCc8Nt5Ff+jKZ7TdBIK WmCGsOfUqluukUWGzZJat/1IJeV7eL2HNo9R791EaCI7HAU6p+ksV7gIvrdnWXFFhUFJ B5iJRklz2n6oNdtmCGNQ9lqut5mnQpM9jdFHNoRYlSzDyzEzMVAFe5bzmQR2mJDDj/bC GswCFEAqUL0huqkR/PC5vA2efD7AI0awuqcB12h9H6xiFY0RXT466N8/RiB5S4vKq8Cb bDeKwdwNAwMNe9sbhFyzoZyEB2B2fiZiL2M6CfehhgK8FOkvUwNIg3xiLuvDasHPqxcl IRrQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:references:in-reply-to:message-id:date :subject:cc:to:from; bh=tgupbtoxMN1F6gL1W5bDbb4T+e4aMAl9Xcd7FfiXpfY=; b=VGMSas7OyoUpw7kHrbiWAfgWAWxeg4IFjWegFuJ+JHdOV0wrAq+Xn1tM4H/iQvov+q DcULVUu1MMy8Y+JYqCmWyljxyBbuBDiEgxF6GUsWxFsgYyh1dztRiLUL+QcxDZjHzPjd tDDPDzApjck4bVS8aPoVdm14xoQd8erDmrjN5d2qzMqBpiV3vZ+7Cguq8HkBxtjmzB46 IhR4KcQSPbPpKv3BOxe96iVLmTP8Iddr2+kZTMyGaKzqiqDLbR9swk4v1hpAS2uZG+Nr 9hXMhi8FaZz45a0BH9BTEy59Y8xfGcn4VowsLCPWocwDkxrrqgvqRN/Xze9EhzF667Xa 4+2A== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. 
[209.132.180.67]) by mx.google.com with ESMTP id y16si4369893pjj.80.2019.06.20.06.07.17; Thu, 20 Jun 2019 06:07:18 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732125AbfFTNHQ (ORCPT + 30 others); Thu, 20 Jun 2019 09:07:16 -0400 Received: from foss.arm.com ([217.140.110.172]:36756 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731975AbfFTNG3 (ORCPT ); Thu, 20 Jun 2019 09:06:29 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 317B7C0A; Thu, 20 Jun 2019 06:06:29 -0700 (PDT) Received: from e108454-lin.cambridge.arm.com (e108454-lin.cambridge.arm.com [10.1.196.50]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D122C3F718; Thu, 20 Jun 2019 06:06:27 -0700 (PDT) From: Julien Grall To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall Subject: [RFC v2 04/14] arm64/mm: Move the variable lock and tlb_flush_pending to asid_info Date: Thu, 20 Jun 2019 14:05:58 +0100 Message-Id: <20190620130608.17230-5-julien.grall@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20190620130608.17230-1-julien.grall@arm.com> References: <20190620130608.17230-1-julien.grall@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org The variables lock and tlb_flush_pending holds information for a given ASID allocator. So move them to the asid_info structure. 
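The flush_pending cpumask implements a simple handshake: a rollover marks every CPU, and each CPU clears its own bit and flushes its local TLB the next time it switches context. A toy model with a bitmask standing in for cpumask_t (illustrative only, no atomicity):

#include <stdint.h>
#include <stdio.h>

static uint64_t flush_pending;  /* bit n == CPU n; stands in for cpumask_t */

static void rollover(unsigned int nr_cpus)
{
        flush_pending = (1UL << nr_cpus) - 1;   /* cpumask_setall() */
}

static void on_context_switch(unsigned int cpu)
{
        /* cpumask_test_and_clear_cpu() + local_flush_tlb_all() */
        if (flush_pending & (1UL << cpu)) {
                flush_pending &= ~(1UL << cpu);
                printf("cpu%u: local TLB flush\n", cpu);
        }
}

int main(void)
{
        rollover(4);
        on_context_switch(2);   /* flushes */
        on_context_switch(2);   /* second switch: nothing pending */
        return 0;
}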
From patchwork Thu Jun 20 13:05:59 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 167328
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
 suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
 Julien Grall
Subject: [RFC v2 05/14] arm64/mm: Remove dependency on MM in new_context
Date: Thu, 20 Jun 2019 14:05:59 +0100
Message-Id: <20190620130608.17230-6-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190620130608.17230-1-julien.grall@arm.com>
References: <20190620130608.17230-1-julien.grall@arm.com>

The function new_context will be part of a generic ASID allocator. At
the moment, the MM structure is only used to fetch the ASID.

To remove the dependency on MM, it is possible to just pass a pointer to
the current ASID.

Signed-off-by: Julien Grall

---
 arch/arm64/mm/context.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 6457a9310fe4..a9cc59288b08 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -151,10 +151,10 @@ static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
         return hit;
 }
 
-static u64 new_context(struct asid_info *info, struct mm_struct *mm)
+static u64 new_context(struct asid_info *info, atomic64_t *pasid)
 {
         static u32 cur_idx = 1;
-        u64 asid = atomic64_read(&mm->context.id);
+        u64 asid = atomic64_read(pasid);
         u64 generation = atomic64_read(&info->generation);
 
         if (asid != 0) {
@@ -236,7 +236,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
         /* Check that our ASID belongs to the current generation. */
         asid = atomic64_read(&mm->context.id);
         if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
-                asid = new_context(info, mm);
+                asid = new_context(info, &mm->context.id);
                 atomic64_set(&mm->context.id, asid);
         }
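The effect of the change is that the allocator core only ever sees an atomic64_t holding the current ASID; the caller decides that this word happens to live in mm->context.id. A minimal userspace sketch of that inversion of responsibility (names hypothetical, allocation logic elided):

#include <stdint.h>

struct asid_info_model { uint64_t generation; };

/* The core sees only the ASID word, never the structure that owns it. */
static uint64_t new_context_model(struct asid_info_model *info,
                                  uint64_t *pasid)
{
        /* real allocation logic elided; just stamp the generation */
        return info->generation | (*pasid & 0xffff);
}

struct mm_context_model { uint64_t id; };

int main(void)
{
        struct asid_info_model info = { .generation = 0x10000 };
        struct mm_context_model ctx = { .id = 42 };

        /* The caller, not the allocator, knows where the ASID lives. */
        ctx.id = new_context_model(&info, &ctx.id);
        return 0;
}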
From patchwork Thu Jun 20 13:06:00 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 167327
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
 suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
 Julien Grall
Subject: [RFC v2 06/14] arm64/mm: Store the number of asid allocated per
 context
Date: Thu, 20 Jun 2019 14:06:00 +0100
Message-Id: <20190620130608.17230-7-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190620130608.17230-1-julien.grall@arm.com>
References: <20190620130608.17230-1-julien.grall@arm.com>

Currently the number of ASIDs allocated per context is determined at
compile time. As the algorithm is becoming generic, the user may want to
instantiate the ASID allocator multiple times with different numbers of
ASIDs allocated per context.

Add a field in asid_info to track the number of ASIDs allocated per
context. This is stored as a shift amount to avoid division in the code,
which means the number of ASIDs allocated per context should be a power
of two.

At the same time, rename NUM_USER_ASIDS to NUM_CTXT_ASIDS to make the
name more generic.

Signed-off-by: Julien Grall

---
 arch/arm64/mm/context.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index a9cc59288b08..d128f02644b0 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -37,6 +37,8 @@ static struct asid_info
         raw_spinlock_t          lock;
         /* Which CPU requires context flush on next call */
         cpumask_t               flush_pending;
+        /* Number of ASID allocated by context (shift value) */
+        unsigned int            ctxt_shift;
 } asid_info;
 
 #define active_asid(info, cpu)   *per_cpu_ptr((info)->active, cpu)
@@ -49,15 +51,15 @@ static DEFINE_PER_CPU(u64, reserved_asids);
 #define ASID_FIRST_VERSION(info)        (1UL << ((info)->bits))
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-#define NUM_USER_ASIDS(info)            (ASID_FIRST_VERSION(info) >> 1)
-#define asid2idx(info, asid)            (((asid) & ~ASID_MASK(info)) >> 1)
-#define idx2asid(info, idx)             (((idx) << 1) & ~ASID_MASK(info))
+#define ASID_PER_CONTEXT                2
 #else
-#define NUM_USER_ASIDS(info)            (ASID_FIRST_VERSION(info))
-#define asid2idx(info, asid)            ((asid) & ~ASID_MASK(info))
-#define idx2asid(info, idx)             asid2idx(info, idx)
+#define ASID_PER_CONTEXT                1
 #endif
 
+#define NUM_CTXT_ASIDS(info)    (ASID_FIRST_VERSION(info) >> (info)->ctxt_shift)
+#define asid2idx(info, asid)    (((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift)
+#define idx2asid(info, idx)     (((idx) << (info)->ctxt_shift) & ~ASID_MASK(info))
+
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
 {
@@ -102,7 +104,7 @@ static void flush_context(struct asid_info *info)
         u64 asid;
 
         /* Update the list of reserved ASIDs and the ASID bitmap. */
-        bitmap_clear(info->map, 0, NUM_USER_ASIDS(info));
+        bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info));
 
         for_each_possible_cpu(i) {
                 asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
@@ -182,8 +184,8 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid)
          * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
          * pairs.
          */
-        asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), cur_idx);
-        if (asid != NUM_USER_ASIDS(info))
+        asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx);
+        if (asid != NUM_CTXT_ASIDS(info))
                 goto set_asid;
 
         /* We're out of ASIDs, so increment the global generation count */
@@ -192,7 +194,7 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid)
         flush_context(info);
 
         /* We have more ASIDs than CPUs, so this will always succeed */
-        asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), 1);
+        asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1);
 
 set_asid:
         __set_bit(asid, info->map);
@@ -272,17 +274,18 @@ static int asids_init(void)
         struct asid_info *info = &asid_info;
 
         info->bits = get_cpu_asid_bits();
+        info->ctxt_shift = ilog2(ASID_PER_CONTEXT);
         /*
          * Expect allocation after rollover to fail if we don't have at least
          * one more ASID than CPUs. ASID #0 is reserved for init_mm.
          */
-        WARN_ON(NUM_USER_ASIDS(info) - 1 <= num_possible_cpus());
+        WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
         atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
-        info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)),
+        info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
                             sizeof(*info->map), GFP_KERNEL);
         if (!info->map)
                 panic("Failed to allocate bitmap for %lu ASIDs\n",
-                      NUM_USER_ASIDS(info));
+                      NUM_CTXT_ASIDS(info));
 
         info->active = &active_asids;
         info->reserved = &reserved_asids;
@@ -290,7 +293,7 @@ static int asids_init(void)
         raw_spin_lock_init(&info->lock);
 
         pr_info("ASID allocator initialised with %lu entries\n",
-                NUM_USER_ASIDS(info));
+                NUM_CTXT_ASIDS(info));
         return 0;
 }
 early_initcall(asids_init);
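A worked example of the idx<->asid mapping for asid_per_ctxt = 2, i.e. ctxt_shift = 1 (the kernel-unmap case, where even/odd ASID pairs share one bitmap slot); the constants are illustrative:

#include <stdint.h>
#include <stdio.h>

#define BITS       16
#define CTXT_SHIFT 1
#define ASID_MASK  (~((1UL << BITS) - 1))

/* Shift instead of divide/multiply, as the commit message explains. */
#define asid2idx(asid) (((asid) & ~ASID_MASK) >> CTXT_SHIFT)
#define idx2asid(idx)  (((idx) << CTXT_SHIFT) & ~ASID_MASK)

int main(void)
{
        /* Each bitmap index covers a pair of hardware ASIDs. */
        printf("%lu\n", idx2asid(3UL));  /* 6: index 3 -> even ASID 6 */
        printf("%lu\n", asid2idx(7UL));  /* 3: ASIDs 6 and 7 share index 3 */
        return 0;
}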
From patchwork Thu Jun 20 13:06:01 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 167336
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
 suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
 Julien Grall
Subject: [RFC v2 07/14] arm64/mm: Introduce NUM_ASIDS
Date: Thu, 20 Jun 2019 14:06:01 +0100
Message-Id: <20190620130608.17230-8-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190620130608.17230-1-julien.grall@arm.com>
References: <20190620130608.17230-1-julien.grall@arm.com>

At the moment ASID_FIRST_VERSION is used to know the number of ASIDs
supported. As we are going to move the ASID allocator into a separate
file, it would be better to use a different name for external users.

This patch adds NUM_ASIDS and implements ASID_FIRST_VERSION using it.

Signed-off-by: Julien Grall

---
 arch/arm64/mm/context.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index d128f02644b0..beba8e5b4100 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -48,7 +48,9 @@ static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
 
 #define ASID_MASK(info)                 (~GENMASK((info)->bits - 1, 0))
-#define ASID_FIRST_VERSION(info)        (1UL << ((info)->bits))
+#define NUM_ASIDS(info)                 (1UL << ((info)->bits))
+
+#define ASID_FIRST_VERSION(info)        NUM_ASIDS(info)
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 #define ASID_PER_CONTEXT                2
@@ -56,7 +58,7 @@ static DEFINE_PER_CPU(u64, reserved_asids);
 #define ASID_PER_CONTEXT                1
 #endif
 
-#define NUM_CTXT_ASIDS(info)    (ASID_FIRST_VERSION(info) >> (info)->ctxt_shift)
+#define NUM_CTXT_ASIDS(info)    (NUM_ASIDS(info) >> (info)->ctxt_shift)
 #define asid2idx(info, asid)    (((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift)
 #define idx2asid(info, idx)     (((idx) << (info)->ctxt_shift) & ~ASID_MASK(info))
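For concreteness, with bits = 16 and ctxt_shift = 1 the two quantities differ only by the shift (a quick sketch; values illustrative):

#include <stdio.h>

int main(void)
{
        unsigned long num_asids = 1UL << 16;     /* NUM_ASIDS: hardware ASIDs */
        unsigned long num_ctxt  = num_asids >> 1; /* NUM_CTXT_ASIDS: contexts */

        printf("NUM_ASIDS=%lu NUM_CTXT_ASIDS=%lu\n", num_asids, num_ctxt);
        return 0;
}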
From patchwork Thu Jun 20 13:06:02 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 167329
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
 suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
 Julien Grall
Subject: [RFC v2 08/14] arm64/mm: Split asid_inits in 2 parts
Date: Thu, 20 Jun 2019 14:06:02 +0100
Message-Id: <20190620130608.17230-9-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190620130608.17230-1-julien.grall@arm.com>
References: <20190620130608.17230-1-julien.grall@arm.com>

Move the common initialization of the ASID allocator out into a separate
function.

Signed-off-by: Julien Grall

---
 arch/arm64/mm/context.c | 43 +++++++++++++++++++++++++++++++------------
 1 file changed, 31 insertions(+), 12 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index beba8e5b4100..81bc3d365436 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -271,31 +271,50 @@ asmlinkage void post_ttbr_update_workaround(void)
                                 CONFIG_CAVIUM_ERRATUM_27456));
 }
 
-static int asids_init(void)
+/*
+ * Initialize the ASID allocator
+ *
+ * @info: Pointer to the asid allocator structure
+ * @bits: Number of ASIDs available
+ * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are
+ * allocated contiguously for a given context. This value should be a power of
+ * 2.
+ */
+static int asid_allocator_init(struct asid_info *info,
+                               u32 bits, unsigned int asid_per_ctxt)
 {
-        struct asid_info *info = &asid_info;
-
-        info->bits = get_cpu_asid_bits();
-        info->ctxt_shift = ilog2(ASID_PER_CONTEXT);
+        info->bits = bits;
+        info->ctxt_shift = ilog2(asid_per_ctxt);
         /*
          * Expect allocation after rollover to fail if we don't have at least
-         * one more ASID than CPUs. ASID #0 is reserved for init_mm.
+         * one more ASID than CPUs. ASID #0 is always reserved.
          */
         WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
         atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
         info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
                             sizeof(*info->map), GFP_KERNEL);
         if (!info->map)
-                panic("Failed to allocate bitmap for %lu ASIDs\n",
-                      NUM_CTXT_ASIDS(info));
-
-        info->active = &active_asids;
-        info->reserved = &reserved_asids;
+                return -ENOMEM;
 
         raw_spin_lock_init(&info->lock);
 
+        return 0;
+}
+
+static int asids_init(void)
+{
+        u32 bits = get_cpu_asid_bits();
+
+        if (asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT))
+                panic("Unable to initialize ASID allocator for %lu ASIDs\n",
+                      1UL << bits);
+
+        asid_info.active = &active_asids;
+        asid_info.reserved = &reserved_asids;
+
         pr_info("ASID allocator initialised with %lu entries\n",
-                NUM_CTXT_ASIDS(info));
+                NUM_CTXT_ASIDS(&asid_info));
+
         return 0;
 }
 early_initcall(asids_init);
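Note the sense of the return-value check in asids_init() above: asid_allocator_init() returns 0 on success and -ENOMEM on failure, so the caller must panic on a non-zero result. A userspace model of the factored-out initializer and its caller (hypothetical names; __builtin_ctz stands in for ilog2):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct asid_info_model {
        uint64_t generation;
        unsigned long *map;
        unsigned int bits, ctxt_shift;
};

static int asid_allocator_init_model(struct asid_info_model *info,
                                     unsigned int bits, unsigned int per_ctxt)
{
        size_t nr = (1UL << bits) / per_ctxt;        /* NUM_CTXT_ASIDS */

        info->bits = bits;
        info->ctxt_shift = __builtin_ctz(per_ctxt); /* ilog2 for powers of 2 */
        info->generation = 1UL << bits;             /* ASID_FIRST_VERSION */
        info->map = calloc((nr + 63) / 64, sizeof(*info->map));
        return info->map ? 0 : -ENOMEM;
}

int main(void)
{
        struct asid_info_model info;

        /* 0 means success, so "panic" (here: abort) on non-zero. */
        if (asid_allocator_init_model(&info, 16, 2))
                abort();
        puts("allocator ready");
        free(info.map);
        return 0;
}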
      */
     WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
     atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
     info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
                 sizeof(*info->map), GFP_KERNEL);
     if (!info->map)
-        panic("Failed to allocate bitmap for %lu ASIDs\n",
-              NUM_CTXT_ASIDS(info));
-
-    info->active = &active_asids;
-    info->reserved = &reserved_asids;
+        return -ENOMEM;

     raw_spin_lock_init(&info->lock);

+    return 0;
+}
+
+static int asids_init(void)
+{
+    u32 bits = get_cpu_asid_bits();
+
+    if (!asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT))
+        panic("Unable to initialize ASID allocator for %lu ASIDs\n",
+              1UL << bits);
+
+    asid_info.active = &active_asids;
+    asid_info.reserved = &reserved_asids;
+
     pr_info("ASID allocator initialised with %lu entries\n",
-        NUM_CTXT_ASIDS(info));
+        NUM_CTXT_ASIDS(&asid_info));
+
     return 0;
 }
 early_initcall(asids_init);

From patchwork Thu Jun 20 13:06:03 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 167335
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall
Subject: [RFC v2 09/14] arm64/mm: Split the function check_and_switch_context in 3 parts
Date: Thu, 20 Jun 2019 14:06:03 +0100
Message-Id: <20190620130608.17230-10-julien.grall@arm.com>
In-Reply-To: <20190620130608.17230-1-julien.grall@arm.com>
References: <20190620130608.17230-1-julien.grall@arm.com>

The function check_and_switch_context is used to:
    1) Check whether the ASID is still valid
    2) Generate a new one if it is not valid
    3) Switch the context

While the latter is specific to the MM subsystem, the rest could be part
of the generic ASID allocator.

After this patch, the function is split in three parts, corresponding to
three functions:
    1) asid_check_context: Check if the ASID is still valid
    2) asid_new_context: Generate a new ASID for the context
    3) check_and_switch_context: Call 1) and 2) and switch the context

1) and 2) have not been merged into a single function because we want to
avoid adding a branch when the ASID is still valid. This will matter
when the code is moved to a separate file later on, as 1) will reside in
the header as a static inline function.

Signed-off-by: Julien Grall
---
    Will wants to avoid adding a branch when the ASID is still valid, so
    1) and 2) are separate functions. The former will move to a new
    header and be made static inline.
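    The fast path in asid_check_context hinges on the generation test
    "(asid ^ generation) >> bits": the low 'bits' bits hold the ASID
    itself and everything above them holds the generation, so the XOR
    cancels the generation field exactly when the ASID is current. The
    standalone snippet below (not part of the series; BITS and the
    sample values are hypothetical) shows the predicate in action:

    /*
     * Standalone illustration of the generation check used on the
     * fast path. BITS stands in for asid_info.bits; the values are
     * made up for demonstration.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define BITS 16

    static int asid_is_stale(uint64_t asid, uint64_t generation)
    {
        /* Non-zero exactly when the upper (generation) bits differ. */
        return ((asid ^ generation) >> BITS) != 0;
    }

    int main(void)
    {
        uint64_t gen1 = 1ULL << BITS;  /* ASID_FIRST_VERSION */
        uint64_t gen2 = 2ULL << BITS;  /* generation after one rollover */
        uint64_t asid = gen1 | 0x2a;   /* ASID 42 of generation 1 */

        printf("current generation: stale=%d\n", asid_is_stale(asid, gen1)); /* 0 */
        printf("after rollover:     stale=%d\n", asid_is_stale(asid, gen2)); /* 1 */
        return 0;
    }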
--- arch/arm64/mm/context.c | 51 +++++++++++++++++++++++++++++++++++++------------ 1 file changed, 39 insertions(+), 12 deletions(-) -- 2.11.0 diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 81bc3d365436..fbef5a5c5624 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -204,16 +204,21 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid) return idx2asid(info, asid) | generation; } -void check_and_switch_context(struct mm_struct *mm, unsigned int cpu) +static void asid_new_context(struct asid_info *info, atomic64_t *pasid, + unsigned int cpu); + +/* + * Check the ASID is still valid for the context. If not generate a new ASID. + * + * @pasid: Pointer to the current ASID batch + * @cpu: current CPU ID. Must have been acquired throught get_cpu() + */ +static void asid_check_context(struct asid_info *info, + atomic64_t *pasid, unsigned int cpu) { - unsigned long flags; u64 asid, old_active_asid; - struct asid_info *info = &asid_info; - if (system_supports_cnp()) - cpu_set_reserved_ttbr0(); - - asid = atomic64_read(&mm->context.id); + asid = atomic64_read(pasid); /* * The memory ordering here is subtle. @@ -234,14 +239,30 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu) !((asid ^ atomic64_read(&info->generation)) >> info->bits) && atomic64_cmpxchg_relaxed(&active_asid(info, cpu), old_active_asid, asid)) - goto switch_mm_fastpath; + return; + + asid_new_context(info, pasid, cpu); +} + +/* + * Generate a new ASID for the context. + * + * @pasid: Pointer to the current ASID batch allocated. It will be updated + * with the new ASID batch. + * @cpu: current CPU ID. Must have been acquired through get_cpu() + */ +static void asid_new_context(struct asid_info *info, atomic64_t *pasid, + unsigned int cpu) +{ + unsigned long flags; + u64 asid; raw_spin_lock_irqsave(&info->lock, flags); /* Check that our ASID belongs to the current generation. 
      */
-    asid = atomic64_read(&mm->context.id);
+    asid = atomic64_read(pasid);
     if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
-        asid = new_context(info, &mm->context.id);
-        atomic64_set(&mm->context.id, asid);
+        asid = new_context(info, pasid);
+        atomic64_set(pasid, asid);
     }

     if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
@@ -249,8 +270,14 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)

     atomic64_set(&active_asid(info, cpu), asid);
     raw_spin_unlock_irqrestore(&info->lock, flags);
+}
+
+void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
+{
+    if (system_supports_cnp())
+        cpu_set_reserved_ttbr0();

-switch_mm_fastpath:
+    asid_check_context(&asid_info, &mm->context.id, cpu);

     arm64_apply_bp_hardening();

From patchwork Thu Jun 20 13:06:04 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 167330
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall
Subject: [RFC v2 10/14] arm64/mm: Introduce a callback to flush the local context
Date: Thu, 20 Jun 2019 14:06:04 +0100
Message-Id: <20190620130608.17230-11-julien.grall@arm.com>
In-Reply-To: <20190620130608.17230-1-julien.grall@arm.com>
References: <20190620130608.17230-1-julien.grall@arm.com>

Flushing the local context will vary depending on the actual user of
the ASID allocator. Introduce a new callback to flush the local context
and move the call to flush the local TLB into it.

Signed-off-by: Julien Grall
---
 arch/arm64/mm/context.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

--
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index fbef5a5c5624..3df63a28856c 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -39,6 +39,8 @@ static struct asid_info
     cpumask_t       flush_pending;
     /* Number of ASID allocated by context (shift value) */
     unsigned int    ctxt_shift;
+    /* Callback to locally flush the context. */
+    void            (*flush_cpu_ctxt_cb)(void);
 } asid_info;

 #define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu)
@@ -266,7 +268,7 @@ static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
     }

     if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
-        local_flush_tlb_all();
+        info->flush_cpu_ctxt_cb();

     atomic64_set(&active_asid(info, cpu), asid);
     raw_spin_unlock_irqrestore(&info->lock, flags);
@@ -298,6 +300,11 @@ asmlinkage void post_ttbr_update_workaround(void)
             CONFIG_CAVIUM_ERRATUM_27456));
 }

+static void asid_flush_cpu_ctxt(void)
+{
+    local_flush_tlb_all();
+}
+
 /*
  * Initialize the ASID allocator
  *
@@ -308,10 +315,12 @@ asmlinkage void post_ttbr_update_workaround(void)
  * 2.
  */
 static int asid_allocator_init(struct asid_info *info,
-                   u32 bits, unsigned int asid_per_ctxt)
+                   u32 bits, unsigned int asid_per_ctxt,
+                   void (*flush_cpu_ctxt_cb)(void))
 {
     info->bits = bits;
     info->ctxt_shift = ilog2(asid_per_ctxt);
+    info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb;
     /*
      * Expect allocation after rollover to fail if we don't have at least
      * one more ASID than CPUs. ASID #0 is always reserved.
@@ -332,7 +341,8 @@ static int asids_init(void)
 {
     u32 bits = get_cpu_asid_bits();

-    if (!asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT))
+    if (!asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT,
+                 asid_flush_cpu_ctxt))
         panic("Unable to initialize ASID allocator for %lu ASIDs\n",
               1UL << bits);

From patchwork Thu Jun 20 13:06:05 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 167331
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall
Subject: [RFC v2 11/14] arm64: Move the ASID allocator code in a separate file
Date: Thu, 20 Jun 2019 14:06:05 +0100
Message-Id: <20190620130608.17230-12-julien.grall@arm.com>
In-Reply-To: <20190620130608.17230-1-julien.grall@arm.com>
References: <20190620130608.17230-1-julien.grall@arm.com>

We will want to re-use the ASID allocator in a separate context (e.g.
allocating VMIDs). So move the code to a new file.

The function asid_check_context has been moved into the header as a
static inline function because we want to avoid adding a branch when
checking if the ASID is still valid.

Signed-off-by: Julien Grall
---
    This code will be used in the virt code for allocating VMIDs. I am
    not entirely sure where to place it. Lib could potentially be a good
    place, but I am not entirely convinced that the algorithm as it is
    could be used by other architectures. Looking at x86, it seems that
    it will not be possible to re-use it because the number of PCIDs
    (aka ASIDs) could be smaller than the number of CPUs. See commit
    10af6235e0d327d42e1bad974385197817923dc1 ("x86/mm: Implement PCID
    based optimization: try to preserve old TLB entries using PCID").
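    To make the intended reuse concrete, here is a sketch of what a
    second in-kernel user of the library could look like with the API
    as it stands after this patch (flush callback only; the update
    callback arrives in a later patch). Every name containing "foo" is
    invented for illustration, not taken from the series:

    /* Hypothetical user of the library allocator; "foo" names are invented. */
    #include <linux/percpu.h>
    #include <asm/lib_asid.h>
    #include <asm/tlbflush.h>

    static DEFINE_PER_CPU(atomic64_t, active_foo_ids);
    static DEFINE_PER_CPU(u64, reserved_foo_ids);

    static struct asid_info foo_id_info;

    static void foo_flush_cpu_ctxt(void)
    {
        /* Invalidate whatever the local CPU may cache for stale IDs. */
        local_flush_tlb_all();
    }

    static int foo_ids_init(void)
    {
        /* 8-bit ID space, one ID handed out per context. */
        if (asid_allocator_init(&foo_id_info, 8, 1, foo_flush_cpu_ctxt))
            return -ENOMEM;

        foo_id_info.active = &active_foo_ids;
        foo_id_info.reserved = &reserved_foo_ids;
        return 0;
    }

    /*
     * On a context switch, a caller would then do something like:
     *    asid_check_context(&foo_id_info, &ctx->id, cpu);
     * with 'cpu' obtained via get_cpu().
     */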
Changes in v2: - Rename the header from asid.h to lib_asid.h --- arch/arm64/include/asm/lib_asid.h | 77 +++++++++++++ arch/arm64/lib/Makefile | 2 + arch/arm64/lib/asid.c | 185 ++++++++++++++++++++++++++++++ arch/arm64/mm/context.c | 235 +------------------------------------- 4 files changed, 267 insertions(+), 232 deletions(-) create mode 100644 arch/arm64/include/asm/lib_asid.h create mode 100644 arch/arm64/lib/asid.c -- 2.11.0 diff --git a/arch/arm64/include/asm/lib_asid.h b/arch/arm64/include/asm/lib_asid.h new file mode 100644 index 000000000000..c18e9eca500e --- /dev/null +++ b/arch/arm64/include/asm/lib_asid.h @@ -0,0 +1,77 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __ASM_ASM_LIB_ASID_H +#define __ASM_ASM_LIB_ASID_H + +#include +#include +#include +#include +#include + +struct asid_info +{ + atomic64_t generation; + unsigned long *map; + atomic64_t __percpu *active; + u64 __percpu *reserved; + u32 bits; + /* Lock protecting the structure */ + raw_spinlock_t lock; + /* Which CPU requires context flush on next call */ + cpumask_t flush_pending; + /* Number of ASID allocated by context (shift value) */ + unsigned int ctxt_shift; + /* Callback to locally flush the context. */ + void (*flush_cpu_ctxt_cb)(void); +}; + +#define NUM_ASIDS(info) (1UL << ((info)->bits)) +#define NUM_CTXT_ASIDS(info) (NUM_ASIDS(info) >> (info)->ctxt_shift) + +#define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu) + +void asid_new_context(struct asid_info *info, atomic64_t *pasid, + unsigned int cpu); + +/* + * Check the ASID is still valid for the context. If not generate a new ASID. + * + * @pasid: Pointer to the current ASID batch + * @cpu: current CPU ID. Must have been acquired throught get_cpu() + */ +static inline void asid_check_context(struct asid_info *info, + atomic64_t *pasid, unsigned int cpu) +{ + u64 asid, old_active_asid; + + asid = atomic64_read(pasid); + + /* + * The memory ordering here is subtle. + * If our active_asid is non-zero and the ASID matches the current + * generation, then we update the active_asid entry with a relaxed + * cmpxchg. Racing with a concurrent rollover means that either: + * + * - We get a zero back from the cmpxchg and end up waiting on the + * lock. Taking the lock synchronises with the rollover and so + * we are forced to see the updated generation. + * + * - We get a valid ASID back from the cmpxchg, which means the + * relaxed xchg in flush_context will treat us as reserved + * because atomic RmWs are totally ordered for a given location. 
+ */ + old_active_asid = atomic64_read(&active_asid(info, cpu)); + if (old_active_asid && + !((asid ^ atomic64_read(&info->generation)) >> info->bits) && + atomic64_cmpxchg_relaxed(&active_asid(info, cpu), + old_active_asid, asid)) + return; + + asid_new_context(info, pasid, cpu); +} + +int asid_allocator_init(struct asid_info *info, + u32 bits, unsigned int asid_per_ctxt, + void (*flush_cpu_ctxt_cb)(void)); + +#endif diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile index 33c2a4abda04..37169d541ab5 100644 --- a/arch/arm64/lib/Makefile +++ b/arch/arm64/lib/Makefile @@ -5,6 +5,8 @@ lib-y := clear_user.o delay.o copy_from_user.o \ memcmp.o strcmp.o strncmp.o strlen.o strnlen.o \ strchr.o strrchr.o tishift.o +lib-y += asid.o + ifeq ($(CONFIG_KERNEL_MODE_NEON), y) obj-$(CONFIG_XOR_BLOCKS) += xor-neon.o CFLAGS_REMOVE_xor-neon.o += -mgeneral-regs-only diff --git a/arch/arm64/lib/asid.c b/arch/arm64/lib/asid.c new file mode 100644 index 000000000000..7252e4fdd5e9 --- /dev/null +++ b/arch/arm64/lib/asid.c @@ -0,0 +1,185 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Generic ASID allocator. + * + * Based on arch/arm/mm/context.c + * + * Copyright (C) 2002-2003 Deep Blue Solutions Ltd, all rights reserved. + * Copyright (C) 2012 ARM Ltd. + */ + +#include + +#include + +#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu) + +#define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0)) +#define ASID_FIRST_VERSION(info) (1UL << ((info)->bits)) + +#define asid2idx(info, asid) (((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift) +#define idx2asid(info, idx) (((idx) << (info)->ctxt_shift) & ~ASID_MASK(info)) + +static void flush_context(struct asid_info *info) +{ + int i; + u64 asid; + + /* Update the list of reserved ASIDs and the ASID bitmap. */ + bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info)); + + for_each_possible_cpu(i) { + asid = atomic64_xchg_relaxed(&active_asid(info, i), 0); + /* + * If this CPU has already been through a + * rollover, but hasn't run another task in + * the meantime, we must preserve its reserved + * ASID, as this is the only trace we have of + * the process it is still running. + */ + if (asid == 0) + asid = reserved_asid(info, i); + __set_bit(asid2idx(info, asid), info->map); + reserved_asid(info, i) = asid; + } + + /* + * Queue a TLB invalidation for each CPU to perform on next + * context-switch + */ + cpumask_setall(&info->flush_pending); +} + +static bool check_update_reserved_asid(struct asid_info *info, u64 asid, + u64 newasid) +{ + int cpu; + bool hit = false; + + /* + * Iterate over the set of reserved ASIDs looking for a match. + * If we find one, then we can update our mm to use newasid + * (i.e. the same ASID in the current generation) but we can't + * exit the loop early, since we need to ensure that all copies + * of the old ASID are updated to reflect the mm. Failure to do + * so could result in us missing the reserved ASID in a future + * generation. + */ + for_each_possible_cpu(cpu) { + if (reserved_asid(info, cpu) == asid) { + hit = true; + reserved_asid(info, cpu) = newasid; + } + } + + return hit; +} + +static u64 new_context(struct asid_info *info, atomic64_t *pasid) +{ + static u32 cur_idx = 1; + u64 asid = atomic64_read(pasid); + u64 generation = atomic64_read(&info->generation); + + if (asid != 0) { + u64 newasid = generation | (asid & ~ASID_MASK(info)); + + /* + * If our current ASID was active during a rollover, we + * can continue to use it and this was just a false alarm. 
+ */ + if (check_update_reserved_asid(info, asid, newasid)) + return newasid; + + /* + * We had a valid ASID in a previous life, so try to re-use + * it if possible. + */ + if (!__test_and_set_bit(asid2idx(info, asid), info->map)) + return newasid; + } + + /* + * Allocate a free ASID. If we can't find one, take a note of the + * currently active ASIDs and mark the TLBs as requiring flushes. We + * always count from ASID #2 (index 1), as we use ASID #0 when setting + * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd + * pairs. + */ + asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx); + if (asid != NUM_CTXT_ASIDS(info)) + goto set_asid; + + /* We're out of ASIDs, so increment the global generation count */ + generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info), + &info->generation); + flush_context(info); + + /* We have more ASIDs than CPUs, so this will always succeed */ + asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1); + +set_asid: + __set_bit(asid, info->map); + cur_idx = asid; + return idx2asid(info, asid) | generation; +} + +/* + * Generate a new ASID for the context. + * + * @pasid: Pointer to the current ASID batch allocated. It will be updated + * with the new ASID batch. + * @cpu: current CPU ID. Must have been acquired through get_cpu() + */ +void asid_new_context(struct asid_info *info, atomic64_t *pasid, + unsigned int cpu) +{ + unsigned long flags; + u64 asid; + + raw_spin_lock_irqsave(&info->lock, flags); + /* Check that our ASID belongs to the current generation. */ + asid = atomic64_read(pasid); + if ((asid ^ atomic64_read(&info->generation)) >> info->bits) { + asid = new_context(info, pasid); + atomic64_set(pasid, asid); + } + + if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending)) + info->flush_cpu_ctxt_cb(); + + atomic64_set(&active_asid(info, cpu), asid); + raw_spin_unlock_irqrestore(&info->lock, flags); +} + +/* + * Initialize the ASID allocator + * + * @info: Pointer to the asid allocator structure + * @bits: Number of ASIDs available + * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are + * allocated contiguously for a given context. This value should be a power of + * 2. + */ +int asid_allocator_init(struct asid_info *info, + u32 bits, unsigned int asid_per_ctxt, + void (*flush_cpu_ctxt_cb)(void)) +{ + info->bits = bits; + info->ctxt_shift = ilog2(asid_per_ctxt); + info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb; + /* + * Expect allocation after rollover to fail if we don't have at least + * one more ASID than CPUs. ASID #0 is always reserved. + */ + WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus()); + atomic64_set(&info->generation, ASID_FIRST_VERSION(info)); + info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), + sizeof(*info->map), GFP_KERNEL); + if (!info->map) + return -ENOMEM; + + raw_spin_lock_init(&info->lock); + + return 0; +} diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 3df63a28856c..b745cf356fe1 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -23,46 +23,21 @@ #include #include +#include #include #include #include -static struct asid_info -{ - atomic64_t generation; - unsigned long *map; - atomic64_t __percpu *active; - u64 __percpu *reserved; - u32 bits; - raw_spinlock_t lock; - /* Which CPU requires context flush on next call */ - cpumask_t flush_pending; - /* Number of ASID allocated by context (shift value) */ - unsigned int ctxt_shift; - /* Callback to locally flush the context. 
*/ - void (*flush_cpu_ctxt_cb)(void); -} asid_info; - -#define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu) -#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu) - static DEFINE_PER_CPU(atomic64_t, active_asids); static DEFINE_PER_CPU(u64, reserved_asids); -#define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0)) -#define NUM_ASIDS(info) (1UL << ((info)->bits)) - -#define ASID_FIRST_VERSION(info) NUM_ASIDS(info) - #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 #define ASID_PER_CONTEXT 2 #else #define ASID_PER_CONTEXT 1 #endif -#define NUM_CTXT_ASIDS(info) (NUM_ASIDS(info) >> (info)->ctxt_shift) -#define asid2idx(info, asid) (((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift) -#define idx2asid(info, idx) (((idx) << (info)->ctxt_shift) & ~ASID_MASK(info)) +static struct asid_info asid_info; /* Get the ASIDBits supported by the current CPU */ static u32 get_cpu_asid_bits(void) @@ -102,178 +77,6 @@ void verify_cpu_asid_bits(void) } } -static void flush_context(struct asid_info *info) -{ - int i; - u64 asid; - - /* Update the list of reserved ASIDs and the ASID bitmap. */ - bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info)); - - for_each_possible_cpu(i) { - asid = atomic64_xchg_relaxed(&active_asid(info, i), 0); - /* - * If this CPU has already been through a - * rollover, but hasn't run another task in - * the meantime, we must preserve its reserved - * ASID, as this is the only trace we have of - * the process it is still running. - */ - if (asid == 0) - asid = reserved_asid(info, i); - __set_bit(asid2idx(info, asid), info->map); - reserved_asid(info, i) = asid; - } - - /* - * Queue a TLB invalidation for each CPU to perform on next - * context-switch - */ - cpumask_setall(&info->flush_pending); -} - -static bool check_update_reserved_asid(struct asid_info *info, u64 asid, - u64 newasid) -{ - int cpu; - bool hit = false; - - /* - * Iterate over the set of reserved ASIDs looking for a match. - * If we find one, then we can update our mm to use newasid - * (i.e. the same ASID in the current generation) but we can't - * exit the loop early, since we need to ensure that all copies - * of the old ASID are updated to reflect the mm. Failure to do - * so could result in us missing the reserved ASID in a future - * generation. - */ - for_each_possible_cpu(cpu) { - if (reserved_asid(info, cpu) == asid) { - hit = true; - reserved_asid(info, cpu) = newasid; - } - } - - return hit; -} - -static u64 new_context(struct asid_info *info, atomic64_t *pasid) -{ - static u32 cur_idx = 1; - u64 asid = atomic64_read(pasid); - u64 generation = atomic64_read(&info->generation); - - if (asid != 0) { - u64 newasid = generation | (asid & ~ASID_MASK(info)); - - /* - * If our current ASID was active during a rollover, we - * can continue to use it and this was just a false alarm. - */ - if (check_update_reserved_asid(info, asid, newasid)) - return newasid; - - /* - * We had a valid ASID in a previous life, so try to re-use - * it if possible. - */ - if (!__test_and_set_bit(asid2idx(info, asid), info->map)) - return newasid; - } - - /* - * Allocate a free ASID. If we can't find one, take a note of the - * currently active ASIDs and mark the TLBs as requiring flushes. We - * always count from ASID #2 (index 1), as we use ASID #0 when setting - * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd - * pairs. 
- */ - asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx); - if (asid != NUM_CTXT_ASIDS(info)) - goto set_asid; - - /* We're out of ASIDs, so increment the global generation count */ - generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info), - &info->generation); - flush_context(info); - - /* We have more ASIDs than CPUs, so this will always succeed */ - asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1); - -set_asid: - __set_bit(asid, info->map); - cur_idx = asid; - return idx2asid(info, asid) | generation; -} - -static void asid_new_context(struct asid_info *info, atomic64_t *pasid, - unsigned int cpu); - -/* - * Check the ASID is still valid for the context. If not generate a new ASID. - * - * @pasid: Pointer to the current ASID batch - * @cpu: current CPU ID. Must have been acquired throught get_cpu() - */ -static void asid_check_context(struct asid_info *info, - atomic64_t *pasid, unsigned int cpu) -{ - u64 asid, old_active_asid; - - asid = atomic64_read(pasid); - - /* - * The memory ordering here is subtle. - * If our active_asid is non-zero and the ASID matches the current - * generation, then we update the active_asid entry with a relaxed - * cmpxchg. Racing with a concurrent rollover means that either: - * - * - We get a zero back from the cmpxchg and end up waiting on the - * lock. Taking the lock synchronises with the rollover and so - * we are forced to see the updated generation. - * - * - We get a valid ASID back from the cmpxchg, which means the - * relaxed xchg in flush_context will treat us as reserved - * because atomic RmWs are totally ordered for a given location. - */ - old_active_asid = atomic64_read(&active_asid(info, cpu)); - if (old_active_asid && - !((asid ^ atomic64_read(&info->generation)) >> info->bits) && - atomic64_cmpxchg_relaxed(&active_asid(info, cpu), - old_active_asid, asid)) - return; - - asid_new_context(info, pasid, cpu); -} - -/* - * Generate a new ASID for the context. - * - * @pasid: Pointer to the current ASID batch allocated. It will be updated - * with the new ASID batch. - * @cpu: current CPU ID. Must have been acquired through get_cpu() - */ -static void asid_new_context(struct asid_info *info, atomic64_t *pasid, - unsigned int cpu) -{ - unsigned long flags; - u64 asid; - - raw_spin_lock_irqsave(&info->lock, flags); - /* Check that our ASID belongs to the current generation. */ - asid = atomic64_read(pasid); - if ((asid ^ atomic64_read(&info->generation)) >> info->bits) { - asid = new_context(info, pasid); - atomic64_set(pasid, asid); - } - - if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending)) - info->flush_cpu_ctxt_cb(); - - atomic64_set(&active_asid(info, cpu), asid); - raw_spin_unlock_irqrestore(&info->lock, flags); -} - void check_and_switch_context(struct mm_struct *mm, unsigned int cpu) { if (system_supports_cnp()) @@ -305,38 +108,6 @@ static void asid_flush_cpu_ctxt(void) local_flush_tlb_all(); } -/* - * Initialize the ASID allocator - * - * @info: Pointer to the asid allocator structure - * @bits: Number of ASIDs available - * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are - * allocated contiguously for a given context. This value should be a power of - * 2. 
-     */
-static int asid_allocator_init(struct asid_info *info,
-                   u32 bits, unsigned int asid_per_ctxt,
-                   void (*flush_cpu_ctxt_cb)(void))
-{
-    info->bits = bits;
-    info->ctxt_shift = ilog2(asid_per_ctxt);
-    info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb;
-    /*
-     * Expect allocation after rollover to fail if we don't have at least
-     * one more ASID than CPUs. ASID #0 is always reserved.
-     */
-    WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
-    atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
-    info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
-                sizeof(*info->map), GFP_KERNEL);
-    if (!info->map)
-        return -ENOMEM;
-
-    raw_spin_lock_init(&info->lock);
-
-    return 0;
-}
-
 static int asids_init(void)
 {
     u32 bits = get_cpu_asid_bits();
@@ -344,7 +115,7 @@ static int asids_init(void)
     if (!asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT,
                  asid_flush_cpu_ctxt))
         panic("Unable to initialize ASID allocator for %lu ASIDs\n",
-              1UL << bits);
+              NUM_ASIDS(&asid_info));

     asid_info.active = &active_asids;
     asid_info.reserved = &reserved_asids;

From patchwork Thu Jun 20 13:06:06 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 167332
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall
Subject: [RFC v2 12/14] arm64/lib: asid: Allow user to update the context under the lock
Date: Thu, 20 Jun 2019 14:06:06 +0100
Message-Id: <20190620130608.17230-13-julien.grall@arm.com>
In-Reply-To: <20190620130608.17230-1-julien.grall@arm.com>
References: <20190620130608.17230-1-julien.grall@arm.com>

Some users of the ASID allocator (e.g. VMID) will need to update the
context when a new ASID is generated. This has to be protected by a
lock to prevent concurrent modification.

Rather than introducing yet another lock, it is possible to re-use the
allocator lock for that purpose. This patch introduces a new callback
that will be called when updating the context.

Signed-off-by: Julien Grall
---
 arch/arm64/include/asm/lib_asid.h | 12 ++++++++----
 arch/arm64/lib/asid.c | 10 ++++++++--
 arch/arm64/mm/context.c | 11 ++++++++---
 3 files changed, 24 insertions(+), 9 deletions(-)

--
2.11.0

diff --git a/arch/arm64/include/asm/lib_asid.h b/arch/arm64/include/asm/lib_asid.h
index c18e9eca500e..810f0b05a8da 100644
--- a/arch/arm64/include/asm/lib_asid.h
+++ b/arch/arm64/include/asm/lib_asid.h
@@ -23,6 +23,8 @@ struct asid_info
     unsigned int    ctxt_shift;
     /* Callback to locally flush the context. */
     void            (*flush_cpu_ctxt_cb)(void);
+    /* Callback to call when a context is updated */
+    void            (*update_ctxt_cb)(void *ctxt);
 };

 #define NUM_ASIDS(info) (1UL << ((info)->bits))
@@ -31,7 +33,7 @@ struct asid_info
 #define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu)

 void asid_new_context(struct asid_info *info, atomic64_t *pasid,
-              unsigned int cpu);
+              unsigned int cpu, void *ctxt);

 /*
  * Check the ASID is still valid for the context. If not generate a new ASID.
@@ -40,7 +42,8 @@ void asid_new_context(struct asid_info *info, atomic64_t *pasid,
 * @cpu: current CPU ID.
Must have been acquired throught get_cpu() */ static inline void asid_check_context(struct asid_info *info, - atomic64_t *pasid, unsigned int cpu) + atomic64_t *pasid, unsigned int cpu, + void *ctxt) { u64 asid, old_active_asid; @@ -67,11 +70,12 @@ static inline void asid_check_context(struct asid_info *info, old_active_asid, asid)) return; - asid_new_context(info, pasid, cpu); + asid_new_context(info, pasid, cpu, ctxt); } int asid_allocator_init(struct asid_info *info, u32 bits, unsigned int asid_per_ctxt, - void (*flush_cpu_ctxt_cb)(void)); + void (*flush_cpu_ctxt_cb)(void), + void (*update_ctxt_cb)(void *ctxt)); #endif diff --git a/arch/arm64/lib/asid.c b/arch/arm64/lib/asid.c index 7252e4fdd5e9..dd2c6e4c1ff0 100644 --- a/arch/arm64/lib/asid.c +++ b/arch/arm64/lib/asid.c @@ -130,9 +130,10 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid) * @pasid: Pointer to the current ASID batch allocated. It will be updated * with the new ASID batch. * @cpu: current CPU ID. Must have been acquired through get_cpu() + * @ctxt: Context to update when calling update_context */ void asid_new_context(struct asid_info *info, atomic64_t *pasid, - unsigned int cpu) + unsigned int cpu, void *ctxt) { unsigned long flags; u64 asid; @@ -149,6 +150,9 @@ void asid_new_context(struct asid_info *info, atomic64_t *pasid, info->flush_cpu_ctxt_cb(); atomic64_set(&active_asid(info, cpu), asid); + + info->update_ctxt_cb(ctxt); + raw_spin_unlock_irqrestore(&info->lock, flags); } @@ -163,11 +167,13 @@ void asid_new_context(struct asid_info *info, atomic64_t *pasid, */ int asid_allocator_init(struct asid_info *info, u32 bits, unsigned int asid_per_ctxt, - void (*flush_cpu_ctxt_cb)(void)) + void (*flush_cpu_ctxt_cb)(void), + void (*update_ctxt_cb)(void *ctxt)) { info->bits = bits; info->ctxt_shift = ilog2(asid_per_ctxt); info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb; + info->update_ctxt_cb = update_ctxt_cb; /* * Expect allocation after rollover to fail if we don't have at least * one more ASID than CPUs. ASID #0 is always reserved. 
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index b745cf356fe1..527ea82983d7 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -82,7 +82,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
     if (system_supports_cnp())
         cpu_set_reserved_ttbr0();

-    asid_check_context(&asid_info, &mm->context.id, cpu);
+    asid_check_context(&asid_info, &mm->context.id, cpu, mm);

     arm64_apply_bp_hardening();

@@ -108,12 +108,17 @@ static void asid_flush_cpu_ctxt(void)
     local_flush_tlb_all();
 }

+static void asid_update_ctxt(void *ctxt)
+{
+    /* Nothing to do */
+}
+
 static int asids_init(void)
 {
     u32 bits = get_cpu_asid_bits();

-    if (!asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT,
-                 asid_flush_cpu_ctxt))
+    if (asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT,
+                asid_flush_cpu_ctxt, asid_update_ctxt))
         panic("Unable to initialize ASID allocator for %lu ASIDs\n",
               NUM_ASIDS(&asid_info));

From patchwork Thu Jun 20 13:06:07 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 167334
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall, Russell King
Subject: [RFC v2 13/14] arm/kvm: Introduce a new VMID allocator
Date: Thu, 20 Jun 2019 14:06:07 +0100
Message-Id: <20190620130608.17230-14-julien.grall@arm.com>
In-Reply-To: <20190620130608.17230-1-julien.grall@arm.com>
References: <20190620130608.17230-1-julien.grall@arm.com>

A follow-up patch will replace the KVM VMID allocator with the arm64
ASID allocator. To avoid duplication as much as possible, the arm KVM
code will directly compile arch/arm64/lib/asid.c. The header is a
verbatim copy, to avoid breaking the assumption that an architecture
port has self-contained headers.

Signed-off-by: Julien Grall
Cc: Russell King
---
    I hit a warning when compiling the ASID code:

    linux/arch/arm/kvm/../../arm64/lib/asid.c:17: warning: "ASID_MASK" redefined
    #define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0))

    In file included from linux/include/linux/mm_types.h:18,
    from linux/include/linux/mmzone.h:21,
    from linux/include/linux/gfp.h:6,
    from linux/include/linux/slab.h:15,
    from linux/arch/arm/kvm/../../arm64/lib/asid.c:11:
    linux/arch/arm/include/asm/mmu.h:26: note: this is the location of the previous definition
    #define ASID_MASK ((~0ULL) << ASID_BITS)

    I haven't resolved it yet because I am not sure of the best way to
    go. AFAICT ASID_MASK is only used in mm/context.c, so I am wondering
    whether it would be acceptable to move the define there.

Changes in v2:
    - Re-use arm64/lib/asid.c rather than duplicating the code.
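    For illustration only, the resolution floated above could take the
    following shape; this is a sketch of the idea, not a change made by
    this series:

    /*
     * Sketch: arm's bare-name ASID_MASK moves out of <asm/mmu.h> and
     * into its only user, arch/arm/mm/context.c, so the two
     * definitions never meet in a single translation unit.
     */

    /* arch/arm/mm/context.c, after the hypothetical move: */
    #define ASID_MASK       ((~0ULL) << ASID_BITS)

    /* arch/arm64/lib/asid.c keeps its parameterised macro unchanged: */
    #define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0))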
--- arch/arm/include/asm/lib_asid.h | 81 +++++++++++++++++++++++++++++++++++++++++ arch/arm/kvm/Makefile | 1 + 2 files changed, 82 insertions(+) create mode 100644 arch/arm/include/asm/lib_asid.h -- 2.11.0 diff --git a/arch/arm/include/asm/lib_asid.h b/arch/arm/include/asm/lib_asid.h new file mode 100644 index 000000000000..79bce4686d21 --- /dev/null +++ b/arch/arm/include/asm/lib_asid.h @@ -0,0 +1,81 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __ARM_LIB_ASID_H__ +#define __ARM_LIB_ASID_H__ + +#include +#include +#include +#include +#include + +struct asid_info +{ + atomic64_t generation; + unsigned long *map; + atomic64_t __percpu *active; + u64 __percpu *reserved; + u32 bits; + /* Lock protecting the structure */ + raw_spinlock_t lock; + /* Which CPU requires context flush on next call */ + cpumask_t flush_pending; + /* Number of ASID allocated by context (shift value) */ + unsigned int ctxt_shift; + /* Callback to locally flush the context. */ + void (*flush_cpu_ctxt_cb)(void); + /* Callback to call when a context is updated */ + void (*update_ctxt_cb)(void *ctxt); +}; + +#define NUM_ASIDS(info) (1UL << ((info)->bits)) +#define NUM_CTXT_ASIDS(info) (NUM_ASIDS(info) >> (info)->ctxt_shift) + +#define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu) + +void asid_new_context(struct asid_info *info, atomic64_t *pasid, + unsigned int cpu, void *ctxt); + +/* + * Check the ASID is still valid for the context. If not generate a new ASID. + * + * @pasid: Pointer to the current ASID batch + * @cpu: current CPU ID. Must have been acquired throught get_cpu() + */ +static inline void asid_check_context(struct asid_info *info, + atomic64_t *pasid, unsigned int cpu, + void *ctxt) +{ + u64 asid, old_active_asid; + + asid = atomic64_read(pasid); + + /* + * The memory ordering here is subtle. + * If our active_asid is non-zero and the ASID matches the current + * generation, then we update the active_asid entry with a relaxed + * cmpxchg. Racing with a concurrent rollover means that either: + * + * - We get a zero back from the cmpxchg and end up waiting on the + * lock. Taking the lock synchronises with the rollover and so + * we are forced to see the updated generation. + * + * - We get a valid ASID back from the cmpxchg, which means the + * relaxed xchg in flush_context will treat us as reserved + * because atomic RmWs are totally ordered for a given location. 
+     */
+    old_active_asid = atomic64_read(&active_asid(info, cpu));
+    if (old_active_asid &&
+        !((asid ^ atomic64_read(&info->generation)) >> info->bits) &&
+        atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
+                     old_active_asid, asid))
+        return;
+
+    asid_new_context(info, pasid, cpu, ctxt);
+}
+
+int asid_allocator_init(struct asid_info *info,
+            u32 bits, unsigned int asid_per_ctxt,
+            void (*flush_cpu_ctxt_cb)(void),
+            void (*update_ctxt_cb)(void *ctxt));
+
+#endif /* __ARM_LIB_ASID_H__ */
diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
index 531e59f5be9c..6ab49bd84531 100644
--- a/arch/arm/kvm/Makefile
+++ b/arch/arm/kvm/Makefile
@@ -40,3 +40,4 @@ obj-y += $(KVM)/arm/vgic/vgic-its.o
 obj-y += $(KVM)/arm/vgic/vgic-debug.o
 obj-y += $(KVM)/irqchip.o
 obj-y += $(KVM)/arm/arch_timer.o
+obj-y += ../../arm64/lib/asid.o

From patchwork Thu Jun 20 13:06:08 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 167333
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall
Subject: [RFC v2 14/14] kvm/arm: Align the VMID allocation with the arm64 ASID one
Date: Thu, 20 Jun 2019 14:06:08 +0100
Message-Id: <20190620130608.17230-15-julien.grall@arm.com>
In-Reply-To: <20190620130608.17230-1-julien.grall@arm.com>
References: <20190620130608.17230-1-julien.grall@arm.com>

At the moment, the VMID algorithm will send an SGI to all the CPUs to
force an exit and then broadcast a full TLB flush and I-Cache
invalidation.

This patch re-uses the new ASID allocator. The benefits are:
    - CPUs are not forced to exit at roll-over. Instead the VMID will
    be marked reserved and the context will be flushed at next exit.
    This will reduce the IPI traffic.
    - Context invalidation is now per-CPU rather than broadcast.

With the new algorithm, the code is adapted as follows:
    - The function __kvm_flush_vm_context() has been renamed to
    __kvm_flush_cpu_vmid_context() and now only flushes the current CPU
    context.
    - The call to update_vttbr() will be done with preemption disabled,
    as the new algorithm requires storing information per-CPU.
    - The TLBs associated with EL1 will be flushed when booting a CPU,
    to deal with stale information. This was previously done on the
    allocation of the first VMID of a new generation.

The measurement was made on a Seattle-based SoC (8 CPUs), with the
number of VMIDs limited to 4 bits. The test involves running
concurrently 40 guests with 2 vCPUs. Each guest will then execute
hackbench 5 times before exiting.

The performance differences between the current algorithm and the new
one are:
    - 2.5% fewer exits from the guest
    - 22.4% more flushes, although they are now local rather than
    broadcast
    - 0.11% faster (just for the record)

Signed-off-by: Julien Grall
---
    Looking at __kvm_flush_vm_context(), it might be possible to reduce
    the overhead further by removing the I-Cache flush for caches other
    than VIPT. This has been left aside for now.
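    The diff below is cut off before the body of vmid_update_ctxt
    completes, so purely as an illustration of the direction taken: the
    generation lives in the upper bits of the 64-bit id handed out by
    the allocator and never reaches the hardware, while the VMID field
    is just the low bits. A plausible shape for the update callback,
    assuming the existing kvm_get_vmid_bits() helper, would be:

    /* Illustrative sketch, not the patch's verbatim code. */
    static void vmid_update_ctxt(void *ctxt)
    {
        struct kvm_vmid *vmid = ctxt;
        u64 asid = atomic64_read(&vmid->asid);

        /* Keep only the bits the hardware VMID field can hold. */
        vmid->vmid = asid & ((1ULL << kvm_get_vmid_bits()) - 1);
    }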
---
 arch/arm/include/asm/kvm_asm.h    |   2 +-
 arch/arm/include/asm/kvm_host.h   |   5 +-
 arch/arm/include/asm/kvm_hyp.h    |   1 +
 arch/arm/kvm/hyp/tlb.c            |   8 +--
 arch/arm64/include/asm/kvm_asid.h |   8 +++
 arch/arm64/include/asm/kvm_asm.h  |   2 +-
 arch/arm64/include/asm/kvm_host.h |   5 +-
 arch/arm64/kvm/hyp/tlb.c          |  10 ++--
 virt/kvm/arm/arm.c                | 112 +++++++++++++-------------------
 9 files changed, 61 insertions(+), 92 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_asid.h

-- 
2.11.0

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index f615830f9f57..c2a2e6ef1e2f 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -53,7 +53,7 @@ struct kvm_vcpu;
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
 
-extern void __kvm_flush_vm_context(void);
+extern void __kvm_flush_cpu_vmid_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index f80418ddeb60..7b894ff16688 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -50,8 +50,8 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
 void kvm_reset_coprocs(struct kvm_vcpu *vcpu);
 
 struct kvm_vmid {
-	/* The VMID generation used for the virt. memory system */
-	u64    vmid_gen;
+	/* The ASID used for the ASID allocator */
+	atomic64_t asid;
 	u32    vmid;
 };
 
@@ -259,7 +259,6 @@ unsigned long __kvm_call_hyp(void *hypfn, ...);
 		ret;						\
 	})
 
-void force_vm_exit(const cpumask_t *mask);
 int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
 			      struct kvm_vcpu_events *events);
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 87bcd18df8d5..c3d1011ca1bf 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -75,6 +75,7 @@
 #define TLBIALLIS	__ACCESS_CP15(c8, 0, c3, 0)
 #define TLBIALL		__ACCESS_CP15(c8, 0, c7, 0)
 #define TLBIALLNSNHIS	__ACCESS_CP15(c8, 4, c3, 4)
+#define TLBIALLNSNH	__ACCESS_CP15(c8, 4, c7, 4)
 #define PRRR		__ACCESS_CP15(c10, 0, c2, 0)
 #define NMRR		__ACCESS_CP15(c10, 0, c2, 1)
 #define AMAIR0		__ACCESS_CP15(c10, 0, c3, 0)
diff --git a/arch/arm/kvm/hyp/tlb.c b/arch/arm/kvm/hyp/tlb.c
index 8e4afba73635..42b9ab47fc94 100644
--- a/arch/arm/kvm/hyp/tlb.c
+++ b/arch/arm/kvm/hyp/tlb.c
@@ -71,9 +71,9 @@ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
 	write_sysreg(0, VTTBR);
 }
 
-void __hyp_text __kvm_flush_vm_context(void)
+void __hyp_text __kvm_flush_cpu_vmid_context(void)
 {
-	write_sysreg(0, TLBIALLNSNHIS);
-	write_sysreg(0, ICIALLUIS);
-	dsb(ish);
+	write_sysreg(0, TLBIALLNSNH);
+	write_sysreg(0, ICIALLU);
+	dsb(nsh);
 }
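The hunk above is the crux of the 32-bit change: because each CPU now
flushes itself the first time it runs a guest after a roll-over, the
maintenance no longer has to reach the inner-shareable domain. A
hypothetical helper (illustrative only; flush_vmid_context() is not in the
patch, while the sysreg names are the ones defined in the kvm_hyp.h hunk
above) making the scope choice explicit:

/*
 * Illustration only: the old code used the inner-shareable (broadcast)
 * operations, the new code the local ones, because roll-over cleanup is
 * now deferred to each CPU's next guest entry.
 */
#include <linux/types.h>
#include <asm/kvm_hyp.h>

static void flush_vmid_context(bool local)
{
	if (local) {
		write_sysreg(0, TLBIALLNSNH);	/* this CPU's TLB only */
		write_sysreg(0, ICIALLU);	/* this CPU's I-cache only */
		dsb(nsh);			/* non-shareable barrier */
	} else {
		write_sysreg(0, TLBIALLNSNHIS);	/* broadcast, inner shareable */
		write_sysreg(0, ICIALLUIS);
		dsb(ish);
	}
}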
diff --git a/arch/arm64/include/asm/kvm_asid.h b/arch/arm64/include/asm/kvm_asid.h
new file mode 100644
index 000000000000..8b586e43c094
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_asid.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ARM64_KVM_ASID_H__
+#define __ARM64_KVM_ASID_H__
+
+#include <asm/asid.h>
+
+#endif /* __ARM64_KVM_ASID_H__ */
+
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ff73f5462aca..06821f548c0f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -62,7 +62,7 @@ extern char __kvm_hyp_init_end[];
 
 extern char __kvm_hyp_vector[];
 
-extern void __kvm_flush_vm_context(void);
+extern void __kvm_flush_cpu_vmid_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4bcd9c1291d5..7ef45b7da4eb 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -68,8 +68,8 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext);
 void __extended_idmap_trampoline(phys_addr_t boot_pgd, phys_addr_t idmap_start);
 
 struct kvm_vmid {
-	/* The VMID generation used for the virt. memory system */
-	u64    vmid_gen;
+	/* The ASID used for the ASID allocator */
+	atomic64_t asid;
 	u32    vmid;
 };
 
@@ -478,7 +478,6 @@ u64 __kvm_call_hyp(void *hypfn, ...);
 		ret;						\
 	})
 
-void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 
 int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index 76c30866069e..e80e922988c1 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -200,10 +200,10 @@ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
 	__tlb_switch_to_host()(kvm, &cxt);
 }
 
-void __hyp_text __kvm_flush_vm_context(void)
+void __hyp_text __kvm_flush_cpu_vmid_context(void)
 {
-	dsb(ishst);
-	__tlbi(alle1is);
-	asm volatile("ic ialluis" : : );
-	dsb(ish);
+	dsb(nshst);
+	__tlbi(alle1);
+	asm volatile("ic iallu" : : );
+	dsb(nsh);
 }
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index bd5c55916d0d..e906278a67cd 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -32,6 +32,7 @@
 #include
 #include
 #include
+#include <asm/kvm_asid.h>
 #include
 #include
 #include
@@ -50,10 +51,10 @@ static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 
 /* Per-CPU variable containing the currently running vcpu. */
 static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu);
 
-/* The VMID used in the VTTBR */
-static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
-static u32 kvm_next_vmid;
-static DEFINE_SPINLOCK(kvm_vmid_lock);
+static DEFINE_PER_CPU(atomic64_t, active_vmids);
+static DEFINE_PER_CPU(u64, reserved_vmids);
+
+struct asid_info vmid_info;
 
 static bool vgic_present;
 
@@ -128,9 +129,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 
 	kvm_vgic_early_init(kvm);
 
-	/* Mark the initial VMID generation invalid */
-	kvm->arch.vmid.vmid_gen = 0;
-
 	/* The maximum number of VCPUs is limited by the host's GIC model */
 	kvm->arch.max_vcpus = vgic_present ?
 				kvm_vgic_get_max_vcpus() : KVM_MAX_VCPUS;
@@ -449,35 +447,17 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
 	return vcpu_mode_priv(vcpu);
 }
 
-/* Just ensure a guest exit from a particular CPU */
-static void exit_vm_noop(void *info)
+static void vmid_flush_cpu_ctxt(void)
 {
+	kvm_call_hyp(__kvm_flush_cpu_vmid_context);
 }
 
-void force_vm_exit(const cpumask_t *mask)
+static void vmid_update_ctxt(void *ctxt)
 {
-	preempt_disable();
-	smp_call_function_many(mask, exit_vm_noop, NULL, true);
-	preempt_enable();
-}
+	struct kvm_vmid *vmid = ctxt;
+	u64 asid = atomic64_read(&vmid->asid);
 
-/**
- * need_new_vmid_gen - check that the VMID is still valid
- * @vmid: The VMID to check
- *
- * return true if there is a new generation of VMIDs being used
- *
- * The hardware supports a limited set of values with the value zero reserved
- * for the host, so we check if an assigned value belongs to a previous
- * generation, which requires us to assign a new value. If we're the
If we're the - * first to use a VMID for the new generation, we must flush necessary caches - * and TLBs on all CPUs. - */ -static bool need_new_vmid_gen(struct kvm_vmid *vmid) -{ - u64 current_vmid_gen = atomic64_read(&kvm_vmid_gen); - smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */ - return unlikely(READ_ONCE(vmid->vmid_gen) != current_vmid_gen); + vmid->vmid = asid & ((1ULL << kvm_get_vmid_bits()) - 1); } /** @@ -487,48 +467,11 @@ static bool need_new_vmid_gen(struct kvm_vmid *vmid) */ static void update_vmid(struct kvm_vmid *vmid) { - if (!need_new_vmid_gen(vmid)) - return; - - spin_lock(&kvm_vmid_lock); - - /* - * We need to re-check the vmid_gen here to ensure that if another vcpu - * already allocated a valid vmid for this vm, then this vcpu should - * use the same vmid. - */ - if (!need_new_vmid_gen(vmid)) { - spin_unlock(&kvm_vmid_lock); - return; - } - - /* First user of a new VMID generation? */ - if (unlikely(kvm_next_vmid == 0)) { - atomic64_inc(&kvm_vmid_gen); - kvm_next_vmid = 1; - - /* - * On SMP we know no other CPUs can use this CPU's or each - * other's VMID after force_vm_exit returns since the - * kvm_vmid_lock blocks them from reentry to the guest. - */ - force_vm_exit(cpu_all_mask); - /* - * Now broadcast TLB + ICACHE invalidation over the inner - * shareable domain to make sure all data structures are - * clean. - */ - kvm_call_hyp(__kvm_flush_vm_context); - } + int cpu = get_cpu(); - vmid->vmid = kvm_next_vmid; - kvm_next_vmid++; - kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1; + asid_check_context(&vmid_info, &vmid->asid, cpu, vmid); - smp_wmb(); - WRITE_ONCE(vmid->vmid_gen, atomic64_read(&kvm_vmid_gen)); - - spin_unlock(&kvm_vmid_lock); + put_cpu(); } static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu) @@ -682,8 +625,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) */ cond_resched(); - update_vmid(&vcpu->kvm->arch.vmid); - check_vcpu_requests(vcpu); /* @@ -693,6 +634,15 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) */ preempt_disable(); + /* + * The ASID/VMID allocator only tracks active VMIDs per + * physical CPU, and therefore the VMID allocated may not be + * preserved on VMID roll-over if the task was preempted, + * making a thread's VMID inactive. So we need to call + * update_vttbr in non-premptible context. + */ + update_vmid(&vcpu->kvm->arch.vmid); + kvm_pmu_flush_hwstate(vcpu); local_irq_disable(); @@ -731,8 +681,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) */ smp_store_mb(vcpu->mode, IN_GUEST_MODE); - if (ret <= 0 || need_new_vmid_gen(&vcpu->kvm->arch.vmid) || - kvm_request_pending(vcpu)) { + if (ret <= 0 || kvm_request_pending(vcpu)) { vcpu->mode = OUTSIDE_GUEST_MODE; isb(); /* Ensure work in x_flush_hwstate is committed */ kvm_pmu_sync_hwstate(vcpu); @@ -1322,6 +1271,8 @@ static void cpu_init_hyp_mode(void *dummy) __cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr); __cpu_init_stage2(); + + kvm_call_hyp(__kvm_flush_cpu_vmid_context); } static void cpu_hyp_reset(void) @@ -1429,6 +1380,17 @@ static inline void hyp_cpu_pm_exit(void) static int init_common_resources(void) { + /* + * Initialize the ASID allocator telling it to allocate a single + * VMID per VM. + */ + if (asid_allocator_init(&vmid_info, kvm_get_vmid_bits(), 1, + vmid_flush_cpu_ctxt, vmid_update_ctxt)) + panic("Failed to initialize VMID allocator\n"); + + vmid_info.active = &active_vmids; + vmid_info.reserved = &reserved_vmids; + kvm_set_ipa_limit(); return 0;