From patchwork Wed Jul 24 16:25:20 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 169634
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
    suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
    Julien Grall
Subject: [PATCH v3 01/15] arm64/mm: Introduce asid_info structure and move asid_generation/asid_map to it
Date: Wed, 24 Jul 2019 17:25:20 +0100
Message-Id: <20190724162534.7390-2-julien.grall@arm.com>
In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com>
References: <20190724162534.7390-1-julien.grall@arm.com>

In an attempt to make the ASID allocator generic, create a new structure
asid_info to store all the information necessary for the allocator. For now,
move the variables asid_generation and asid_map to the new structure
asid_info. Follow-up patches will move more variables.

Note: to avoid more renaming afterwards, a local variable 'info' has been
created; it is a pointer to the ASID allocator structure.

Signed-off-by: Julien Grall

---
Changes in v2:
    - Turn asid_info into a static variable
---
 arch/arm64/mm/context.c | 46 ++++++++++++++++++++++++++--------------------
 1 file changed, 26 insertions(+), 20 deletions(-)

--
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index b5e329fde2dd..b0789f30d03b 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -19,8 +19,11 @@
 static u32 asid_bits;
 static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
 
-static atomic64_t asid_generation;
-static unsigned long *asid_map;
+static struct asid_info
+{
+	atomic64_t generation;
+	unsigned long *map;
+} asid_info;
 
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
@@ -77,13 +80,13 @@ void verify_cpu_asid_bits(void)
 	}
 }
 
-static void flush_context(void)
+static void flush_context(struct asid_info *info)
 {
 	int i;
 	u64 asid;
 
 	/* Update the list of reserved ASIDs and the ASID bitmap. */
-	bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
+	bitmap_clear(info->map, 0, NUM_USER_ASIDS);
 
 	for_each_possible_cpu(i) {
 		asid = atomic64_xchg_relaxed(&per_cpu(active_asids, i), 0);
@@ -96,7 +99,7 @@ static void flush_context(void)
 		 */
 		if (asid == 0)
 			asid = per_cpu(reserved_asids, i);
-		__set_bit(asid2idx(asid), asid_map);
+		__set_bit(asid2idx(asid), info->map);
 		per_cpu(reserved_asids, i) = asid;
 	}
 
@@ -131,11 +134,11 @@ static bool check_update_reserved_asid(u64 asid, u64 newasid)
 	return hit;
 }
 
-static u64 new_context(struct mm_struct *mm)
+static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 {
 	static u32 cur_idx = 1;
 	u64 asid = atomic64_read(&mm->context.id);
-	u64 generation = atomic64_read(&asid_generation);
+	u64 generation = atomic64_read(&info->generation);
 
 	if (asid != 0) {
 		u64 newasid = generation | (asid & ~ASID_MASK);
@@ -151,7 +154,7 @@ static u64 new_context(struct mm_struct *mm)
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
 		 */
-		if (!__test_and_set_bit(asid2idx(asid), asid_map))
+		if (!__test_and_set_bit(asid2idx(asid), info->map))
 			return newasid;
 	}
 
@@ -162,20 +165,20 @@ static u64 new_context(struct mm_struct *mm)
 	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
 	 * pairs.
 	 */
-	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
+	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, cur_idx);
 	if (asid != NUM_USER_ASIDS)
 		goto set_asid;
 
 	/* We're out of ASIDs, so increment the global generation count */
 	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION,
-						 &asid_generation);
-	flush_context();
+						 &info->generation);
+	flush_context(info);
 
 	/* We have more ASIDs than CPUs, so this will always succeed */
-	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
+	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, 1);
 
 set_asid:
-	__set_bit(asid, asid_map);
+	__set_bit(asid, info->map);
 	cur_idx = asid;
 	return idx2asid(asid) | generation;
 }
@@ -184,6 +187,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 {
 	unsigned long flags;
 	u64 asid, old_active_asid;
+	struct asid_info *info = &asid_info;
 
 	if (system_supports_cnp())
 		cpu_set_reserved_ttbr0();
@@ -206,7 +210,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	 */
 	old_active_asid = atomic64_read(&per_cpu(active_asids, cpu));
 	if (old_active_asid &&
-	    !((asid ^ atomic64_read(&asid_generation)) >> asid_bits) &&
+	    !((asid ^ atomic64_read(&info->generation)) >> asid_bits) &&
 	    atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu),
 				     old_active_asid, asid))
 		goto switch_mm_fastpath;
@@ -214,8 +218,8 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
-	if ((asid ^ atomic64_read(&asid_generation)) >> asid_bits) {
-		asid = new_context(mm);
+	if ((asid ^ atomic64_read(&info->generation)) >> asid_bits) {
+		asid = new_context(info, mm);
 		atomic64_set(&mm->context.id, asid);
 	}
 
@@ -248,16 +252,18 @@ asmlinkage void post_ttbr_update_workaround(void)
 
 static int asids_init(void)
 {
+	struct asid_info *info = &asid_info;
+
 	asid_bits = get_cpu_asid_bits();
 	/*
 	 * Expect allocation after rollover to fail if we don't have at least
 	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
 	 */
 	WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus());
-	atomic64_set(&asid_generation, ASID_FIRST_VERSION);
-	asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*asid_map),
-			   GFP_KERNEL);
-	if (!asid_map)
+	atomic64_set(&info->generation, ASID_FIRST_VERSION);
+	info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*info->map),
+			    GFP_KERNEL);
+	if (!info->map)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
 		      NUM_USER_ASIDS);
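[For readers new to the allocator being refactored in this series: the
underlying scheme is a generation counter combined with a bitmap. An ASID
value is (generation | index); when the bitmap fills up, the generation is
bumped and the bitmap is cleared. The following toy user-space model
illustrates only that scheme; all names and sizes are invented for
illustration and the real logic is in new_context() above.]

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Tiny ASID space so roll-over is easy to see. */
#define TOY_BITS 4
#define TOY_NUM  (1u << TOY_BITS)

static uint64_t toy_generation = TOY_NUM; /* ASID_FIRST_VERSION analogue */
static unsigned char toy_map[TOY_NUM];    /* stands in for the bitmap    */

static uint64_t toy_new_context(void)
{
	for (unsigned int i = 1; i < TOY_NUM; i++) { /* ASID #0 stays reserved */
		if (!toy_map[i]) {
			toy_map[i] = 1;
			return toy_generation | i;
		}
	}
	/* Out of ASIDs: bump the generation and "flush" everything. */
	toy_generation += TOY_NUM;
	memset(toy_map, 0, sizeof(toy_map));
	toy_map[1] = 1;
	return toy_generation | 1;
}

int main(void)
{
	for (int i = 0; i < 18; i++) /* crosses one roll-over */
		printf("asid %#llx\n", (unsigned long long)toy_new_context());
	return 0;
}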
From patchwork Wed Jul 24 16:25:21 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 169635
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
    suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
    Julien Grall
Subject: [PATCH v3 02/15] arm64/mm: Move active_asids and reserved_asids to asid_info
Date: Wed, 24 Jul 2019 17:25:21 +0100
Message-Id: <20190724162534.7390-3-julien.grall@arm.com>
In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com>
References: <20190724162534.7390-1-julien.grall@arm.com>

The variables active_asids and reserved_asids hold information for a
given ASID allocator. So move them to the structure asid_info.

At the same time, introduce wrappers to access the active and reserved
ASIDs to make the code clearer.

Signed-off-by: Julien Grall

---
 arch/arm64/mm/context.c | 34 ++++++++++++++++++++++------------
 1 file changed, 22 insertions(+), 12 deletions(-)

--
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index b0789f30d03b..3de028803284 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -23,10 +23,16 @@ static struct asid_info
 {
 	atomic64_t generation;
 	unsigned long *map;
+	atomic64_t __percpu *active;
+	u64 __percpu *reserved;
 } asid_info;
 
+#define active_asid(info, cpu)   *per_cpu_ptr((info)->active, cpu)
+#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu)
+
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
+
 static cpumask_t tlb_flush_pending;
 
 #define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
@@ -89,7 +95,7 @@ static void flush_context(struct asid_info *info)
 	bitmap_clear(info->map, 0, NUM_USER_ASIDS);
 
 	for_each_possible_cpu(i) {
-		asid = atomic64_xchg_relaxed(&per_cpu(active_asids, i), 0);
+		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
 		/*
 		 * If this CPU has already been through a
 		 * rollover, but hasn't run another task in
@@ -98,9 +104,9 @@ static void flush_context(struct asid_info *info)
 		 * the process it is still running.
 		 */
 		if (asid == 0)
-			asid = per_cpu(reserved_asids, i);
+			asid = reserved_asid(info, i);
 		__set_bit(asid2idx(asid), info->map);
-		per_cpu(reserved_asids, i) = asid;
+		reserved_asid(info, i) = asid;
 	}
 
 	/*
@@ -110,7 +116,8 @@ static void flush_context(struct asid_info *info)
 	cpumask_setall(&tlb_flush_pending);
 }
 
-static bool check_update_reserved_asid(u64 asid, u64 newasid)
+static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
+				       u64 newasid)
 {
 	int cpu;
 	bool hit = false;
@@ -125,9 +132,9 @@ static bool check_update_reserved_asid(u64 asid, u64 newasid)
 	 * generation.
 	 */
 	for_each_possible_cpu(cpu) {
-		if (per_cpu(reserved_asids, cpu) == asid) {
+		if (reserved_asid(info, cpu) == asid) {
 			hit = true;
-			per_cpu(reserved_asids, cpu) = newasid;
+			reserved_asid(info, cpu) = newasid;
 		}
 	}
 
@@ -147,7 +154,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 		 * If our current ASID was active during a rollover, we
 		 * can continue to use it and this was just a false alarm.
 		 */
-		if (check_update_reserved_asid(asid, newasid))
+		if (check_update_reserved_asid(info, asid, newasid))
 			return newasid;
 
 		/*
@@ -196,8 +203,8 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 
 	/*
 	 * The memory ordering here is subtle.
-	 * If our active_asids is non-zero and the ASID matches the current
-	 * generation, then we update the active_asids entry with a relaxed
+	 * If our active_asid is non-zero and the ASID matches the current
+	 * generation, then we update the active_asid entry with a relaxed
 	 * cmpxchg. Racing with a concurrent rollover means that either:
 	 *
 	 * - We get a zero back from the cmpxchg and end up waiting on the
@@ -208,10 +215,10 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	 * relaxed xchg in flush_context will treat us as reserved
 	 * because atomic RmWs are totally ordered for a given location.
 	 */
-	old_active_asid = atomic64_read(&per_cpu(active_asids, cpu));
+	old_active_asid = atomic64_read(&active_asid(info, cpu));
 	if (old_active_asid &&
 	    !((asid ^ atomic64_read(&info->generation)) >> asid_bits) &&
-	    atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu),
+	    atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
 				     old_active_asid, asid))
 		goto switch_mm_fastpath;
 
@@ -226,7 +233,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
 		local_flush_tlb_all();
 
-	atomic64_set(&per_cpu(active_asids, cpu), asid);
+	atomic64_set(&active_asid(info, cpu), asid);
 	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
 
 switch_mm_fastpath:
@@ -267,6 +274,9 @@ static int asids_init(void)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
 		      NUM_USER_ASIDS);
 
+	info->active = &active_asids;
+	info->reserved = &reserved_asids;
+
 	pr_info("ASID allocator initialised with %lu entries\n", NUM_USER_ASIDS);
 	return 0;
 }
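[A side note on the wrappers introduced above: because asid_info now holds
pointers to the per-CPU storage instead of referencing the global per-CPU
variables directly, the same accessor works for any allocator instance. The
user-space sketch below shows only that indirection pattern, with plain
arrays standing in for the kernel's per-CPU variables; all names are
hypothetical.]

#include <stdint.h>
#include <stdio.h>

#define DEMO_NR_CPUS 4

/* Stand-ins for DEFINE_PER_CPU(...) variables. */
static uint64_t demo_active[DEMO_NR_CPUS];
static uint64_t demo_reserved[DEMO_NR_CPUS];

/* The allocator keeps only pointers, so any backing storage works. */
struct demo_asid_info {
	uint64_t *active;
	uint64_t *reserved;
};

#define demo_active_asid(info, cpu)   (*((info)->active + (cpu)))
#define demo_reserved_asid(info, cpu) (*((info)->reserved + (cpu)))

int main(void)
{
	struct demo_asid_info info = {
		.active = demo_active,
		.reserved = demo_reserved,
	};

	demo_active_asid(&info, 1) = 42;
	demo_reserved_asid(&info, 1) = demo_active_asid(&info, 1);
	printf("cpu1 reserved ASID: %llu\n",
	       (unsigned long long)demo_reserved_asid(&info, 1));
	return 0;
}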
From patchwork Wed Jul 24 16:25:22 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 169621
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
    suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
    Julien Grall
Subject: [PATCH v3 03/15] arm64/mm: Move bits to asid_info
Date: Wed, 24 Jul 2019 17:25:22 +0100
Message-Id: <20190724162534.7390-4-julien.grall@arm.com>
In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com>
References: <20190724162534.7390-1-julien.grall@arm.com>

The variable bits holds information for a given ASID allocator. So move
it to the asid_info structure.

Because most of the macros were relying on bits, they now take an extra
parameter that is a pointer to the asid_info structure.

Signed-off-by: Julien Grall

---
 arch/arm64/mm/context.c | 59 +++++++++++++++++++++++++------------------------
 1 file changed, 30 insertions(+), 29 deletions(-)

--
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 3de028803284..49fff350e12f 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -16,7 +16,6 @@
 #include
 #include
 
-static u32 asid_bits;
 static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
 
 static struct asid_info
@@ -25,6 +24,7 @@ static struct asid_info
 	unsigned long *map;
 	atomic64_t __percpu *active;
 	u64 __percpu *reserved;
+	u32 bits;
 } asid_info;
 
 #define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu)
@@ -35,17 +35,17 @@ static DEFINE_PER_CPU(u64, reserved_asids);
 
 static cpumask_t tlb_flush_pending;
 
-#define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
-#define ASID_FIRST_VERSION	(1UL << asid_bits)
+#define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
+#define ASID_FIRST_VERSION(info)	(1UL << ((info)->bits))
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-#define NUM_USER_ASIDS		(ASID_FIRST_VERSION >> 1)
-#define asid2idx(asid)		(((asid) & ~ASID_MASK) >> 1)
-#define idx2asid(idx)		(((idx) << 1) & ~ASID_MASK)
+#define NUM_USER_ASIDS(info)	(ASID_FIRST_VERSION(info) >> 1)
+#define asid2idx(info, asid)	(((asid) & ~ASID_MASK(info)) >> 1)
+#define idx2asid(info, idx)	(((idx) << 1) & ~ASID_MASK(info))
 #else
-#define NUM_USER_ASIDS		(ASID_FIRST_VERSION)
-#define asid2idx(asid)		((asid) & ~ASID_MASK)
-#define idx2asid(idx)		asid2idx(idx)
+#define NUM_USER_ASIDS(info)	(ASID_FIRST_VERSION(info))
+#define asid2idx(info, asid)	((asid) & ~ASID_MASK(info))
+#define idx2asid(info, idx)	asid2idx(info, idx)
 #endif
 
 /* Get the ASIDBits supported by the current CPU */
@@ -75,13 +75,13 @@ void verify_cpu_asid_bits(void)
 {
 	u32 asid = get_cpu_asid_bits();
 
-	if (asid < asid_bits) {
+	if (asid < asid_info.bits) {
 		/*
 		 * We cannot decrease the ASID size at runtime, so panic if we support
 		 * fewer ASID bits than the boot CPU.
 		 */
 		pr_crit("CPU%d: smaller ASID size(%u) than boot CPU (%u)\n",
-			smp_processor_id(), asid, asid_bits);
+			smp_processor_id(), asid, asid_info.bits);
 		cpu_panic_kernel();
 	}
 }
@@ -92,7 +92,7 @@ static void flush_context(struct asid_info *info)
 	u64 asid;
 
 	/* Update the list of reserved ASIDs and the ASID bitmap. */
-	bitmap_clear(info->map, 0, NUM_USER_ASIDS);
+	bitmap_clear(info->map, 0, NUM_USER_ASIDS(info));
 
 	for_each_possible_cpu(i) {
 		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
@@ -105,7 +105,7 @@ static void flush_context(struct asid_info *info)
 		 */
 		if (asid == 0)
 			asid = reserved_asid(info, i);
-		__set_bit(asid2idx(asid), info->map);
+		__set_bit(asid2idx(info, asid), info->map);
 		reserved_asid(info, i) = asid;
 	}
 
@@ -148,7 +148,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 	u64 generation = atomic64_read(&info->generation);
 
 	if (asid != 0) {
-		u64 newasid = generation | (asid & ~ASID_MASK);
+		u64 newasid = generation | (asid & ~ASID_MASK(info));
 
 		/*
 		 * If our current ASID was active during a rollover, we
@@ -161,7 +161,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
 		 */
-		if (!__test_and_set_bit(asid2idx(asid), info->map))
+		if (!__test_and_set_bit(asid2idx(info, asid), info->map))
 			return newasid;
 	}
 
@@ -172,22 +172,22 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
 	 * pairs.
 	 */
-	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, cur_idx);
-	if (asid != NUM_USER_ASIDS)
+	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), cur_idx);
+	if (asid != NUM_USER_ASIDS(info))
 		goto set_asid;
 
 	/* We're out of ASIDs, so increment the global generation count */
-	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION,
+	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info),
 						 &info->generation);
 	flush_context(info);
 
 	/* We have more ASIDs than CPUs, so this will always succeed */
-	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, 1);
+	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), 1);
 
 set_asid:
 	__set_bit(asid, info->map);
 	cur_idx = asid;
-	return idx2asid(asid) | generation;
+	return idx2asid(info, asid) | generation;
 }
 
@@ -217,7 +217,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	 */
 	old_active_asid = atomic64_read(&active_asid(info, cpu));
 	if (old_active_asid &&
-	    !((asid ^ atomic64_read(&info->generation)) >> asid_bits) &&
+	    !((asid ^ atomic64_read(&info->generation)) >> info->bits) &&
 	    atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
 				     old_active_asid, asid))
 		goto switch_mm_fastpath;
@@ -225,7 +225,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
-	if ((asid ^ atomic64_read(&info->generation)) >> asid_bits) {
+	if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
 		asid = new_context(info, mm);
 		atomic64_set(&mm->context.id, asid);
 	}
@@ -261,23 +261,24 @@ static int asids_init(void)
 {
 	struct asid_info *info = &asid_info;
 
-	asid_bits = get_cpu_asid_bits();
+	info->bits = get_cpu_asid_bits();
 	/*
 	 * Expect allocation after rollover to fail if we don't have at least
 	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
 	 */
-	WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus());
-	atomic64_set(&info->generation, ASID_FIRST_VERSION);
-	info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*info->map),
-			    GFP_KERNEL);
+	WARN_ON(NUM_USER_ASIDS(info) - 1 <= num_possible_cpus());
+	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
+	info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)),
+			    sizeof(*info->map), GFP_KERNEL);
 	if (!info->map)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
-		      NUM_USER_ASIDS);
+		      NUM_USER_ASIDS(info));
 
 	info->active = &active_asids;
 	info->reserved = &reserved_asids;
 
-	pr_info("ASID allocator initialised with %lu entries\n", NUM_USER_ASIDS);
+	pr_info("ASID allocator initialised with %lu entries\n",
+		NUM_USER_ASIDS(info));
 	return 0;
 }
 early_initcall(asids_init);
From patchwork Wed Jul 24 16:25:23 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 169622
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
    suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
    Julien Grall
Subject: [PATCH v3 04/15] arm64/mm: Move the variable lock and tlb_flush_pending to asid_info
Date: Wed, 24 Jul 2019 17:25:23 +0100
Message-Id: <20190724162534.7390-5-julien.grall@arm.com>
In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com>
References: <20190724162534.7390-1-julien.grall@arm.com>

The variables lock and tlb_flush_pending hold information for a given
ASID allocator. So move them to the asid_info structure.

Signed-off-by: Julien Grall

---
 arch/arm64/mm/context.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

--
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 49fff350e12f..b50f52a09baf 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -16,8 +16,6 @@
 #include
 #include
 
-static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
-
 static struct asid_info
 {
 	atomic64_t generation;
@@ -25,6 +23,9 @@ static struct asid_info
 	atomic64_t __percpu *active;
 	u64 __percpu *reserved;
 	u32 bits;
+	raw_spinlock_t lock;
+	/* Which CPU requires context flush on next call */
+	cpumask_t flush_pending;
 } asid_info;
 
 #define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu)
@@ -33,8 +34,6 @@ static struct asid_info
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
 
-static cpumask_t tlb_flush_pending;
-
 #define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
 #define ASID_FIRST_VERSION(info)	(1UL << ((info)->bits))
 
@@ -113,7 +112,7 @@ static void flush_context(struct asid_info *info)
 	 * Queue a TLB invalidation for each CPU to perform on next
 	 * context-switch
 	 */
-	cpumask_setall(&tlb_flush_pending);
+	cpumask_setall(&info->flush_pending);
 }
 
 static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
@@ -222,7 +221,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 				     old_active_asid, asid))
 		goto switch_mm_fastpath;
 
-	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
+	raw_spin_lock_irqsave(&info->lock, flags);
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
 	if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
@@ -230,11 +229,11 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 		atomic64_set(&mm->context.id, asid);
 	}
 
-	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
+	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
 		local_flush_tlb_all();
 
 	atomic64_set(&active_asid(info, cpu), asid);
-	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
+	raw_spin_unlock_irqrestore(&info->lock, flags);
 
 switch_mm_fastpath:
 
@@ -277,6 +276,8 @@ static int asids_init(void)
 	info->active = &active_asids;
 	info->reserved = &reserved_asids;
 
+	raw_spin_lock_init(&info->lock);
+
 	pr_info("ASID allocator initialised with %lu entries\n",
 		NUM_USER_ASIDS(info));
 	return 0;
From patchwork Wed Jul 24 16:25:24 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 169623
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
    suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
    Julien Grall
Subject: [PATCH v3 05/15] arm64/mm: Remove dependency on MM in new_context
Date: Wed, 24 Jul 2019 17:25:24 +0100
Message-Id: <20190724162534.7390-6-julien.grall@arm.com>
In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com>
References: <20190724162534.7390-1-julien.grall@arm.com>

The function new_context will be part of a generic ASID allocator. At
the moment, the MM structure is only used to fetch the ASID.

To remove the dependency on MM, it is possible to just pass a pointer to
the current ASID.

Signed-off-by: Julien Grall

---
 arch/arm64/mm/context.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index b50f52a09baf..dfb0da35a541 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -140,10 +140,10 @@ static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
 	return hit;
 }
 
-static u64 new_context(struct asid_info *info, struct mm_struct *mm)
+static u64 new_context(struct asid_info *info, atomic64_t *pasid)
 {
 	static u32 cur_idx = 1;
-	u64 asid = atomic64_read(&mm->context.id);
+	u64 asid = atomic64_read(pasid);
 	u64 generation = atomic64_read(&info->generation);
 
 	if (asid != 0) {
@@ -225,7 +225,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
 	if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
-		asid = new_context(info, mm);
+		asid = new_context(info, &mm->context.id);
 		atomic64_set(&mm->context.id, asid);
 	}
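[To illustrate what the atomic64_t-based signature buys: any object that
stores its current ASID in an atomic64_t can now use the allocator, not just
an mm_struct. The user-space sketch below shows only the calling convention;
all demo names are invented and the allocation logic is elided.]

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct demo_asid_info {
	_Atomic uint64_t generation;
};

/* Same shape as the patched new_context(): the allocator core only sees
 * an atomic holding the current ASID, not the object that owns it. */
static uint64_t demo_new_context(struct demo_asid_info *info,
				 _Atomic uint64_t *pasid)
{
	uint64_t asid = atomic_load(pasid);
	uint64_t generation = atomic_load(&info->generation);

	/* (real bitmap/rollover logic elided) */
	return generation | (asid & 0xffff);
}

struct demo_mm { _Atomic uint64_t id; };   /* stand-in for mm->context.id  */
struct demo_vm { _Atomic uint64_t vmid; }; /* hypothetical non-MM consumer */

int main(void)
{
	struct demo_asid_info info = { .generation = 0x10000 };
	struct demo_mm mm = { .id = 7 };
	struct demo_vm vm = { .vmid = 9 };

	printf("%llx\n", (unsigned long long)demo_new_context(&info, &mm.id));
	printf("%llx\n", (unsigned long long)demo_new_context(&info, &vm.vmid));
	return 0;
}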
From patchwork Wed Jul 24 16:25:25 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 169633
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
    suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
    Julien Grall
Subject: [PATCH v3 06/15] arm64/mm: Store the number of asid allocated per context
Date: Wed, 24 Jul 2019 17:25:25 +0100
Message-Id: <20190724162534.7390-7-julien.grall@arm.com>
In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com>
References: <20190724162534.7390-1-julien.grall@arm.com>

Currently, the number of ASIDs allocated per context is determined at
compilation time. As the algorithm is becoming generic, the user may
want to instantiate the ASID allocator multiple times with a different
number of ASIDs allocated.

Add a field in asid_info to track the number of ASIDs allocated per
context. This is stored as a shift amount to avoid division in the code.
This means the number of ASIDs allocated per context should be a power
of two.

At the same time, rename NUM_USER_ASIDS to NUM_CTXT_ASIDS to make the
name more generic.

Signed-off-by: Julien Grall

---
 arch/arm64/mm/context.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

--
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index dfb0da35a541..2e1e495cd1d8 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -26,6 +26,8 @@ static struct asid_info
 	raw_spinlock_t lock;
 	/* Which CPU requires context flush on next call */
 	cpumask_t flush_pending;
+	/* Number of ASID allocated by context (shift value) */
+	unsigned int ctxt_shift;
 } asid_info;
 
 #define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu)
@@ -38,15 +40,15 @@ static DEFINE_PER_CPU(u64, reserved_asids);
 #define ASID_FIRST_VERSION(info)	(1UL << ((info)->bits))
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-#define NUM_USER_ASIDS(info)	(ASID_FIRST_VERSION(info) >> 1)
-#define asid2idx(info, asid)	(((asid) & ~ASID_MASK(info)) >> 1)
-#define idx2asid(info, idx)	(((idx) << 1) & ~ASID_MASK(info))
+#define ASID_PER_CONTEXT	2
 #else
-#define NUM_USER_ASIDS(info)	(ASID_FIRST_VERSION(info))
-#define asid2idx(info, asid)	((asid) & ~ASID_MASK(info))
-#define idx2asid(info, idx)	asid2idx(info, idx)
+#define ASID_PER_CONTEXT	1
 #endif
 
+#define NUM_CTXT_ASIDS(info)	(ASID_FIRST_VERSION(info) >> (info)->ctxt_shift)
+#define asid2idx(info, asid)	(((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift)
+#define idx2asid(info, idx)	(((idx) << (info)->ctxt_shift) & ~ASID_MASK(info))
+
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
 {
@@ -91,7 +93,7 @@ static void flush_context(struct asid_info *info)
 	u64 asid;
 
 	/* Update the list of reserved ASIDs and the ASID bitmap. */
-	bitmap_clear(info->map, 0, NUM_USER_ASIDS(info));
+	bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info));
 
 	for_each_possible_cpu(i) {
 		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
@@ -171,8 +173,8 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid)
 	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
 	 * pairs.
 	 */
-	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), cur_idx);
-	if (asid != NUM_USER_ASIDS(info))
+	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx);
+	if (asid != NUM_CTXT_ASIDS(info))
 		goto set_asid;
@@ -181,7 +183,7 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid)
 	flush_context(info);
 
 	/* We have more ASIDs than CPUs, so this will always succeed */
-	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), 1);
+	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1);
 
 set_asid:
 	__set_bit(asid, info->map);
@@ -261,17 +263,18 @@ static int asids_init(void)
 	struct asid_info *info = &asid_info;
 
 	info->bits = get_cpu_asid_bits();
+	info->ctxt_shift = ilog2(ASID_PER_CONTEXT);
 	/*
 	 * Expect allocation after rollover to fail if we don't have at least
 	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
 	 */
-	WARN_ON(NUM_USER_ASIDS(info) - 1 <= num_possible_cpus());
+	WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
 	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
-	info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)),
+	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
 			    sizeof(*info->map), GFP_KERNEL);
 	if (!info->map)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
-		      NUM_USER_ASIDS(info));
+		      NUM_CTXT_ASIDS(info));
 
 	info->active = &active_asids;
 	info->reserved = &reserved_asids;
@@ -279,7 +282,7 @@ static int asids_init(void)
 	raw_spin_lock_init(&info->lock);
 
 	pr_info("ASID allocator initialised with %lu entries\n",
-		NUM_USER_ASIDS(info));
+		NUM_CTXT_ASIDS(info));
 	return 0;
 }
 early_initcall(asids_init);
From patchwork Wed Jul 24 16:25:26 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 169624
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
    suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
    Julien Grall
Subject: [PATCH v3 07/15] arm64/mm: Introduce NUM_ASIDS
Date: Wed, 24 Jul 2019 17:25:26 +0100
Message-Id: <20190724162534.7390-8-julien.grall@arm.com>
In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com>
References: <20190724162534.7390-1-julien.grall@arm.com>

At the moment, ASID_FIRST_VERSION is used to know the number of ASIDs
supported. As we are going to move the ASID allocator into a separate
file, it would be better to use a different name for external users.

This patch adds NUM_ASIDS and implements ASID_FIRST_VERSION using it.

Signed-off-by: Julien Grall

---
 arch/arm64/mm/context.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 2e1e495cd1d8..3b40ac4a2541 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -37,7 +37,9 @@ static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
 
 #define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
-#define ASID_FIRST_VERSION(info)	(1UL << ((info)->bits))
+#define NUM_ASIDS(info)			(1UL << ((info)->bits))
+
+#define ASID_FIRST_VERSION(info)	NUM_ASIDS(info)
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 #define ASID_PER_CONTEXT	2
@@ -45,7 +47,7 @@ static DEFINE_PER_CPU(u64, reserved_asids);
 #define ASID_PER_CONTEXT	1
 #endif
 
-#define NUM_CTXT_ASIDS(info)	(ASID_FIRST_VERSION(info) >> (info)->ctxt_shift)
+#define NUM_CTXT_ASIDS(info)	(NUM_ASIDS(info) >> (info)->ctxt_shift)
 #define asid2idx(info, asid)	(((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift)
 #define idx2asid(info, idx)	(((idx) << (info)->ctxt_shift) & ~ASID_MASK(info))
[209.132.180.67]) by mx.google.com with ESMTP id i2si16890351pfb.206.2019.07.24.09.26.34; Wed, 24 Jul 2019 09:26:34 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728951AbfGXQ0d (ORCPT + 29 others); Wed, 24 Jul 2019 12:26:33 -0400 Received: from foss.arm.com ([217.140.110.172]:43396 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728874AbfGXQ0A (ORCPT ); Wed, 24 Jul 2019 12:26:00 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A2BBB28; Wed, 24 Jul 2019 09:25:59 -0700 (PDT) Received: from e108454-lin.cambridge.arm.com (e108454-lin.cambridge.arm.com [10.1.196.50]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4DC6E3F71F; Wed, 24 Jul 2019 09:25:58 -0700 (PDT) From: Julien Grall To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall Subject: [PATCH v3 08/15] arm64/mm: Split asid_inits in 2 parts Date: Wed, 24 Jul 2019 17:25:27 +0100 Message-Id: <20190724162534.7390-9-julien.grall@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com> References: <20190724162534.7390-1-julien.grall@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Move out the common initialization of the ASID allocator in a separate function. Signed-off-by: Julien Grall --- Changes in v3: - Allow bisection (asid_allocator_init() return 0 on success not error!). --- arch/arm64/mm/context.c | 43 +++++++++++++++++++++++++++++++------------ 1 file changed, 31 insertions(+), 12 deletions(-) -- 2.11.0 diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 3b40ac4a2541..27e328fffdb1 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -260,31 +260,50 @@ asmlinkage void post_ttbr_update_workaround(void) CONFIG_CAVIUM_ERRATUM_27456)); } -static int asids_init(void) +/* + * Initialize the ASID allocator + * + * @info: Pointer to the asid allocator structure + * @bits: Number of ASIDs available + * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are + * allocated contiguously for a given context. This value should be a power of + * 2. + */ +static int asid_allocator_init(struct asid_info *info, + u32 bits, unsigned int asid_per_ctxt) { - struct asid_info *info = &asid_info; - - info->bits = get_cpu_asid_bits(); - info->ctxt_shift = ilog2(ASID_PER_CONTEXT); + info->bits = bits; + info->ctxt_shift = ilog2(asid_per_ctxt); /* * Expect allocation after rollover to fail if we don't have at least - * one more ASID than CPUs. ASID #0 is reserved for init_mm. + * one more ASID than CPUs. ASID #0 is always reserved. 
*/ WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus()); atomic64_set(&info->generation, ASID_FIRST_VERSION(info)); info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), sizeof(*info->map), GFP_KERNEL); if (!info->map) - panic("Failed to allocate bitmap for %lu ASIDs\n", - NUM_CTXT_ASIDS(info)); - - info->active = &active_asids; - info->reserved = &reserved_asids; + return -ENOMEM; raw_spin_lock_init(&info->lock); + return 0; +} + +static int asids_init(void) +{ + u32 bits = get_cpu_asid_bits(); + + if (asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT)) + panic("Unable to initialize ASID allocator for %lu ASIDs\n", + 1UL << bits); + + asid_info.active = &active_asids; + asid_info.reserved = &reserved_asids; + pr_info("ASID allocator initialised with %lu entries\n", - NUM_CTXT_ASIDS(info)); + NUM_CTXT_ASIDS(&asid_info)); + return 0; } early_initcall(asids_init);
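An editorial sketch of the reuse this split enables (all names below are invented; the series only adds a real second user, VMIDs, in patches 13-15). At this point in the series the allocator still takes two parameters besides the info pointer:

static struct asid_info vmid_info;		/* hypothetical second instance */
static DEFINE_PER_CPU(atomic64_t, active_vmids);
static DEFINE_PER_CPU(u64, reserved_vmids);

static int vmids_init(void)
{
	/* 8 ID bits and one ID per context, purely for illustration */
	int ret = asid_allocator_init(&vmid_info, 8, 1);

	if (ret)
		return ret;

	vmid_info.active = &active_vmids;
	vmid_info.reserved = &reserved_vmids;
	return 0;
}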
From patchwork Wed Jul 24 16:25:28 2019 X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 169630 From: Julien Grall To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall Subject: [PATCH v3 09/15] arm64/mm: Split the function check_and_switch_context in 3 parts Date: Wed, 24 Jul 2019 17:25:28 +0100 Message-Id: <20190724162534.7390-10-julien.grall@arm.com> In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com> References: <20190724162534.7390-1-julien.grall@arm.com> The function check_and_switch_context is used to: 1) Check whether the ASID is still valid 2) Generate a new one if it is not valid 3) Switch the context While the latter is specific to the MM subsystem, the rest could be part of the generic ASID allocator. After this patch, the function is split in 3 parts, which correspond to the following functions: 1) asid_check_context: Check if the ASID is still valid 2) asid_new_context: Generate a new ASID for the context 3) check_and_switch_context: Call 1) and 2) and switch the context 1) and 2) have not been merged into a single function because we want to avoid adding a branch when the ASID is still valid. This will matter when the code is moved into a separate file later on, as 1) will reside in the header as a static inline function. Signed-off-by: Julien Grall --- Will wants to avoid adding a branch when the ASID is still valid, so 1) and 2) are in separate functions. The former will move to a new header and be made static inline.
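Condensed from the diff that follows, an editorial sketch of the resulting call structure; the common case costs one relaxed cmpxchg in the inline fast path and takes no extra branch, while the slow path stays out of line:

void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
{
	if (system_supports_cnp())
		cpu_set_reserved_ttbr0();

	/* 1) + 2): fast path inline; falls back to asid_new_context() */
	asid_check_context(&asid_info, &mm->context.id, cpu);

	arm64_apply_bp_hardening();
	/* 3): the MM-specific context switch itself follows here */
}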
--- arch/arm64/mm/context.c | 51 +++++++++++++++++++++++++++++++++++++------------ 1 file changed, 39 insertions(+), 12 deletions(-) -- 2.11.0 diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 27e328fffdb1..5e8b381ab67f 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -193,16 +193,21 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid) return idx2asid(info, asid) | generation; } -void check_and_switch_context(struct mm_struct *mm, unsigned int cpu) +static void asid_new_context(struct asid_info *info, atomic64_t *pasid, + unsigned int cpu); + +/* + * Check the ASID is still valid for the context. If not generate a new ASID. + * + * @pasid: Pointer to the current ASID batch + * @cpu: current CPU ID. Must have been acquired throught get_cpu() + */ +static void asid_check_context(struct asid_info *info, + atomic64_t *pasid, unsigned int cpu) { - unsigned long flags; u64 asid, old_active_asid; - struct asid_info *info = &asid_info; - if (system_supports_cnp()) - cpu_set_reserved_ttbr0(); - - asid = atomic64_read(&mm->context.id); + asid = atomic64_read(pasid); /* * The memory ordering here is subtle. @@ -223,14 +228,30 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu) !((asid ^ atomic64_read(&info->generation)) >> info->bits) && atomic64_cmpxchg_relaxed(&active_asid(info, cpu), old_active_asid, asid)) - goto switch_mm_fastpath; + return; + + asid_new_context(info, pasid, cpu); +} + +/* + * Generate a new ASID for the context. + * + * @pasid: Pointer to the current ASID batch allocated. It will be updated + * with the new ASID batch. + * @cpu: current CPU ID. Must have been acquired through get_cpu() + */ +static void asid_new_context(struct asid_info *info, atomic64_t *pasid, + unsigned int cpu) +{ + unsigned long flags; + u64 asid; raw_spin_lock_irqsave(&info->lock, flags); /* Check that our ASID belongs to the current generation. 
*/ - asid = atomic64_read(&mm->context.id); + asid = atomic64_read(pasid); if ((asid ^ atomic64_read(&info->generation)) >> info->bits) { - asid = new_context(info, &mm->context.id); - atomic64_set(&mm->context.id, asid); + asid = new_context(info, pasid); + atomic64_set(pasid, asid); } if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending)) @@ -238,8 +259,14 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu) atomic64_set(&active_asid(info, cpu), asid); raw_spin_unlock_irqrestore(&info->lock, flags); +} + +void check_and_switch_context(struct mm_struct *mm, unsigned int cpu) +{ + if (system_supports_cnp()) + cpu_set_reserved_ttbr0(); -switch_mm_fastpath: + asid_check_context(&asid_info, &mm->context.id, cpu); arm64_apply_bp_hardening();
From patchwork Wed Jul 24 16:25:29 2019 X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 169625 From: Julien Grall To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall Subject: [PATCH v3 10/15] arm64/mm: Introduce a callback to flush the local context Date: Wed, 24 Jul 2019 17:25:29 +0100 Message-Id: <20190724162534.7390-11-julien.grall@arm.com> In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com> References: <20190724162534.7390-1-julien.grall@arm.com> Flushing the local context will vary depending on the actual user of the ASID allocator. Introduce a new callback to flush the local context and move the call to flush the local TLB into it. Signed-off-by: Julien Grall --- arch/arm64/mm/context.c | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) -- 2.11.0 diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 5e8b381ab67f..ac10893b403c 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -28,6 +28,8 @@ static struct asid_info cpumask_t flush_pending; /* Number of ASID allocated by context (shift value) */ unsigned int ctxt_shift; + /* Callback to locally flush the context. */ + void (*flush_cpu_ctxt_cb)(void); } asid_info; #define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu) @@ -255,7 +257,7 @@ static void asid_new_context(struct asid_info *info, atomic64_t *pasid, } if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending)) - local_flush_tlb_all(); + info->flush_cpu_ctxt_cb(); atomic64_set(&active_asid(info, cpu), asid); raw_spin_unlock_irqrestore(&info->lock, flags); @@ -287,6 +289,11 @@ asmlinkage void post_ttbr_update_workaround(void) CONFIG_CAVIUM_ERRATUM_27456)); } +static void asid_flush_cpu_ctxt(void) +{ + local_flush_tlb_all(); +} + /* * Initialize the ASID allocator * @@ -297,10 +304,12 @@ asmlinkage void post_ttbr_update_workaround(void) * 2.
*/ static int asid_allocator_init(struct asid_info *info, - u32 bits, unsigned int asid_per_ctxt) + u32 bits, unsigned int asid_per_ctxt, + void (*flush_cpu_ctxt_cb)(void)) { info->bits = bits; info->ctxt_shift = ilog2(asid_per_ctxt); + info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb; /* * Expect allocation after rollover to fail if we don't have at least * one more ASID than CPUs. ASID #0 is always reserved. @@ -321,7 +330,8 @@ static int asids_init(void) { u32 bits = get_cpu_asid_bits(); - if (asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT)) + if (asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT, + asid_flush_cpu_ctxt)) panic("Unable to initialize ASID allocator for %lu ASIDs\n", 1UL << bits);
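An editorial sketch of what the callback indirection buys (invented names; patch 15 does this for real with a local stage-2 flush): a non-MM user can substitute its own notion of "flush the local context":

static void vmid_flush_cpu_ctxt(void)	/* hypothetical VMID user */
{
	/* a stage-2 TLB invalidation local to this CPU would go here */
}

static int vmids_init(void)
{
	return asid_allocator_init(&vmid_info, 8, 1, vmid_flush_cpu_ctxt);
}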
From patchwork Wed Jul 24 16:25:30 2019 X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 169631 From: Julien Grall To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall Subject: [PATCH v3 11/15] arm64: Move the ASID allocator code in a separate file Date: Wed, 24 Jul 2019 17:25:30 +0100 Message-Id: <20190724162534.7390-12-julien.grall@arm.com> In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com> References: <20190724162534.7390-1-julien.grall@arm.com> We will want to re-use the ASID allocator in a separate context (e.g. VMID allocation), so move the code into a new file. The function asid_check_context has been moved into the header as a static inline function, because we want to avoid adding a branch when checking if the ASID is still valid. Signed-off-by: Julien Grall --- This code will be used in the virt code for allocating VMIDs. I am not entirely sure where to place it. Lib could potentially be a good place, but I am not entirely convinced the algorithm as it is could be used by other architectures. Looking at x86, it seems that it will not be possible to re-use it because the number of PCIDs (aka ASIDs) could be smaller than the number of CPUs. See the message of commit 10af6235e0d327d42e1bad974385197817923dc1 "x86/mm: Implement PCID based optimization: try to preserve old TLB entries using PCID".
Changes in v3: - Correctly move ASID_FIRST_VERSION to the new file Changes in v2: - Rename the header from asid.h to lib_asid.h --- arch/arm64/include/asm/lib_asid.h | 77 +++++++++++++ arch/arm64/lib/Makefile | 2 + arch/arm64/lib/asid.c | 185 ++++++++++++++++++++++++++++++ arch/arm64/mm/context.c | 235 +------------------------------------- 4 files changed, 267 insertions(+), 232 deletions(-) create mode 100644 arch/arm64/include/asm/lib_asid.h create mode 100644 arch/arm64/lib/asid.c -- 2.11.0 diff --git a/arch/arm64/include/asm/lib_asid.h b/arch/arm64/include/asm/lib_asid.h new file mode 100644 index 000000000000..c18e9eca500e --- /dev/null +++ b/arch/arm64/include/asm/lib_asid.h @@ -0,0 +1,77 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __ASM_ASM_LIB_ASID_H +#define __ASM_ASM_LIB_ASID_H + +#include +#include +#include +#include +#include + +struct asid_info +{ + atomic64_t generation; + unsigned long *map; + atomic64_t __percpu *active; + u64 __percpu *reserved; + u32 bits; + /* Lock protecting the structure */ + raw_spinlock_t lock; + /* Which CPU requires context flush on next call */ + cpumask_t flush_pending; + /* Number of ASID allocated by context (shift value) */ + unsigned int ctxt_shift; + /* Callback to locally flush the context. */ + void (*flush_cpu_ctxt_cb)(void); +}; + +#define NUM_ASIDS(info) (1UL << ((info)->bits)) +#define NUM_CTXT_ASIDS(info) (NUM_ASIDS(info) >> (info)->ctxt_shift) + +#define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu) + +void asid_new_context(struct asid_info *info, atomic64_t *pasid, + unsigned int cpu); + +/* + * Check the ASID is still valid for the context. If not generate a new ASID. + * + * @pasid: Pointer to the current ASID batch + * @cpu: current CPU ID. Must have been acquired throught get_cpu() + */ +static inline void asid_check_context(struct asid_info *info, + atomic64_t *pasid, unsigned int cpu) +{ + u64 asid, old_active_asid; + + asid = atomic64_read(pasid); + + /* + * The memory ordering here is subtle. + * If our active_asid is non-zero and the ASID matches the current + * generation, then we update the active_asid entry with a relaxed + * cmpxchg. Racing with a concurrent rollover means that either: + * + * - We get a zero back from the cmpxchg and end up waiting on the + * lock. Taking the lock synchronises with the rollover and so + * we are forced to see the updated generation. + * + * - We get a valid ASID back from the cmpxchg, which means the + * relaxed xchg in flush_context will treat us as reserved + * because atomic RmWs are totally ordered for a given location. 
+ */ + old_active_asid = atomic64_read(&active_asid(info, cpu)); + if (old_active_asid && + !((asid ^ atomic64_read(&info->generation)) >> info->bits) && + atomic64_cmpxchg_relaxed(&active_asid(info, cpu), + old_active_asid, asid)) + return; + + asid_new_context(info, pasid, cpu); +} + +int asid_allocator_init(struct asid_info *info, + u32 bits, unsigned int asid_per_ctxt, + void (*flush_cpu_ctxt_cb)(void)); + +#endif diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile index 33c2a4abda04..37169d541ab5 100644 --- a/arch/arm64/lib/Makefile +++ b/arch/arm64/lib/Makefile @@ -5,6 +5,8 @@ lib-y := clear_user.o delay.o copy_from_user.o \ memcmp.o strcmp.o strncmp.o strlen.o strnlen.o \ strchr.o strrchr.o tishift.o +lib-y += asid.o + ifeq ($(CONFIG_KERNEL_MODE_NEON), y) obj-$(CONFIG_XOR_BLOCKS) += xor-neon.o CFLAGS_REMOVE_xor-neon.o += -mgeneral-regs-only diff --git a/arch/arm64/lib/asid.c b/arch/arm64/lib/asid.c new file mode 100644 index 000000000000..0b3a99c4aed4 --- /dev/null +++ b/arch/arm64/lib/asid.c @@ -0,0 +1,185 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Generic ASID allocator. + * + * Based on arch/arm/mm/context.c + * + * Copyright (C) 2002-2003 Deep Blue Solutions Ltd, all rights reserved. + * Copyright (C) 2012 ARM Ltd. + */ + +#include + +#include + +#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu) + +#define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0)) +#define ASID_FIRST_VERSION(info) NUM_ASIDS(info) + +#define asid2idx(info, asid) (((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift) +#define idx2asid(info, idx) (((idx) << (info)->ctxt_shift) & ~ASID_MASK(info)) + +static void flush_context(struct asid_info *info) +{ + int i; + u64 asid; + + /* Update the list of reserved ASIDs and the ASID bitmap. */ + bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info)); + + for_each_possible_cpu(i) { + asid = atomic64_xchg_relaxed(&active_asid(info, i), 0); + /* + * If this CPU has already been through a + * rollover, but hasn't run another task in + * the meantime, we must preserve its reserved + * ASID, as this is the only trace we have of + * the process it is still running. + */ + if (asid == 0) + asid = reserved_asid(info, i); + __set_bit(asid2idx(info, asid), info->map); + reserved_asid(info, i) = asid; + } + + /* + * Queue a TLB invalidation for each CPU to perform on next + * context-switch + */ + cpumask_setall(&info->flush_pending); +} + +static bool check_update_reserved_asid(struct asid_info *info, u64 asid, + u64 newasid) +{ + int cpu; + bool hit = false; + + /* + * Iterate over the set of reserved ASIDs looking for a match. + * If we find one, then we can update our mm to use newasid + * (i.e. the same ASID in the current generation) but we can't + * exit the loop early, since we need to ensure that all copies + * of the old ASID are updated to reflect the mm. Failure to do + * so could result in us missing the reserved ASID in a future + * generation. + */ + for_each_possible_cpu(cpu) { + if (reserved_asid(info, cpu) == asid) { + hit = true; + reserved_asid(info, cpu) = newasid; + } + } + + return hit; +} + +static u64 new_context(struct asid_info *info, atomic64_t *pasid) +{ + static u32 cur_idx = 1; + u64 asid = atomic64_read(pasid); + u64 generation = atomic64_read(&info->generation); + + if (asid != 0) { + u64 newasid = generation | (asid & ~ASID_MASK(info)); + + /* + * If our current ASID was active during a rollover, we + * can continue to use it and this was just a false alarm. 
+ */ + if (check_update_reserved_asid(info, asid, newasid)) + return newasid; + + /* + * We had a valid ASID in a previous life, so try to re-use + * it if possible. + */ + if (!__test_and_set_bit(asid2idx(info, asid), info->map)) + return newasid; + } + + /* + * Allocate a free ASID. If we can't find one, take a note of the + * currently active ASIDs and mark the TLBs as requiring flushes. We + * always count from ASID #2 (index 1), as we use ASID #0 when setting + * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd + * pairs. + */ + asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx); + if (asid != NUM_CTXT_ASIDS(info)) + goto set_asid; + + /* We're out of ASIDs, so increment the global generation count */ + generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info), + &info->generation); + flush_context(info); + + /* We have more ASIDs than CPUs, so this will always succeed */ + asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1); + +set_asid: + __set_bit(asid, info->map); + cur_idx = asid; + return idx2asid(info, asid) | generation; +} + +/* + * Generate a new ASID for the context. + * + * @pasid: Pointer to the current ASID batch allocated. It will be updated + * with the new ASID batch. + * @cpu: current CPU ID. Must have been acquired through get_cpu() + */ +void asid_new_context(struct asid_info *info, atomic64_t *pasid, + unsigned int cpu) +{ + unsigned long flags; + u64 asid; + + raw_spin_lock_irqsave(&info->lock, flags); + /* Check that our ASID belongs to the current generation. */ + asid = atomic64_read(pasid); + if ((asid ^ atomic64_read(&info->generation)) >> info->bits) { + asid = new_context(info, pasid); + atomic64_set(pasid, asid); + } + + if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending)) + info->flush_cpu_ctxt_cb(); + + atomic64_set(&active_asid(info, cpu), asid); + raw_spin_unlock_irqrestore(&info->lock, flags); +} + +/* + * Initialize the ASID allocator + * + * @info: Pointer to the asid allocator structure + * @bits: Number of ASIDs available + * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are + * allocated contiguously for a given context. This value should be a power of + * 2. + */ +int asid_allocator_init(struct asid_info *info, + u32 bits, unsigned int asid_per_ctxt, + void (*flush_cpu_ctxt_cb)(void)) +{ + info->bits = bits; + info->ctxt_shift = ilog2(asid_per_ctxt); + info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb; + /* + * Expect allocation after rollover to fail if we don't have at least + * one more ASID than CPUs. ASID #0 is always reserved. + */ + WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus()); + atomic64_set(&info->generation, ASID_FIRST_VERSION(info)); + info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), + sizeof(*info->map), GFP_KERNEL); + if (!info->map) + return -ENOMEM; + + raw_spin_lock_init(&info->lock); + + return 0; +} diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index ac10893b403c..9b352a072fbb 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -12,46 +12,21 @@ #include #include +#include #include #include #include -static struct asid_info -{ - atomic64_t generation; - unsigned long *map; - atomic64_t __percpu *active; - u64 __percpu *reserved; - u32 bits; - raw_spinlock_t lock; - /* Which CPU requires context flush on next call */ - cpumask_t flush_pending; - /* Number of ASID allocated by context (shift value) */ - unsigned int ctxt_shift; - /* Callback to locally flush the context. 
*/ - void (*flush_cpu_ctxt_cb)(void); -} asid_info; - -#define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu) -#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu) - static DEFINE_PER_CPU(atomic64_t, active_asids); static DEFINE_PER_CPU(u64, reserved_asids); -#define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0)) -#define NUM_ASIDS(info) (1UL << ((info)->bits)) - -#define ASID_FIRST_VERSION(info) NUM_ASIDS(info) - #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 #define ASID_PER_CONTEXT 2 #else #define ASID_PER_CONTEXT 1 #endif -#define NUM_CTXT_ASIDS(info) (NUM_ASIDS(info) >> (info)->ctxt_shift) -#define asid2idx(info, asid) (((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift) -#define idx2asid(info, idx) (((idx) << (info)->ctxt_shift) & ~ASID_MASK(info)) +static struct asid_info asid_info; /* Get the ASIDBits supported by the current CPU */ static u32 get_cpu_asid_bits(void) @@ -91,178 +66,6 @@ void verify_cpu_asid_bits(void) } } -static void flush_context(struct asid_info *info) -{ - int i; - u64 asid; - - /* Update the list of reserved ASIDs and the ASID bitmap. */ - bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info)); - - for_each_possible_cpu(i) { - asid = atomic64_xchg_relaxed(&active_asid(info, i), 0); - /* - * If this CPU has already been through a - * rollover, but hasn't run another task in - * the meantime, we must preserve its reserved - * ASID, as this is the only trace we have of - * the process it is still running. - */ - if (asid == 0) - asid = reserved_asid(info, i); - __set_bit(asid2idx(info, asid), info->map); - reserved_asid(info, i) = asid; - } - - /* - * Queue a TLB invalidation for each CPU to perform on next - * context-switch - */ - cpumask_setall(&info->flush_pending); -} - -static bool check_update_reserved_asid(struct asid_info *info, u64 asid, - u64 newasid) -{ - int cpu; - bool hit = false; - - /* - * Iterate over the set of reserved ASIDs looking for a match. - * If we find one, then we can update our mm to use newasid - * (i.e. the same ASID in the current generation) but we can't - * exit the loop early, since we need to ensure that all copies - * of the old ASID are updated to reflect the mm. Failure to do - * so could result in us missing the reserved ASID in a future - * generation. - */ - for_each_possible_cpu(cpu) { - if (reserved_asid(info, cpu) == asid) { - hit = true; - reserved_asid(info, cpu) = newasid; - } - } - - return hit; -} - -static u64 new_context(struct asid_info *info, atomic64_t *pasid) -{ - static u32 cur_idx = 1; - u64 asid = atomic64_read(pasid); - u64 generation = atomic64_read(&info->generation); - - if (asid != 0) { - u64 newasid = generation | (asid & ~ASID_MASK(info)); - - /* - * If our current ASID was active during a rollover, we - * can continue to use it and this was just a false alarm. - */ - if (check_update_reserved_asid(info, asid, newasid)) - return newasid; - - /* - * We had a valid ASID in a previous life, so try to re-use - * it if possible. - */ - if (!__test_and_set_bit(asid2idx(info, asid), info->map)) - return newasid; - } - - /* - * Allocate a free ASID. If we can't find one, take a note of the - * currently active ASIDs and mark the TLBs as requiring flushes. We - * always count from ASID #2 (index 1), as we use ASID #0 when setting - * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd - * pairs. 
- */ - asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx); - if (asid != NUM_CTXT_ASIDS(info)) - goto set_asid; - - /* We're out of ASIDs, so increment the global generation count */ - generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info), - &info->generation); - flush_context(info); - - /* We have more ASIDs than CPUs, so this will always succeed */ - asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1); - -set_asid: - __set_bit(asid, info->map); - cur_idx = asid; - return idx2asid(info, asid) | generation; -} - -static void asid_new_context(struct asid_info *info, atomic64_t *pasid, - unsigned int cpu); - -/* - * Check the ASID is still valid for the context. If not generate a new ASID. - * - * @pasid: Pointer to the current ASID batch - * @cpu: current CPU ID. Must have been acquired throught get_cpu() - */ -static void asid_check_context(struct asid_info *info, - atomic64_t *pasid, unsigned int cpu) -{ - u64 asid, old_active_asid; - - asid = atomic64_read(pasid); - - /* - * The memory ordering here is subtle. - * If our active_asid is non-zero and the ASID matches the current - * generation, then we update the active_asid entry with a relaxed - * cmpxchg. Racing with a concurrent rollover means that either: - * - * - We get a zero back from the cmpxchg and end up waiting on the - * lock. Taking the lock synchronises with the rollover and so - * we are forced to see the updated generation. - * - * - We get a valid ASID back from the cmpxchg, which means the - * relaxed xchg in flush_context will treat us as reserved - * because atomic RmWs are totally ordered for a given location. - */ - old_active_asid = atomic64_read(&active_asid(info, cpu)); - if (old_active_asid && - !((asid ^ atomic64_read(&info->generation)) >> info->bits) && - atomic64_cmpxchg_relaxed(&active_asid(info, cpu), - old_active_asid, asid)) - return; - - asid_new_context(info, pasid, cpu); -} - -/* - * Generate a new ASID for the context. - * - * @pasid: Pointer to the current ASID batch allocated. It will be updated - * with the new ASID batch. - * @cpu: current CPU ID. Must have been acquired through get_cpu() - */ -static void asid_new_context(struct asid_info *info, atomic64_t *pasid, - unsigned int cpu) -{ - unsigned long flags; - u64 asid; - - raw_spin_lock_irqsave(&info->lock, flags); - /* Check that our ASID belongs to the current generation. */ - asid = atomic64_read(pasid); - if ((asid ^ atomic64_read(&info->generation)) >> info->bits) { - asid = new_context(info, pasid); - atomic64_set(pasid, asid); - } - - if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending)) - info->flush_cpu_ctxt_cb(); - - atomic64_set(&active_asid(info, cpu), asid); - raw_spin_unlock_irqrestore(&info->lock, flags); -} - void check_and_switch_context(struct mm_struct *mm, unsigned int cpu) { if (system_supports_cnp()) @@ -294,38 +97,6 @@ static void asid_flush_cpu_ctxt(void) local_flush_tlb_all(); } -/* - * Initialize the ASID allocator - * - * @info: Pointer to the asid allocator structure - * @bits: Number of ASIDs available - * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are - * allocated contiguously for a given context. This value should be a power of - * 2. 
- */ -static int asid_allocator_init(struct asid_info *info, - u32 bits, unsigned int asid_per_ctxt, - void (*flush_cpu_ctxt_cb)(void)) -{ - info->bits = bits; - info->ctxt_shift = ilog2(asid_per_ctxt); - info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb; - /* - * Expect allocation after rollover to fail if we don't have at least - * one more ASID than CPUs. ASID #0 is always reserved. - */ - WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus()); - atomic64_set(&info->generation, ASID_FIRST_VERSION(info)); - info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)), - sizeof(*info->map), GFP_KERNEL); - if (!info->map) - return -ENOMEM; - - raw_spin_lock_init(&info->lock); - - return 0; -} - static int asids_init(void) { u32 bits = get_cpu_asid_bits(); @@ -333,7 +104,7 @@ static int asids_init(void) if (asid_allocator_init(&asid_info, bits, ASID_PER_CONTEXT, asid_flush_cpu_ctxt)) panic("Unable to initialize ASID allocator for %lu ASIDs\n", - 1UL << bits); + NUM_ASIDS(&asid_info)); asid_info.active = &active_asids; asid_info.reserved = &reserved_asids;
From patchwork Wed Jul 24 16:25:31 2019 X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 169626 From: Julien Grall To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall Subject: [PATCH v3 12/15] arm64/lib: Add a helper to free memory allocated by the ASID allocator Date: Wed, 24 Jul 2019 17:25:31 +0100 Message-Id: <20190724162534.7390-13-julien.grall@arm.com> In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com> References: <20190724162534.7390-1-julien.grall@arm.com> Some users of the ASID allocator (e.g. VMID) may need to free resources if the initialization fails. So introduce a function that allows freeing any memory allocated by the ASID allocator.
Signed-off-by: Julien Grall --- Changes in v3: - Patch added --- arch/arm64/include/asm/lib_asid.h | 2 ++ arch/arm64/lib/asid.c | 5 +++++ 2 files changed, 7 insertions(+) -- 2.11.0 diff --git a/arch/arm64/include/asm/lib_asid.h b/arch/arm64/include/asm/lib_asid.h index c18e9eca500e..ff78865a6823 100644 --- a/arch/arm64/include/asm/lib_asid.h +++ b/arch/arm64/include/asm/lib_asid.h @@ -74,4 +74,6 @@ int asid_allocator_init(struct asid_info *info, u32 bits, unsigned int asid_per_ctxt, void (*flush_cpu_ctxt_cb)(void)); +void asid_allocator_free(struct asid_info *info); + #endif diff --git a/arch/arm64/lib/asid.c b/arch/arm64/lib/asid.c index 0b3a99c4aed4..d23f0df656c1 100644 --- a/arch/arm64/lib/asid.c +++ b/arch/arm64/lib/asid.c @@ -183,3 +183,8 @@ int asid_allocator_init(struct asid_info *info, return 0; } + +void asid_allocator_free(struct asid_info *info) +{ + kfree(info->map); +}
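An editorial sketch, with invented names, of the error path this helper enables; a caller that cannot simply panic (such as the VMID user in patch 15) can now unwind cleanly:

static int vmids_init(void)
{
	int ret;

	ret = asid_allocator_init(&vmid_info, 8, 1, vmid_flush_cpu_ctxt);
	if (ret)
		return ret;

	ret = further_setup();			/* placeholder for later steps */
	if (ret)
		asid_allocator_free(&vmid_info);	/* releases info->map */

	return ret;
}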
From patchwork Wed Jul 24 16:25:32 2019 X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 169627 From: Julien Grall To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall , Russell King Subject: [PATCH v3 13/15] arm/kvm: Introduce a new VMID allocator Date: Wed, 24 Jul 2019 17:25:32 +0100 Message-Id: <20190724162534.7390-14-julien.grall@arm.com> In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com> References: <20190724162534.7390-1-julien.grall@arm.com> A follow-up patch will replace the KVM VMID allocator with the arm64 ASID allocator. To avoid duplication as much as possible, the arm KVM code will directly compile arch/arm64/lib/asid.c. The header is a verbatim copy, to avoid breaking the assumption that an architecture port has self-contained headers. Signed-off-by: Julien Grall Cc: Russell King --- I hit a warning when compiling the ASID code: linux/arch/arm/kvm/../../arm64/lib/asid.c:17: warning: "ASID_MASK" redefined #define ASID_MASK(info) (~GENMASK((info)->bits - 1, 0)) In file included from linux/include/linux/mm_types.h:18, from linux/include/linux/mmzone.h:21, from linux/include/linux/gfp.h:6, from linux/include/linux/slab.h:15, from linux/arch/arm/kvm/../../arm64/lib/asid.c:11: linux/arch/arm/include/asm/mmu.h:26: note: this is the location of the previous definition #define ASID_MASK ((~0ULL) << ASID_BITS) I haven't resolved it yet because I am not sure of the best way to go. AFAICT ASID_MASK is only used in mm/context.c, so I am wondering whether it would be acceptable to move the define. Changes in v3: - Resync arm32 with the arm64 header Changes in v2: - Re-use arm64/lib/asid.c rather than duplicating the code.
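Following the author's own suggestion above (and assuming ASID_MASK really is only used by arch/arm/mm/context.c), one possible resolution of the redefinition warning would be to move the arm32 define out of the shared header, sketched here as a diff:

--- a/arch/arm/include/asm/mmu.h
+++ b/arch/arm/include/asm/mmu.h
-#define ASID_MASK	((~0ULL) << ASID_BITS)

--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
+#define ASID_MASK	((~0ULL) << ASID_BITS)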
--- arch/arm/include/asm/lib_asid.h | 79 +++++++++++++++++++++++++++++++++++++++++ arch/arm/kvm/Makefile | 1 + 2 files changed, 80 insertions(+) create mode 100644 arch/arm/include/asm/lib_asid.h -- 2.11.0 diff --git a/arch/arm/include/asm/lib_asid.h b/arch/arm/include/asm/lib_asid.h new file mode 100644 index 000000000000..e3233d37f5db --- /dev/null +++ b/arch/arm/include/asm/lib_asid.h @@ -0,0 +1,79 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __ARM_LIB_ASID_H__ +#define __ARM_LIB_ASID_H__ + +#include +#include +#include +#include +#include + +struct asid_info +{ + atomic64_t generation; + unsigned long *map; + atomic64_t __percpu *active; + u64 __percpu *reserved; + u32 bits; + /* Lock protecting the structure */ + raw_spinlock_t lock; + /* Which CPU requires context flush on next call */ + cpumask_t flush_pending; + /* Number of ASID allocated by context (shift value) */ + unsigned int ctxt_shift; + /* Callback to locally flush the context. */ + void (*flush_cpu_ctxt_cb)(void); +}; + +#define NUM_ASIDS(info) (1UL << ((info)->bits)) +#define NUM_CTXT_ASIDS(info) (NUM_ASIDS(info) >> (info)->ctxt_shift) + +#define active_asid(info, cpu) *per_cpu_ptr((info)->active, cpu) + +void asid_new_context(struct asid_info *info, atomic64_t *pasid, + unsigned int cpu); + +/* + * Check the ASID is still valid for the context. If not generate a new ASID. + * + * @pasid: Pointer to the current ASID batch + * @cpu: current CPU ID. Must have been acquired throught get_cpu() + */ +static inline void asid_check_context(struct asid_info *info, + atomic64_t *pasid, unsigned int cpu) +{ + u64 asid, old_active_asid; + + asid = atomic64_read(pasid); + + /* + * The memory ordering here is subtle. + * If our active_asid is non-zero and the ASID matches the current + * generation, then we update the active_asid entry with a relaxed + * cmpxchg. Racing with a concurrent rollover means that either: + * + * - We get a zero back from the cmpxchg and end up waiting on the + * lock. Taking the lock synchronises with the rollover and so + * we are forced to see the updated generation. + * + * - We get a valid ASID back from the cmpxchg, which means the + * relaxed xchg in flush_context will treat us as reserved + * because atomic RmWs are totally ordered for a given location. 
+ */ + old_active_asid = atomic64_read(&active_asid(info, cpu)); + if (old_active_asid && + !((asid ^ atomic64_read(&info->generation)) >> info->bits) && + atomic64_cmpxchg_relaxed(&active_asid(info, cpu), + old_active_asid, asid)) + return; + + asid_new_context(info, pasid, cpu); +} + +int asid_allocator_init(struct asid_info *info, + u32 bits, unsigned int asid_per_ctxt, + void (*flush_cpu_ctxt_cb)(void)); + +void asid_allocator_free(struct asid_info *info); + +#endif /* __ARM_LIB_ASID_H__ */ diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile index 531e59f5be9c..6ab49bd84531 100644 --- a/arch/arm/kvm/Makefile +++ b/arch/arm/kvm/Makefile @@ -40,3 +40,4 @@ obj-y += $(KVM)/arm/vgic/vgic-its.o obj-y += $(KVM)/arm/vgic/vgic-debug.o obj-y += $(KVM)/irqchip.o obj-y += $(KVM)/arm/arch_timer.o +obj-y += ../../arm64/lib/asid.o
From patchwork Wed Jul 24 16:25:33 2019 X-Patchwork-Submitter: Julien Grall X-Patchwork-Id: 169628 From: Julien Grall To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com, suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com, Julien Grall Subject: [PATCH v3 14/15] arch/arm64: Introduce a capability to tell whether 16-bit VMID is available Date: Wed, 24 Jul 2019 17:25:33 +0100 Message-Id: <20190724162534.7390-15-julien.grall@arm.com> In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com> References: <20190724162534.7390-1-julien.grall@arm.com> At the moment, the function kvm_get_vmid_bits() looks up the sanitized value of ID_AA64MMFR1_EL1 and extracts the number of VMID bits supported. This is fine as the function is mainly used during VMID roll-over. A new use in a follow-up patch will require the function to be called at every context switch, so we want it to be more efficient. A new capability is introduced to tell whether 16-bit VMIDs are available.
Signed-off-by: Julien Grall --- Changes in v3: - Patch added --- arch/arm64/include/asm/cpucaps.h | 3 ++- arch/arm64/include/asm/kvm_mmu.h | 4 +--- arch/arm64/kernel/cpufeature.c | 9 +++++++++ 3 files changed, 12 insertions(+), 4 deletions(-) -- 2.11.0 diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h index f19fe4b9acc4..af8ab758b252 100644 --- a/arch/arm64/include/asm/cpucaps.h +++ b/arch/arm64/include/asm/cpucaps.h @@ -52,7 +52,8 @@ #define ARM64_HAS_IRQ_PRIO_MASKING 42 #define ARM64_HAS_DCPODP 43 #define ARM64_WORKAROUND_1463225 44 +#define ARM64_HAS_16BIT_VMID 45 -#define ARM64_NCAPS 45 +#define ARM64_NCAPS 46 #endif /* __ASM_CPUCAPS_H */ diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index befe37d4bc0e..2ce8055a84b8 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -413,9 +413,7 @@ static inline void __kvm_extend_hypmap(pgd_t *boot_hyp_pgd, static inline unsigned int kvm_get_vmid_bits(void) { - int reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); - - return (cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR1_VMIDBITS_SHIFT) == 2) ? 16 : 8; + return cpus_have_const_cap(ARM64_HAS_16BIT_VMID) ? 16 : 8; } /* diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index f29f36a65175..b401e56af35a 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1548,6 +1548,15 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .min_field_value = 1, }, #endif + { + .capability = ARM64_HAS_16BIT_VMID, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .sys_reg = SYS_ID_AA64MMFR1_EL1, + .field_pos = ID_AA64MMFR1_VMIDBITS_SHIFT, + .sign = FTR_UNSIGNED, + .min_field_value = ID_AA64MMFR1_VMIDBITS_16, + .matches = has_cpuid_feature, + }, {}, };
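To illustrate why the capability matters on a hot path, an editorial sketch (kvm_make_vttbr is invented here; patch 15's kvm_get_vttbr is the real equivalent): the VMID width check now folds into a static branch, so composing VTTBR_EL2 at every context switch stays cheap:

static u64 kvm_make_vttbr(u64 baddr, u64 vmid)	/* hypothetical helper */
{
	/*
	 * VTTBR_EL2.VMID lives in bits [63:48]; its width (8 or 16) is
	 * resolved through the ARM64_HAS_16BIT_VMID static branch.
	 */
	u64 vmid_mask = GENMASK(kvm_get_vmid_bits() - 1, 0);

	return baddr | ((vmid & vmid_mask) << 48);
}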
From: Julien Grall
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
 suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
 Julien Grall
Subject: [PATCH v3 15/15] kvm/arm: Align the VMID allocation with the arm64
 ASID one
Date: Wed, 24 Jul 2019 17:25:34 +0100
Message-Id: <20190724162534.7390-16-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com>
References: <20190724162534.7390-1-julien.grall@arm.com>

At the moment, the VMID algorithm sends an SGI to all the CPUs to force
an exit and then broadcasts a full TLB flush and I-Cache invalidation.

This patch re-uses the new ASID allocator. The benefits are:
    - CPUs are not forced to exit at roll-over. Instead the VMID will be
      marked reserved and the context will be flushed at the next exit.
      This reduces IPI traffic.
    - Context invalidation is now per-CPU rather than broadcast.
    - Catalin has a formal model of the ASID allocator.

With the new algorithm, the code is adapted as follows (a rough sketch
of the resulting context-switch check follows below):
    - The function __kvm_flush_vm_context() has been renamed to
      __kvm_tlb_flush_local_all() and now only flushes the current CPU's
      context.
    - The call to update_vttbr() is now done with preemption disabled,
      as the new algorithm requires storing information per CPU.
    - The TLBs associated with EL1 are flushed when booting a CPU to
      deal with stale information. This was previously done on the
      allocation of the first VMID of a new generation.
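To make the roll-over behaviour concrete, here is a rough sketch of the
check performed at context switch, modelled on the arm64 ASID allocator
that this series generalises; vmid_gen_match() and new_vmid_context()
are illustrative names, not the exact API of the series:

    /*
     * Rough sketch of the allocator check on the context-switch path,
     * modelled on the arm64 ASID allocator. Helper names are
     * illustrative, not the series' exact API.
     */
    static void vmid_check_context(struct asid_info *info, atomic64_t *id,
                                   int cpu)
    {
            u64 vmid = atomic64_read(id);
            u64 old_active = atomic64_read(&per_cpu(active_vmids, cpu));

            /*
             * Fast path: the VMID is from the current generation and
             * this CPU can re-claim it with a single cmpxchg. No lock,
             * no SGI, no broadcast invalidation.
             */
            if (old_active && vmid_gen_match(info, vmid) &&
                atomic64_cmpxchg_relaxed(&per_cpu(active_vmids, cpu),
                                         old_active, vmid) == old_active)
                    return;

            /*
             * Slow path (first use, or a roll-over happened): take the
             * allocator lock, pick a new VMID and flush only this CPU's
             * context via vmid_flush_cpu_ctxt().
             */
            new_vmid_context(info, id, cpu);
    }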
The measurement was made on a Seattle-based SoC (8 CPUs), with the
number of VMIDs limited to 4 bits. The test involves running 40 guests
with 2 vCPUs concurrently. Each guest then executes hackbench 5 times
before exiting.

The performance differences between the current algorithm and the new
one are:
    - 2.5% fewer exits from the guest
    - 22.4% more flushes, although they are now local rather than
      broadcast
    - 0.11% faster (just for the record)

Signed-off-by: Julien Grall

---
    Looking at __kvm_flush_vm_context(), it might be possible to reduce
    the overhead further by removing the I-Cache flush for caches other
    than VIPT. This has been left aside for now.

    Changes in v3:
        - Free resources if initialization failed
        - s/__kvm_flush_cpu_vmid_context/__kvm_tlb_flush_local_all/
        - s/asid/id/ in kvm_vmid to avoid confusion
        - Generate the VMID in kvm_get_vttbr() rather than using a
          callback in the ASID allocator
        - Use smp_processor_id() rather than {get, put}_cpu() as the
          code should already be called from non-preemptible context
        - Mention the formal model in the commit message
---
 arch/arm/include/asm/kvm_asm.h    |   2 +-
 arch/arm/include/asm/kvm_host.h   |   5 +-
 arch/arm/include/asm/kvm_hyp.h    |   1 +
 arch/arm/include/asm/kvm_mmu.h    |   3 +-
 arch/arm/kvm/hyp/tlb.c            |   8 +--
 arch/arm64/include/asm/kvm_asid.h |   8 +++
 arch/arm64/include/asm/kvm_asm.h  |   2 +-
 arch/arm64/include/asm/kvm_host.h |   5 +-
 arch/arm64/include/asm/kvm_mmu.h  |   3 +-
 arch/arm64/kvm/hyp/tlb.c          |  10 +--
 virt/kvm/arm/arm.c                | 125 ++++++++++++++------------------
 11 files changed, 70 insertions(+), 102 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_asid.h

--
2.11.0

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index f615830f9f57..b6342258b466 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -53,10 +53,10 @@ struct kvm_vcpu;
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];

-extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
+extern void __kvm_tlb_flush_local_all(void);

 extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 8a37c8e89777..9b534f73725f 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -49,9 +49,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
 void kvm_reset_coprocs(struct kvm_vcpu *vcpu);

 struct kvm_vmid {
-	/* The VMID generation used for the virt. memory system */
-	u64    vmid_gen;
-	u32    vmid;
+	atomic64_t id;
 };

 struct kvm_arch {
@@ -257,7 +255,6 @@ unsigned long __kvm_call_hyp(void *hypfn, ...);
 		ret;							\
 	})

-void force_vm_exit(const cpumask_t *mask);
 int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
 			      struct kvm_vcpu_events *events);
diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
index 40e9034db601..46484a516e76 100644
--- a/arch/arm/include/asm/kvm_hyp.h
+++ b/arch/arm/include/asm/kvm_hyp.h
@@ -64,6 +64,7 @@
 #define TLBIALLIS	__ACCESS_CP15(c8, 0, c3, 0)
 #define TLBIALL		__ACCESS_CP15(c8, 0, c7, 0)
 #define TLBIALLNSNHIS	__ACCESS_CP15(c8, 4, c3, 4)
+#define TLBIALLNSNH	__ACCESS_CP15(c8, 4, c7, 4)
 #define PRRR		__ACCESS_CP15(c10, 0, c2, 0)
 #define NMRR		__ACCESS_CP15(c10, 0, c2, 1)
 #define AMAIR0		__ACCESS_CP15(c10, 0, c3, 0)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 0d84d50bf9ba..d7208e7b01bd 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -426,7 +426,8 @@ static __always_inline u64 kvm_get_vttbr(struct kvm *kvm)
 	u64 vmid_field, baddr;

 	baddr = kvm->arch.pgd_phys;
-	vmid_field = (u64)vmid->vmid << VTTBR_VMID_SHIFT;
+	vmid_field = atomic64_read(&vmid->id) << VTTBR_VMID_SHIFT;
+	vmid_field &= VTTBR_VMID_MASK(kvm_get_vmid_bits());
 	return kvm_phys_to_vttbr(baddr) | vmid_field;
 }
diff --git a/arch/arm/kvm/hyp/tlb.c b/arch/arm/kvm/hyp/tlb.c
index 848f27bbad9d..af0108350a35 100644
--- a/arch/arm/kvm/hyp/tlb.c
+++ b/arch/arm/kvm/hyp/tlb.c
@@ -60,9 +60,9 @@ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
 	write_sysreg(0, VTTBR);
 }

-void __hyp_text __kvm_flush_vm_context(void)
+void __hyp_text __kvm_tlb_flush_local_all(void)
 {
-	write_sysreg(0, TLBIALLNSNHIS);
-	write_sysreg(0, ICIALLUIS);
-	dsb(ish);
+	write_sysreg(0, TLBIALLNSNH);
+	write_sysreg(0, ICIALLU);
+	dsb(nsh);
 }
diff --git a/arch/arm64/include/asm/kvm_asid.h b/arch/arm64/include/asm/kvm_asid.h
new file mode 100644
index 000000000000..8b586e43c094
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_asid.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ARM64_KVM_ASID_H__
+#define __ARM64_KVM_ASID_H__
+
+#include <asm/asid.h>
+
+#endif /* __ARM64_KVM_ASID_H__ */
+
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 44a243754c1b..0e19e8f1283a 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -57,10 +57,10 @@ extern char __kvm_hyp_init_end[];

 extern char __kvm_hyp_vector[];

-extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
+extern void __kvm_tlb_flush_local_all(void);

 extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f656169db8c3..501121a82bbc 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -57,9 +57,7 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext);
 void __extended_idmap_trampoline(phys_addr_t boot_pgd, phys_addr_t idmap_start);

 struct kvm_vmid {
-	/* The VMID generation used for the virt. memory system */
-	u64    vmid_gen;
-	u32    vmid;
+	atomic64_t id;
 };

 struct kvm_arch {
@@ -467,7 +465,6 @@ u64 __kvm_call_hyp(void *hypfn, ...);
 		ret;							\
 	})

-void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);

 int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 2ce8055a84b8..0c5b36af4abe 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -597,7 +597,8 @@ static __always_inline u64 kvm_get_vttbr(struct kvm *kvm)
 	u64 cnp = system_supports_cnp() ? VTTBR_CNP_BIT : 0;

 	baddr = kvm->arch.pgd_phys;
-	vmid_field = (u64)vmid->vmid << VTTBR_VMID_SHIFT;
+	vmid_field = atomic64_read(&vmid->id) << VTTBR_VMID_SHIFT;
+	vmid_field &= VTTBR_VMID_MASK(kvm_get_vmid_bits());
 	return kvm_phys_to_vttbr(baddr) | vmid_field | cnp;
 }
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index d49a14497715..46839c70461a 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -189,10 +189,10 @@ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
 	__tlb_switch_to_host()(kvm, &cxt);
 }

-void __hyp_text __kvm_flush_vm_context(void)
+void __hyp_text __kvm_tlb_flush_local_all(void)
 {
-	dsb(ishst);
-	__tlbi(alle1is);
-	asm volatile("ic ialluis" : : );
-	dsb(ish);
+	dsb(nshst);
+	__tlbi(alle1);
+	asm volatile("ic iallu" : : );
+	dsb(nsh);
 }
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index f645c0fbf7ec..c01b6036c909 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -32,6 +32,7 @@
 #include
 #include
 #include
+#include <asm/kvm_asid.h>
 #include
 #include
 #include
@@ -50,10 +51,10 @@
 static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);

 /* Per-CPU variable containing the currently running vcpu. */
 static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu);

-/* The VMID used in the VTTBR */
-static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
-static u32 kvm_next_vmid;
-static DEFINE_SPINLOCK(kvm_vmid_lock);
+static DEFINE_PER_CPU(atomic64_t, active_vmids);
+static DEFINE_PER_CPU(u64, reserved_vmids);
+
+struct asid_info vmid_info;

 static bool vgic_present;
@@ -128,9 +129,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)

 	kvm_vgic_early_init(kvm);

-	/* Mark the initial VMID generation invalid */
-	kvm->arch.vmid.vmid_gen = 0;
-
 	/* The maximum number of VCPUs is limited by the host's GIC model */
 	kvm->arch.max_vcpus = vgic_present ?
 				kvm_vgic_get_max_vcpus() : KVM_MAX_VCPUS;
@@ -449,35 +447,9 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
 	return vcpu_mode_priv(vcpu);
 }

-/* Just ensure a guest exit from a particular CPU */
-static void exit_vm_noop(void *info)
-{
-}
-
-void force_vm_exit(const cpumask_t *mask)
-{
-	preempt_disable();
-	smp_call_function_many(mask, exit_vm_noop, NULL, true);
-	preempt_enable();
-}
-
-/**
- * need_new_vmid_gen - check that the VMID is still valid
- * @vmid: The VMID to check
- *
- * return true if there is a new generation of VMIDs being used
- *
- * The hardware supports a limited set of values with the value zero reserved
- * for the host, so we check if an assigned value belongs to a previous
- * generation, which requires us to assign a new value. If we're the
- * first to use a VMID for the new generation, we must flush necessary caches
- * and TLBs on all CPUs.
- */
-static bool need_new_vmid_gen(struct kvm_vmid *vmid)
+static void vmid_flush_cpu_ctxt(void)
 {
-	u64 current_vmid_gen = atomic64_read(&kvm_vmid_gen);
-	smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */
-	return unlikely(READ_ONCE(vmid->vmid_gen) != current_vmid_gen);
+	kvm_call_hyp(__kvm_tlb_flush_local_all);
 }

 /**
@@ -487,48 +459,9 @@ static bool need_new_vmid_gen(struct kvm_vmid *vmid)
  */
 static void update_vmid(struct kvm_vmid *vmid)
 {
-	if (!need_new_vmid_gen(vmid))
-		return;
-
-	spin_lock(&kvm_vmid_lock);
-
-	/*
-	 * We need to re-check the vmid_gen here to ensure that if another vcpu
-	 * already allocated a valid vmid for this vm, then this vcpu should
-	 * use the same vmid.
-	 */
-	if (!need_new_vmid_gen(vmid)) {
-		spin_unlock(&kvm_vmid_lock);
-		return;
-	}
-
-	/* First user of a new VMID generation? */
-	if (unlikely(kvm_next_vmid == 0)) {
-		atomic64_inc(&kvm_vmid_gen);
-		kvm_next_vmid = 1;
-
-		/*
-		 * On SMP we know no other CPUs can use this CPU's or each
-		 * other's VMID after force_vm_exit returns since the
-		 * kvm_vmid_lock blocks them from reentry to the guest.
-		 */
-		force_vm_exit(cpu_all_mask);
-		/*
-		 * Now broadcast TLB + ICACHE invalidation over the inner
-		 * shareable domain to make sure all data structures are
-		 * clean.
-		 */
-		kvm_call_hyp(__kvm_flush_vm_context);
-	}
+	int cpu = smp_processor_id();

-	vmid->vmid = kvm_next_vmid;
-	kvm_next_vmid++;
-	kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;
-
-	smp_wmb();
-	WRITE_ONCE(vmid->vmid_gen, atomic64_read(&kvm_vmid_gen));
-
-	spin_unlock(&kvm_vmid_lock);
+	asid_check_context(&vmid_info, &vmid->id, cpu);
 }

 static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
@@ -682,8 +615,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 */
 		cond_resched();

-		update_vmid(&vcpu->kvm->arch.vmid);
-
 		check_vcpu_requests(vcpu);

 		/*
@@ -693,6 +624,15 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 */
 		preempt_disable();

+		/*
+		 * The ASID/VMID allocator only tracks active VMIDs per
+		 * physical CPU, and therefore the VMID allocated may not be
+		 * preserved on VMID roll-over if the task was preempted,
+		 * making a thread's VMID inactive. So we need to call
+		 * update_vttbr in non-preemptible context.
+		 */
+		update_vmid(&vcpu->kvm->arch.vmid);
+
 		kvm_pmu_flush_hwstate(vcpu);

 		local_irq_disable();
@@ -731,8 +671,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 */
 		smp_store_mb(vcpu->mode, IN_GUEST_MODE);

-		if (ret <= 0 || need_new_vmid_gen(&vcpu->kvm->arch.vmid) ||
-		    kvm_request_pending(vcpu)) {
+		if (ret <= 0 || kvm_request_pending(vcpu)) {
 			vcpu->mode = OUTSIDE_GUEST_MODE;
 			isb(); /* Ensure work in x_flush_hwstate is committed */
 			kvm_pmu_sync_hwstate(vcpu);
@@ -1328,6 +1267,8 @@ static void cpu_hyp_reset(void)
 {
 	if (!is_kernel_in_hyp_mode())
 		__hyp_reset_vectors();
+
+	kvm_call_hyp(__kvm_tlb_flush_local_all);
 }

 static void cpu_hyp_reinit(void)
@@ -1431,11 +1372,32 @@ static inline void hyp_cpu_pm_exit(void)

 static int init_common_resources(void)
 {
+	int err;
+
+	/*
+	 * Initialize the ASID allocator, telling it to allocate a single
+	 * VMID per VM.
+	 */
+	err = asid_allocator_init(&vmid_info, kvm_get_vmid_bits(), 1,
+				  vmid_flush_cpu_ctxt);
+	if (err) {
+		kvm_err("Failed to initialize VMID allocator.\n");
+		return err;
+	}
+
+	vmid_info.active = &active_vmids;
+	vmid_info.reserved = &reserved_vmids;
+
 	kvm_set_ipa_limit();

 	return 0;
 }

+static void free_common_resources(void)
+{
+	asid_allocator_free(&vmid_info);
+}
+
 static int init_subsystems(void)
 {
 	int err = 0;
@@ -1684,7 +1646,7 @@ int kvm_arch_init(void *opaque)

 	err = kvm_arm_init_sve();
 	if (err)
-		return err;
+		goto out_err;

 	if (!in_hyp_mode) {
 		err = init_hyp_mode();
@@ -1707,6 +1669,7 @@ int kvm_arch_init(void *opaque)
 	if (!in_hyp_mode)
 		teardown_hyp_mode();
 out_err:
+	free_common_resources();
 	return err;
 }
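For clarity, the net effect of the two kvm_arch_vcpu_ioctl_run() hunks
above is the ordering sketched below; this is a condensed illustration
of the loop, not the full function body:

    /*
     * Condensed sketch of the run-loop ordering after this patch; not
     * the full kvm_arch_vcpu_ioctl_run() body.
     */
    while (ret > 0) {
            cond_resched();         /* the vCPU thread may migrate here */

            check_vcpu_requests(vcpu);

            preempt_disable();

            /*
             * Must run with preemption off: the allocator marks the
             * VMID active on this physical CPU, and a migration after
             * this point would make that bookkeeping stale.
             */
            update_vmid(&vcpu->kvm->arch.vmid);

            kvm_pmu_flush_hwstate(vcpu);
            local_irq_disable();

            /* ... enter the guest ... */

            local_irq_enable();
            preempt_enable();
    }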