From patchwork Wed Jul 24 16:25:28 2019
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 169630
From: Julien Grall <julien.grall@arm.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu
Cc: james.morse@arm.com, marc.zyngier@arm.com, julien.thierry@arm.com,
    suzuki.poulose@arm.com, catalin.marinas@arm.com, will.deacon@arm.com,
    Julien Grall <julien.grall@arm.com>
Subject: [PATCH v3 09/15] arm64/mm: Split the function check_and_switch_context in 3 parts
Date: Wed, 24 Jul 2019 17:25:28 +0100
Message-Id: <20190724162534.7390-10-julien.grall@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190724162534.7390-1-julien.grall@arm.com>
References: <20190724162534.7390-1-julien.grall@arm.com>

The function check_and_switch_context is used to:
    1) Check whether the ASID is still valid
    2) Generate a new one if it is not valid
    3) Switch the context

While the latter is specific to the MM subsystem, the rest could be part
of the generic ASID allocator.

After this patch, the function is split in three parts, corresponding to
the following functions:
    1) asid_check_context: Check whether the ASID is still valid
    2) asid_new_context: Generate a new ASID for the context
    3) check_and_switch_context: Call 1) and 2) and switch the context

1) and 2) have not been merged into a single function because we want to
avoid adding a branch when the ASID is still valid. This will matter when
the code is moved to a separate file later on, as 1) will reside in the
header as a static inline function.

Signed-off-by: Julien Grall <julien.grall@arm.com>

---
    Will wants to avoid adding a branch when the ASID is still valid, so
    1) and 2) are kept in separate functions. The former will be moved to
    a new header and made static inline.
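    As an aside, here is a minimal user-space sketch of the same
    fast-path/slow-path structure, assuming C11 atomics and a pthread
    mutex in place of the kernel's atomic64_t and raw spinlock. Every
    name in it (example_asid_ctx, EXAMPLE_ASID_BITS, ...) is made up for
    illustration, and generation rollover, per-CPU active ASIDs and TLB
    flushing are deliberately omitted; the authoritative code is in the
    diff below.

/*
 * Simplified model of the split: a lockless fast path that only touches
 * atomics, and a locked slow path that hands out a new ASID. Unlike the
 * kernel, there is a single global "active" slot and no generation
 * rollover or TLB flush handling.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define EXAMPLE_ASID_BITS	8
#define EXAMPLE_ASID_MASK	((UINT64_C(1) << EXAMPLE_ASID_BITS) - 1)

struct example_asid_ctx {
	_Atomic uint64_t generation;	/* current generation, kept in the high bits */
	_Atomic uint64_t active;	/* ASID currently installed */
	_Atomic uint64_t next;		/* next free ASID number */
	pthread_mutex_t lock;
};

/* Slow path: take the lock and hand out an ASID for the current generation. */
static uint64_t example_asid_new_context(struct example_asid_ctx *ctx,
					 _Atomic uint64_t *pasid)
{
	uint64_t asid;

	pthread_mutex_lock(&ctx->lock);
	asid = atomic_load(pasid);
	/* Re-check under the lock: another thread may have refreshed it already. */
	if ((asid ^ atomic_load(&ctx->generation)) >> EXAMPLE_ASID_BITS) {
		asid = atomic_load(&ctx->generation) |
		       (atomic_fetch_add(&ctx->next, 1) & EXAMPLE_ASID_MASK);
		atomic_store(pasid, asid);
	}
	atomic_store(&ctx->active, asid);
	pthread_mutex_unlock(&ctx->lock);
	return asid;
}

/* Fast path: lockless check, falling back to the slow path on rollover. */
static uint64_t example_asid_check_context(struct example_asid_ctx *ctx,
					   _Atomic uint64_t *pasid)
{
	uint64_t asid = atomic_load(pasid);
	uint64_t old_active = atomic_load(&ctx->active);

	if (old_active &&
	    !((asid ^ atomic_load(&ctx->generation)) >> EXAMPLE_ASID_BITS) &&
	    atomic_compare_exchange_strong(&ctx->active, &old_active, asid))
		return asid;	/* still valid: done without taking the lock */

	return example_asid_new_context(ctx, pasid);
}

int main(void)
{
	struct example_asid_ctx ctx = {
		.generation = UINT64_C(1) << EXAMPLE_ASID_BITS,
		.lock = PTHREAD_MUTEX_INITIALIZER,
	};
	_Atomic uint64_t mm_asid = 0;	/* stands in for mm->context.id */

	/* First call takes the slow path (stale ASID), second hits the fast path. */
	printf("asid = %#llx\n",
	       (unsigned long long)example_asid_check_context(&ctx, &mm_asid));
	printf("asid = %#llx\n",
	       (unsigned long long)example_asid_check_context(&ctx, &mm_asid));
	return 0;
}

    Keeping the fast path in its own function means the common case is
    just the lockless check plus one tail call into the slow path, which
    is also why asid_check_context can later become a static inline in a
    header.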
---
 arch/arm64/mm/context.c | 51 +++++++++++++++++++++++++++++++++++++------------
 1 file changed, 39 insertions(+), 12 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 27e328fffdb1..5e8b381ab67f 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -193,16 +193,21 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid)
 	return idx2asid(info, asid) | generation;
 }
 
-void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
+static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
+			     unsigned int cpu);
+
+/*
+ * Check the ASID is still valid for the context. If not generate a new ASID.
+ *
+ * @pasid: Pointer to the current ASID batch
+ * @cpu: current CPU ID. Must have been acquired through get_cpu()
+ */
+static void asid_check_context(struct asid_info *info,
+			       atomic64_t *pasid, unsigned int cpu)
 {
-	unsigned long flags;
 	u64 asid, old_active_asid;
-	struct asid_info *info = &asid_info;
 
-	if (system_supports_cnp())
-		cpu_set_reserved_ttbr0();
-
-	asid = atomic64_read(&mm->context.id);
+	asid = atomic64_read(pasid);
 
 	/*
 	 * The memory ordering here is subtle.
@@ -223,14 +228,30 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	    !((asid ^ atomic64_read(&info->generation)) >> info->bits) &&
 	    atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
 				     old_active_asid, asid))
-		goto switch_mm_fastpath;
+		return;
+
+	asid_new_context(info, pasid, cpu);
+}
+
+/*
+ * Generate a new ASID for the context.
+ *
+ * @pasid: Pointer to the current ASID batch allocated. It will be updated
+ * with the new ASID batch.
+ * @cpu: current CPU ID. Must have been acquired through get_cpu()
+ */
+static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
+			     unsigned int cpu)
+{
+	unsigned long flags;
+	u64 asid;
 
 	raw_spin_lock_irqsave(&info->lock, flags);
 	/* Check that our ASID belongs to the current generation. */
-	asid = atomic64_read(&mm->context.id);
+	asid = atomic64_read(pasid);
 	if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
-		asid = new_context(info, &mm->context.id);
-		atomic64_set(&mm->context.id, asid);
+		asid = new_context(info, pasid);
+		atomic64_set(pasid, asid);
 	}
 
 	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
@@ -238,8 +259,14 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 
 	atomic64_set(&active_asid(info, cpu), asid);
 	raw_spin_unlock_irqrestore(&info->lock, flags);
+}
+
+void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
+{
+	if (system_supports_cnp())
+		cpu_set_reserved_ttbr0();
 
-switch_mm_fastpath:
+	asid_check_context(&asid_info, &mm->context.id, cpu);
 
 	arm64_apply_bp_hardening();