[1/2] arm64: mm: keep reserved ASIDs in sync with mm after multiple rollovers

Message ID 1448545780-1550-1-git-send-email-will.deacon@arm.com
State Accepted
Commit 0ebea8088095f1c18c1d1de284ccc4c479ca21c1

Commit Message

Will Deacon Nov. 26, 2015, 1:49 p.m. UTC
Under some unusual context-switching patterns, it is possible to end up
with multiple threads from the same mm running concurrently with
different ASIDs:

1. CPU x schedules task t with mm p containing ASID a and generation g.
   This task doesn't block and the CPU doesn't context switch.
   So:
     * per_cpu(active_asid, x) = {g,a}
     * p->context.id = {g,a}

2. Some other CPU generates an ASID rollover. The global generation is
   now (g + 1). CPU x is still running t, with no context switch and
   so per_cpu(reserved_asid, x) = {g,a}

3. CPU y schedules task t', which shares mm p with t. The generation
   mismatches, so we take the slowpath and hit the reserved ASID from
   CPU x. p is then updated so that p->context.id = {g + 1,a}

4. CPU y schedules some other task u, which has an mm != p.

5. Some other CPU generates *another* ASID rollover. The global
   generation is now (g + 2). CPU x is still running t, with no context
   switch and so per_cpu(reserved_asid, x) = {g,a}.

6. CPU y once again schedules task t', but now *fails* to hit the
   reserved ASID from CPU x because of the generation mismatch. This
   results in a new ASID being allocated, despite the fact that t is
   still running on CPU x with the same mm.

Consequently, TLBIs (e.g. as a result of CoW) will not be synchronised
between the two threads.
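
To make this concrete: the allocator packs the generation and the ASID into
a single 64-bit context id, with the ASID in the low bits and the generation
above it. The following is a minimal standalone sketch of that scheme (the
name ASID_MASK echoes arch/arm64/mm/context.c, but the width and the helper
names are made up for illustration, not the kernel's code):

#include <stdint.h>
#include <stdbool.h>

/* Illustrative ASID width; the kernel's is hardware/configuration dependent. */
#define ASID_BITS	16
#define ASID_MASK	(~(((uint64_t)1 << ASID_BITS) - 1))	/* generation bits */

/* Pack a generation (high bits) and an ASID (low bits) into a context id. */
static uint64_t make_ctx_id(uint64_t generation, uint64_t asid)
{
	return generation | (asid & ~ASID_MASK);
}

/* A context id can be reused on the fast path only if its generation
 * is still the current one. */
static bool generation_is_current(uint64_t ctx_id, uint64_t generation)
{
	return ((ctx_id ^ generation) & ASID_MASK) == 0;
}

In step 6 above, p->context.id holds generation (g + 1) while the global
generation is (g + 2), so this check fails; and because CPU x's reserved
entry still holds the stale {g,a}, the full-value comparison on the slowpath
misses as well, which is the double-rollover hole described above.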

This patch fixes the problem by updating all of the matching reserved
ASIDs when we hit on the slowpath (i.e. in step 3 above). This keeps
the reserved ASIDs in sync with the mm and avoids the problem.
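
To see the effect of the rewrite, steps 2-6 can be replayed against a toy
model of the lookup (a sketch only: the per-CPU variables become a plain
array, and locking and the rollover bookkeeping in flush_context() are
elided):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NR_CPUS		2
#define ASID_BITS	16			/* illustrative width */
#define GEN(n)		((uint64_t)(n) << ASID_BITS)

static uint64_t reserved_asids[NR_CPUS];

/*
 * Toy version of the patched lookup: on a hit, every matching reserved
 * slot is rewritten to newasid so that later generations still find it.
 */
static bool check_update_reserved_asid(uint64_t asid, uint64_t newasid)
{
	bool hit = false;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (reserved_asids[cpu] == asid) {
			hit = true;
			reserved_asids[cpu] = newasid;
		}
	}
	return hit;
}

int main(void)
{
	uint64_t ctx = GEN(0) | 1;		/* p->context.id = {g,a} */

	/* Step 2: the first rollover reserves CPU x's live ASID. */
	reserved_asids[0] = ctx;

	/* Step 3: CPU y sees a stale generation and takes the slowpath. */
	uint64_t newctx = GEN(1) | 1;
	printf("step 3 hit: %d\n", check_update_reserved_asid(ctx, newctx));
	ctx = newctx;				/* p->context.id = {g + 1,a} */

	/*
	 * Step 6: at generation g + 2 the lookup still hits, because step 3
	 * rewrote CPU x's reserved slot to {g + 1,a}. Without that rewrite
	 * the slot would still hold {g,a} and this lookup would miss.
	 */
	printf("step 6 hit: %d\n", check_update_reserved_asid(ctx, GEN(2) | 1));
	return 0;
}

Both lookups report a hit; dropping the "reserved_asids[cpu] = newasid"
assignment reproduces the step 6 miss.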

Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

---
 arch/arm64/mm/context.c | 38 ++++++++++++++++++++++++++------------
 1 file changed, 26 insertions(+), 12 deletions(-)

-- 
2.1.4


Comments

Will Deacon Nov. 26, 2015, 1:55 p.m. UTC | #1
On Thu, Nov 26, 2015 at 01:49:40PM +0000, Will Deacon wrote:
> Under some unusual context-switching patterns, it is possible to end up
> with multiple threads from the same mm running concurrently with
> different ASIDs:
>
> 1. CPU x schedules task t with mm p containing ASID a and generation g.
>    This task doesn't block and the CPU doesn't context switch.
>    So:
>      * per_cpu(active_asid, x) = {g,a}
>      * p->context.id = {g,a}
>
> 2. Some other CPU generates an ASID rollover. The global generation is
>    now (g + 1). CPU x is still running t, with no context switch and
>    so per_cpu(reserved_asid, x) = {g,a}
>
> 3. CPU y schedules task t', which shares mm p with t. The generation
>    mismatches, so we take the slowpath and hit the reserved ASID from
>    CPU x. p is then updated so that p->context.id = {g + 1,a}
>
> 4. CPU y schedules some other task u, which has an mm != p.
>
> 5. Some other CPU generates *another* ASID rollover. The global
>    generation is now (g + 2). CPU x is still running t, with no context
>    switch and so per_cpu(reserved_asid, x) = {g,a}.
>
> 6. CPU y once again schedules task t', but now *fails* to hit the
>    reserved ASID from CPU x because of the generation mismatch. This
>    results in a new ASID being allocated, despite the fact that t is
>    still running on CPU x with the same mm.
>
> Consequently, TLBIs (e.g. as a result of CoW) will not be synchronised
> between the two threads.
>
> This patch fixes the problem by updating all of the matching reserved
> ASIDs when we hit on the slowpath (i.e. in step 3 above). This keeps
> the reserved ASIDs in sync with the mm and avoids the problem.
>
> Cc: <stable@vger.kernel.org>
> Reported-by: Tony Thompson <anthony.thompson@arm.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm/mm/context.c | 39 ++++++++++++++++++++++++++-------------
>  1 file changed, 26 insertions(+), 13 deletions(-)

[...]

> @@ -216,11 +233,7 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
>  
>  	__set_bit(asid, asid_map);
>  	cur_idx = asid;
> -
> -bump_gen:
> -	asid |= generation;
> -	cpumask_clear(mm_cpumask(mm));
> -	return asid;
> +	return asid | generation;

Hmm, I probably shouldn't be dropping the cpumask_clear line here. It
hasn't made a difference in practice, but it does defeat the optimisation
in switch_mm, so I'll add that back in v2.

Will
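
For context, the switch_mm() fast path referred to above looks roughly like
this (paraphrased from arch/arm/include/asm/mmu_context.h of that era; a
sketch, not the verbatim kernel code):

static inline void
switch_mm(struct mm_struct *prev, struct mm_struct *next,
	  struct task_struct *tsk)
{
	unsigned int cpu = smp_processor_id();

	/*
	 * check_and_switch_context() is skipped when this CPU's bit is
	 * already set in mm_cpumask(next) and prev == next. Clearing
	 * mm_cpumask() in new_context() on rollover invalidates those
	 * bits, so a CPU cannot skip the update after its mm has been
	 * handed a new ASID.
	 */
	if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next)
		check_and_switch_context(next, tsk);
}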

Patch

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index f636a2639f03..e87f53ff5f58 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -76,13 +76,28 @@  static void flush_context(unsigned int cpu)
 		__flush_icache_all();
 }
 
-static int is_reserved_asid(u64 asid)
+static bool check_update_reserved_asid(u64 asid, u64 newasid)
 {
 	int cpu;
-	for_each_possible_cpu(cpu)
-		if (per_cpu(reserved_asids, cpu) == asid)
-			return 1;
-	return 0;
+	bool hit = false;
+
+	/*
+	 * Iterate over the set of reserved ASIDs looking for a match.
+	 * If we find one, then we can update our mm to use newasid
+	 * (i.e. the same ASID in the current generation) but we can't
+	 * exit the loop early, since we need to ensure that all copies
+	 * of the old ASID are updated to reflect the mm. Failure to do
+	 * so could result in us missing the reserved ASID in a future
+	 * generation.
+	 */
+	for_each_possible_cpu(cpu) {
+		if (per_cpu(reserved_asids, cpu) == asid) {
+			hit = true;
+			per_cpu(reserved_asids, cpu) = newasid;
+		}
+	}
+
+	return hit;
 }
 
 static u64 new_context(struct mm_struct *mm, unsigned int cpu)
@@ -92,12 +107,14 @@  static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 	u64 generation = atomic64_read(&asid_generation);
 
 	if (asid != 0) {
+		u64 newasid = generation | (asid & ~ASID_MASK);
+
 		/*
 		 * If our current ASID was active during a rollover, we
 		 * can continue to use it and this was just a false alarm.
 		 */
-		if (is_reserved_asid(asid))
-			return generation | (asid & ~ASID_MASK);
+		if (check_update_reserved_asid(asid, newasid))
+			return newasid;
 
 		/*
 		 * We had a valid ASID in a previous life, so try to re-use
@@ -105,7 +122,7 @@  static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 		 */
 		asid &= ~ASID_MASK;
 		if (!__test_and_set_bit(asid, asid_map))
-			goto bump_gen;
+			return newasid;
 	}
 
 	/*
@@ -129,10 +146,7 @@  static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 set_asid:
 	__set_bit(asid, asid_map);
 	cur_idx = asid;
-
-bump_gen:
-	asid |= generation;
-	return asid;
+	return asid | generation;
 }
 
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)