Message ID: 20220127192437.1192957-1-valentin.schneider@arm.com
State: New
Series: [v4] arm64: mm: Make arch_faults_on_old_pte() check for migratability
On 2022-01-27 19:24:37 [+0000], Valentin Schneider wrote:
> arch_faults_on_old_pte() relies on the calling context being
> non-preemptible. CONFIG_PREEMPT_RT turns the PTE lock into a sleepable
> spinlock, which doesn't disable preemption once acquired, triggering the
> warning in arch_faults_on_old_pte().
>
> It does however disable migration, ensuring the task remains on the same
> CPU during the entirety of the critical section, making the read of
> cpu_has_hw_af() safe and stable.
>
> Make arch_faults_on_old_pte() check cant_migrate() instead of preemptible().
>
> Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
> Link: https://lore.kernel.org/r/20210811201354.1976839-5-valentin.schneider@arm.com

Let me pick that up so I can drop the other two. I hope the ARM64 folks
follow my lead ;)

Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Sebastian
On Thu, Jan 27, 2022 at 07:24:37PM +0000, Valentin Schneider wrote:
> arch_faults_on_old_pte() relies on the calling context being
> non-preemptible. CONFIG_PREEMPT_RT turns the PTE lock into a sleepable
> spinlock, which doesn't disable preemption once acquired, triggering the
> warning in arch_faults_on_old_pte().
>
> It does however disable migration, ensuring the task remains on the same
> CPU during the entirety of the critical section, making the read of
> cpu_has_hw_af() safe and stable.
>
> Make arch_faults_on_old_pte() check cant_migrate() instead of preemptible().
>
> Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
> Link: https://lore.kernel.org/r/20210811201354.1976839-5-valentin.schneider@arm.com
> ---
> v3 -> v4: Dropped migratable(), reuse cant_migrate() (Sebastian)
> ---
>  arch/arm64/include/asm/pgtable.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index c4ba047a82d2..3caf6346ea95 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1001,7 +1001,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
>   */
>  static inline bool arch_faults_on_old_pte(void)
>  {
> -	WARN_ON(preemptible());
> +	/* The register read below requires a stable CPU to make any sense */
> +	cant_migrate();
>
>  	return !cpu_has_hw_af();
>  }

The patch looks alright but what's the plan with it? Does it go into the
preempt-rt tree, mainline? It's not a fix to be queued now, so:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
On 2022-02-04 18:55:17 [+0000], Catalin Marinas wrote:
>
> The patch looks alright but what's the plan with it? Does it go into the
> preempt-rt tree, mainline? It's not a fix to be queued now, so:

It would be nice if it could go mainline, too.

> Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Sebastian
On Thu, Jan 27, 2022 at 07:24:37PM +0000, Valentin Schneider wrote:
> arch_faults_on_old_pte() relies on the calling context being
> non-preemptible. CONFIG_PREEMPT_RT turns the PTE lock into a sleepable
> spinlock, which doesn't disable preemption once acquired, triggering the
> warning in arch_faults_on_old_pte().
>
> It does however disable migration, ensuring the task remains on the same
> CPU during the entirety of the critical section, making the read of
> cpu_has_hw_af() safe and stable.
>
> Make arch_faults_on_old_pte() check cant_migrate() instead of preemptible().
>
> Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
> Link: https://lore.kernel.org/r/20210811201354.1976839-5-valentin.schneider@arm.com
> ---
> v3 -> v4: Dropped migratable(), reuse cant_migrate() (Sebastian)
> ---
>  arch/arm64/include/asm/pgtable.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index c4ba047a82d2..3caf6346ea95 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1001,7 +1001,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
>   */
>  static inline bool arch_faults_on_old_pte(void)
>  {
> -	WARN_ON(preemptible());
> +	/* The register read below requires a stable CPU to make any sense */
> +	cant_migrate();

FWIW: There's a patch in the multi-generational LRU series that drops
this WARN_ON() entirely. I'm not sure if/when that lot will land, but
just wanted to let you know.

Will
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index c4ba047a82d2..3caf6346ea95 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1001,7 +1001,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
  */
 static inline bool arch_faults_on_old_pte(void)
 {
-	WARN_ON(preemptible());
+	/* The register read below requires a stable CPU to make any sense */
+	cant_migrate();

 	return !cpu_has_hw_af();
 }
arch_faults_on_old_pte() relies on the calling context being
non-preemptible. CONFIG_PREEMPT_RT turns the PTE lock into a sleepable
spinlock, which doesn't disable preemption once acquired, triggering the
warning in arch_faults_on_old_pte().

It does however disable migration, ensuring the task remains on the same
CPU during the entirety of the critical section, making the read of
cpu_has_hw_af() safe and stable.

Make arch_faults_on_old_pte() check cant_migrate() instead of preemptible().

Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lore.kernel.org/r/20210811201354.1976839-5-valentin.schneider@arm.com
---
v3 -> v4: Dropped migratable(), reuse cant_migrate() (Sebastian)
---
 arch/arm64/include/asm/pgtable.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
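As a footnote for readers unfamiliar with the PREEMPT_RT semantics the
commit message relies on, here is a minimal sketch, not part of the patch,
of the pattern in question. The lock name example_lock and the function
read_cpu_stable() are made up for illustration; the sketch assumes a
PREEMPT_RT kernel, where spin_lock() maps onto an rtmutex-based sleeping
lock that disables migration (but not preemption) while held, mirroring
the PTE-lock context arch_faults_on_old_pte() is called from:

#include <linux/kernel.h>
#include <linux/preempt.h>
#include <linux/smp.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);	/* hypothetical lock, for illustration */

static int read_cpu_stable(void)
{
	int cpu;

	/*
	 * On PREEMPT_RT this is a sleeping lock: preemption stays enabled
	 * while it is held, so WARN_ON(preemptible()) would fire here even
	 * though the caller did nothing wrong.
	 */
	spin_lock(&example_lock);

	/*
	 * Taking the lock did disable migration, however: the task may be
	 * preempted, but it cannot move to another CPU, so per-CPU state
	 * such as an ID register read stays stable. cant_migrate() asserts
	 * exactly that, and holds in both the RT and !RT cases.
	 */
	cant_migrate();

	cpu = smp_processor_id();	/* stable until the unlock below */

	spin_unlock(&example_lock);

	return cpu;
}

On a !RT kernel spin_lock() disables preemption outright, which implies
the task cannot migrate either, so cant_migrate() is the weaker assertion
that is correct in both configurations.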