diff mbox series

[1/3] sched: Introduce is_pcpu_safe()

Message ID 20210721115118.729943-2-valentin.schneider@arm.com
State Superseded
Headers show
Series [1/3] sched: Introduce is_pcpu_safe() | expand

Commit Message

Valentin Schneider July 21, 2021, 11:51 a.m. UTC
Some areas use preempt_disable() + preempt_enable() to safely access
per-CPU data. The PREEMPT_RT folks have shown this can also be done by
keeping preemption enabled and instead disabling migration (and acquiring a
sleepable lock, if relevant).

Introduce a helper which checks whether the current task can safely access
per-CPU data, IOW if the task's context guarantees the accesses will target
a single CPU. This accounts for preemption, CPU affinity, and migrate
disable - note that the CPU affinity check also mandates the presence of
PF_NO_SETAFFINITY, as otherwise userspace could concurrently render the
upcoming per-CPU access(es) unsafe.

Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
 include/linux/sched.h | 10 ++++++++++
 1 file changed, 10 insertions(+)
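As a sketch of the intended usage (the caller and per-CPU variable below are hypothetical, not part of this series): a site that today brackets its per-CPU accesses with preempt_disable()/preempt_enable() could instead check is_pcpu_safe() and only disable migration when the context doesn't already pin the task:

```c
/* Hypothetical caller; my_counter is illustrative, not from this series. */
static DEFINE_PER_CPU(u64, my_counter);

static void inc_my_counter(void)
{
	if (is_pcpu_safe()) {
		/* Context already guarantees we stay on this CPU. */
		__this_cpu_inc(my_counter);
	} else {
		/* Stay preemptible; only forbid migration (PREEMPT_RT friendly). */
		migrate_disable();
		__this_cpu_inc(my_counter);
		migrate_enable();
	}
}
```

Unlike a preempt_disable() section, the fallback path remains preemptible, so it can also take sleepable locks where the access requires one.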

Comments

Paul E. McKenney July 27, 2021, 4:23 p.m. UTC | #1
On Wed, Jul 21, 2021 at 12:51:16PM +0100, Valentin Schneider wrote:
> Some areas use preempt_disable() + preempt_enable() to safely access
> per-CPU data. The PREEMPT_RT folks have shown this can also be done by
> keeping preemption enabled and instead disabling migration (and acquiring a
> sleepable lock, if relevant).
> 
> Introduce a helper which checks whether the current task can safely access
> per-CPU data, IOW if the task's context guarantees the accesses will target
> a single CPU. This accounts for preemption, CPU affinity, and migrate
> disable - note that the CPU affinity check also mandates the presence of
> PF_NO_SETAFFINITY, as otherwise userspace could concurrently render the
> upcoming per-CPU access(es) unsafe.
> 
> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>

Acked-by: Paul E. McKenney <paulmck@kernel.org>

> ---
>  include/linux/sched.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index efdbdf654876..7ce2d5c1ad55 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1707,6 +1707,16 @@ static inline bool is_percpu_thread(void)
>  #endif
>  }
>  
> +/* Is the current task guaranteed not to be migrated elsewhere? */
> +static inline bool is_pcpu_safe(void)
> +{
> +#ifdef CONFIG_SMP
> +	return !preemptible() || is_percpu_thread() || current->migration_disabled;
> +#else
> +	return true;
> +#endif
> +}
> +
>  /* Per-process atomic flags. */
>  #define PFA_NO_NEW_PRIVS		0	/* May not gain new privileges. */
>  #define PFA_SPREAD_PAGE			1	/* Spread page cache over cpuset */
> -- 
> 2.25.1
> 

Patch

diff --git a/include/linux/sched.h b/include/linux/sched.h
index efdbdf654876..7ce2d5c1ad55 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1707,6 +1707,16 @@ static inline bool is_percpu_thread(void)
 #endif
 }
 
+/* Is the current task guaranteed not to be migrated elsewhere? */
+static inline bool is_pcpu_safe(void)
+{
+#ifdef CONFIG_SMP
+	return !preemptible() || is_percpu_thread() || current->migration_disabled;
+#else
+	return true;
+#endif
+}
+
 /* Per-process atomic flags. */
 #define PFA_NO_NEW_PRIVS		0	/* May not gain new privileges. */
 #define PFA_SPREAD_PAGE			1	/* Spread page cache over cpuset */