Message ID: 510cb07946be056d2da7dda721bbf444c288751b.1606923183.git.luto@kernel.org
State: Superseded
Series: [v2,1/4] x86/membarrier: Get rid of a dubious optimization
diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index 5a40b3828ff2..6251d3d12abe 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -168,6 +168,14 @@ static void ipi_mb(void *info)
 
 static void ipi_rseq(void *info)
 {
+	/*
+	 * Ensure that all stores done by the calling thread are visible
+	 * to the current task before the current task resumes.  We could
+	 * probably optimize this away on most architectures, but by the
+	 * time we've already sent an IPI, the cost of the extra smp_mb()
+	 * is negligible.
+	 */
+	smp_mb();
 	rseq_preempt(current);
 }
It seems to me that most RSEQ membarrier users will expect any stores done
before the membarrier() syscall to be visible to the target task(s). While
this is extremely likely to be true in practice, nothing actually guarantees
it by a strict reading of the x86 manuals. Rather than providing this
guarantee by accident and potentially causing a problem down the road, just
add an explicit barrier.

Cc: stable@vger.kernel.org
Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 kernel/sched/membarrier.c | 8 ++++++++
 1 file changed, 8 insertions(+)
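
For context (not part of the patch), below is a minimal sketch of the userspace
pattern this ordering guarantee is about: a thread publishes data and then issues
the expedited RSEQ membarrier, expecting its stores to be visible to any task
whose rseq critical section gets restarted. The shared_data/ready variables and
publish_and_sync() are invented for illustration; it assumes a v5.10+ kernel
exposing MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ, and that the process has already
registered with MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_RSEQ.

/* Hypothetical illustration only; names other than the membarrier(2)
 * command constants are made up for this example. */
#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdatomic.h>

static int shared_data;         /* written by the calling thread */
static atomic_int ready;        /* read by tasks inside rseq critical sections */

static int publish_and_sync(void)
{
	shared_data = 42;       /* plain store the caller wants others to see */
	atomic_store_explicit(&ready, 1, memory_order_relaxed);

	/*
	 * Expectation the patch makes explicit: once this call returns,
	 * any task that was in an rseq critical section has been
	 * restarted *and* can observe the stores above.  The smp_mb()
	 * added to ipi_rseq() provides that visibility by design rather
	 * than as a side effect of the IPI path.
	 */
	return syscall(SYS_membarrier,
		       MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ, 0, 0);
}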