[tip/core/rcu,48/55] powerpc: strengthen value-returning-atomics memory barriers

Message ID 1315332049-2604-48-git-send-email-paulmck@linux.vnet.ibm.com
State New
Headers show

Commit Message

Paul E. McKenney Sept. 6, 2011, 6 p.m. UTC
The trailing isync/lwsync in PowerPC value-returning atomics needs
to be a sync in order to provide the required ordering properties.
The leading lwsync/eieio can remain, as the remainder of the required
ordering guarantees are provided by the atomic instructions: Any
reordering will cause the stwcx to fail, which will result in a retry.

This commit provides the needed adjustment.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: anton@samba.org
Cc: benh@kernel.crashing.org
Cc: paulus@samba.org
---
 arch/powerpc/include/asm/synch.h |    6 +-----
 1 files changed, 1 insertions(+), 5 deletions(-)
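For context, the barriers being discussed wrap the larx/stcx. retry loop of a value-returning atomic. The following is an illustrative sketch (not taken from the kernel source) of the instruction sequence emitted for something like atomic_add_return(), with register names chosen arbitrarily:

```
	lwsync              # PPC_RELEASE_BARRIER: orders accesses before the atomic
1:	lwarx   r9,0,r3     # load-reserve the atomic variable
	add     r9,r9,r4    # compute the new value
	stwcx.  r9,0,r3     # store-conditional; fails if the reservation was lost
	bne-    1b          # retry on failure
	sync                # PPC_ACQUIRE_BARRIER after this patch (previously isync)
```

The patch changes only the final instruction: the leading lwsync stays, because any access reordered past it into the loop would invalidate the reservation and force a retry.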

Comments

Olof Johansson Sept. 9, 2011, 5:23 p.m. UTC | #1
[+linuxppc-dev]

On Tue, Sep 6, 2011 at 11:00 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> The trailing isync/lwsync in PowerPC value-returning atomics needs
> to be a sync in order to provide the required ordering properties.
> The leading lwsync/eieio can remain, as the remainder of the required
> ordering guarantees are provided by the atomic instructions: Any
> reordering will cause the stwcx to fail, which will result in a retry.

Admittedly, my powerpc barrier memory is starting to fade, but isn't
isync sufficient here? It will make sure all instructions before it
have retired, and will restart any speculative/issued instructions
beyond it.

lwsync not being sufficient makes sense since a load can overtake it.

> diff --git a/arch/powerpc/include/asm/synch.h b/arch/powerpc/include/asm/synch.h
> index d7cab44..4d97fbe 100644
> --- a/arch/powerpc/include/asm/synch.h
> +++ b/arch/powerpc/include/asm/synch.h
> @@ -37,11 +37,7 @@ static inline void isync(void)
>  #endif
>
>  #ifdef CONFIG_SMP
> -#define __PPC_ACQUIRE_BARRIER                          \
> -       START_LWSYNC_SECTION(97);                       \
> -       isync;                                          \
> -       MAKE_LWSYNC_SECTION_ENTRY(97, __lwsync_fixup);
> -#define PPC_ACQUIRE_BARRIER    "\n" stringify_in_c(__PPC_ACQUIRE_BARRIER)
> +#define PPC_ACQUIRE_BARRIER    "\n" stringify_in_c(sync;)

This can just be done as "\n\tsync\n" instead of the stringify stuff.


-Olof
Paul E. McKenney Sept. 9, 2011, 5:34 p.m. UTC | #2
On Fri, Sep 09, 2011 at 10:23:33AM -0700, Olof Johansson wrote:
> [+linuxppc-dev]
> 
> On Tue, Sep 6, 2011 at 11:00 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > The trailing isync/lwsync in PowerPC value-returning atomics needs
> > to be a sync in order to provide the required ordering properties.
> > The leading lwsync/eieio can remain, as the remainder of the required
> > ordering guarantees are provided by the atomic instructions: Any
> > reordering will cause the stwcx to fail, which will result in a retry.
> 
> Admittedly, my powerpc barrier memory is starting to fade, but isn't
> isync sufficient here? It will make sure all instructions before it
> have retired, and will restart any speculative/issued instructions
> beyond it.
> 
> lwsync not being sufficient makes sense since a load can overtake it.

As I understand it, although isync waits for the prior stwcx to execute,
it does not guarantee that the corresponding store is visible to all
processors before any following loads.

> > diff --git a/arch/powerpc/include/asm/synch.h b/arch/powerpc/include/asm/synch.h
> > index d7cab44..4d97fbe 100644
> > --- a/arch/powerpc/include/asm/synch.h
> > +++ b/arch/powerpc/include/asm/synch.h
> > @@ -37,11 +37,7 @@ static inline void isync(void)
> >  #endif
> >
> >  #ifdef CONFIG_SMP
> > -#define __PPC_ACQUIRE_BARRIER                          \
> > -       START_LWSYNC_SECTION(97);                       \
> > -       isync;                                          \
> > -       MAKE_LWSYNC_SECTION_ENTRY(97, __lwsync_fixup);
> > -#define PPC_ACQUIRE_BARRIER    "\n" stringify_in_c(__PPC_ACQUIRE_BARRIER)
> > +#define PPC_ACQUIRE_BARRIER    "\n" stringify_in_c(sync;)
> 
> This can just be done as "\n\tsync\n" instead of the stringify stuff.

That does sound a bit more straightforward, now that you mention it.  ;-)

							Thanx, Paul
Olof Johansson Sept. 9, 2011, 6:43 p.m. UTC | #3
On Fri, Sep 9, 2011 at 10:34 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> On Fri, Sep 09, 2011 at 10:23:33AM -0700, Olof Johansson wrote:
>> [+linuxppc-dev]
>>
>> On Tue, Sep 6, 2011 at 11:00 AM, Paul E. McKenney
>> <paulmck@linux.vnet.ibm.com> wrote:
>> > The trailing isync/lwsync in PowerPC value-returning atomics needs
>> > to be a sync in order to provide the required ordering properties.
>> > The leading lwsync/eieio can remain, as the remainder of the required
>> > ordering guarantees are provided by the atomic instructions: Any
>> > reordering will cause the stwcx to fail, which will result in a retry.
>>
>> Admittedly, my powerpc barrier memory is starting to fade, but isn't
>> isync sufficient here? It will make sure all instructions before it
>> have retired, and will restart any speculative/issued instructions
>> beyond it.
>>
>> lwsync not being sufficient makes sense since a load can overtake it.
>
> As I understand it, although isync waits for the prior stwcx to execute,
> it does not guarantee that the corresponding store is visible to all
> processors before any following loads.

Ah yes, combined with brushing up on the semantics in
memory-barriers.txt, this sounds reasonable to me.

>> > diff --git a/arch/powerpc/include/asm/synch.h b/arch/powerpc/include/asm/synch.h
>> > index d7cab44..4d97fbe 100644
>> > --- a/arch/powerpc/include/asm/synch.h
>> > +++ b/arch/powerpc/include/asm/synch.h
>> > @@ -37,11 +37,7 @@ static inline void isync(void)
>> >  #endif
>> >
>> >  #ifdef CONFIG_SMP
>> > -#define __PPC_ACQUIRE_BARRIER                          \
>> > -       START_LWSYNC_SECTION(97);                       \
>> > -       isync;                                          \
>> > -       MAKE_LWSYNC_SECTION_ENTRY(97, __lwsync_fixup);
>> > -#define PPC_ACQUIRE_BARRIER    "\n" stringify_in_c(__PPC_ACQUIRE_BARRIER)
>> > +#define PPC_ACQUIRE_BARRIER    "\n" stringify_in_c(sync;)
>>
>> This can just be done as "\n\tsync\n" instead of the stringify stuff.
>
> That does sound a bit more straightforward, now that you mention it.  ;-)

With that change, I'm:

Acked-by: Olof Johansson <olof@lixom.net>

But at least Ben or Anton should sign off on it too.


-Olof
Patch

diff --git a/arch/powerpc/include/asm/synch.h b/arch/powerpc/include/asm/synch.h
index d7cab44..4d97fbe 100644
--- a/arch/powerpc/include/asm/synch.h
+++ b/arch/powerpc/include/asm/synch.h
@@ -37,11 +37,7 @@  static inline void isync(void)
 #endif
 
 #ifdef CONFIG_SMP
-#define __PPC_ACQUIRE_BARRIER				\
-	START_LWSYNC_SECTION(97);			\
-	isync;						\
-	MAKE_LWSYNC_SECTION_ENTRY(97, __lwsync_fixup);
-#define PPC_ACQUIRE_BARRIER	"\n" stringify_in_c(__PPC_ACQUIRE_BARRIER)
+#define PPC_ACQUIRE_BARRIER	"\n" stringify_in_c(sync;)
 #define PPC_RELEASE_BARRIER	stringify_in_c(LWSYNC) "\n"
 #else
 #define PPC_ACQUIRE_BARRIER