[PATCHv3,09/18] atomics/alpha: define atomic64_fetch_add_unless()

Message ID: 20180618101919.51973-10-mark.rutland@arm.com
State: Superseded
Series: atomics: API cleanups

Commit Message

Mark Rutland June 18, 2018, 10:19 a.m. UTC
As a step towards unifying the atomic/atomic64/atomic_long APIs, this
patch converts the arch/alpha implementation of atomic64_add_unless() into
an implementation of atomic64_fetch_add_unless().

A wrapper in <linux/atomic.h> will build atomic_add_unless() atop of
this, provided it is given a preprocessor definition.
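
For context, a minimal sketch of the kind of wrapper <linux/atomic.h> can build once an architecture supplies the fetch variant and advertises it with a matching preprocessor define. The types and the #ifdef structure here are simplified for illustration and are not the exact generic header code:

/*
 * Sketch only: when the architecture defines atomic64_fetch_add_unless()
 * (and the matching preprocessor symbol), the generic header can derive
 * the boolean-returning helper from it roughly like this.
 */
#ifdef atomic64_fetch_add_unless
static inline bool atomic64_add_unless(atomic64_t *v, long long a, long long u)
{
	/* The add happened iff the old value was not the "unless" value. */
	return atomic64_fetch_add_unless(v, a, u) != u;
}
#endif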

No functional change is intended as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
---
 arch/alpha/include/asm/atomic.h | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

-- 
2.11.0

Comments

Will Deacon June 18, 2018, 3:54 p.m. UTC | #1
On Mon, Jun 18, 2018 at 11:19:10AM +0100, Mark Rutland wrote:
> As a step towards unifying the atomic/atomic64/atomic_long APIs, this
> patch converts the arch/alpha implementation of atomic64_add_unless() into
> an implementation of atomic64_fetch_add_unless().
> 
> A wrapper in <linux/atomic.h> will build atomic_add_unless() atop of
> this, provided it is given a preprocessor definition.
> 
> No functional change is intended as a result of this patch.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
> Cc: Matt Turner <mattst88@gmail.com>
> ---
>  arch/alpha/include/asm/atomic.h | 23 ++++++++++++-----------
>  1 file changed, 12 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
> index 4a800a3424a3..dcb7bbeeae02 100644
> --- a/arch/alpha/include/asm/atomic.h
> +++ b/arch/alpha/include/asm/atomic.h
> @@ -238,35 +238,36 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
>  #define atomic_fetch_add_unless atomic_fetch_add_unless
>  
>  /**
> - * atomic64_add_unless - add unless the number is a given value
> + * atomic64_fetch_add_unless - add unless the number is a given value
>   * @v: pointer of type atomic64_t
>   * @a: the amount to add to v...
>   * @u: ...unless v is equal to u.
>   *
>   * Atomically adds @a to @v, so long as it was not @u.
> - * Returns true iff @v was not @u.
> + * Returns the old value of @v.
>   */
> -static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
> +static __inline__ int atomic64_fetch_add_unless(atomic64_t *v, long a, long u)

Don't you want a 64-bit return type (e.g. long) here?

Will
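
In other words, the fix being requested would presumably only widen the return type so the 64-bit old value is not truncated; a sketch of what the corrected prototype might look like (the actual updated signature lands in a later revision, which is not shown on this page):

/* Sketch: return long (64 bits on alpha) instead of int, so the old value
 * loaded by ldq_l is not truncated on its way back to the caller. */
static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u);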
Mark Rutland June 18, 2018, 4:08 p.m. UTC | #2
On Mon, Jun 18, 2018 at 04:54:40PM +0100, Will Deacon wrote:
> On Mon, Jun 18, 2018 at 11:19:10AM +0100, Mark Rutland wrote:
> >  /**
> > - * atomic64_add_unless - add unless the number is a given value
> > + * atomic64_fetch_add_unless - add unless the number is a given value
> >   * @v: pointer of type atomic64_t
> >   * @a: the amount to add to v...
> >   * @u: ...unless v is equal to u.
> >   *
> >   * Atomically adds @a to @v, so long as it was not @u.
> > - * Returns true iff @v was not @u.
> > + * Returns the old value of @v.
> >   */
> > -static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
> > +static __inline__ int atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
> 

Whoops; yes.

From a scan of the series, I messed that up in the instrumentation, too,
but the rest seems fine.

I'll fix those up and push out an updated branch.

Thanks,
Mark.
Ingo Molnar June 21, 2018, 11 a.m. UTC | #3
* Mark Rutland <mark.rutland@arm.com> wrote:

> On Mon, Jun 18, 2018 at 04:54:40PM +0100, Will Deacon wrote:
> > On Mon, Jun 18, 2018 at 11:19:10AM +0100, Mark Rutland wrote:
> > >  /**
> > > - * atomic64_add_unless - add unless the number is a given value
> > > + * atomic64_fetch_add_unless - add unless the number is a given value
> > >   * @v: pointer of type atomic64_t
> > >   * @a: the amount to add to v...
> > >   * @u: ...unless v is equal to u.
> > >   *
> > >   * Atomically adds @a to @v, so long as it was not @u.
> > > - * Returns true iff @v was not @u.
> > > + * Returns the old value of @v.
> > >   */
> > > -static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
> > > +static __inline__ int atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
> > 
> 
> Whoops; yes.
> 
> From a scan of the series, I messed that up in the instrumentation, too,
> but the rest seems fine.
> 
> I'll fix those up and push out an updated branch.


Please send out an updated series via email as well once it has all settled down.

Thanks!

	Ingo

Patch

diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index 4a800a3424a3..dcb7bbeeae02 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -238,35 +238,36 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 #define atomic_fetch_add_unless atomic_fetch_add_unless
 
 /**
- * atomic64_add_unless - add unless the number is a given value
+ * atomic64_fetch_add_unless - add unless the number is a given value
  * @v: pointer of type atomic64_t
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
  *
  * Atomically adds @a to @v, so long as it was not @u.
- * Returns true iff @v was not @u.
+ * Returns the old value of @v.
  */
-static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
+static __inline__ int atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
 {
-	long c, tmp;
+	long c, new, old;
 	smp_mb();
 	__asm__ __volatile__(
-	"1:	ldq_l	%[tmp],%[mem]\n"
-	"	cmpeq	%[tmp],%[u],%[c]\n"
-	"	addq	%[tmp],%[a],%[tmp]\n"
+	"1:	ldq_l	%[old],%[mem]\n"
+	"	cmpeq	%[old],%[u],%[c]\n"
+	"	addq	%[old],%[a],%[new]\n"
 	"	bne	%[c],2f\n"
-	"	stq_c	%[tmp],%[mem]\n"
-	"	beq	%[tmp],3f\n"
+	"	stq_c	%[new],%[mem]\n"
+	"	beq	%[new],3f\n"
 	"2:\n"
 	".subsection 2\n"
 	"3:	br	1b\n"
 	".previous"
-	: [tmp] "=&r"(tmp), [c] "=&r"(c)
+	: [old] "=&r"(old), [new] "=&r"(new), [c] "=&r"(c)
 	: [mem] "m"(*v), [a] "rI"(a), [u] "rI"(u)
 	: "memory");
 	smp_mb();
-	return !c;
+	return old;
 }
+#define atomic64_fetch_add_unless atomic64_fetch_add_unless
 
 /*
  * atomic64_dec_if_positive - decrement by 1 if old value positive
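
As a usage note, the value-returning form lets a caller both test and use the prior value in a single call; a hypothetical example of the idiom (illustration only, not code from this series):

/* Hypothetical caller: take a reference unless the count has already hit zero.
 * atomic64_fetch_add_unless() returns the old value, and the add only took
 * effect when that old value differed from the "unless" argument (0 here). */
static inline bool example_get_unless_zero(atomic64_t *refcount)
{
	return atomic64_fetch_add_unless(refcount, 1, 0) != 0;
}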