[v3] lowlevellock comments

Message ID 1414066371-394-1-git-send-email-bernie.ogden@linaro.org
State New

Commit Message

Bernie Ogden Oct. 23, 2014, 12:12 p.m.
Here's a v3. All comments have been addressed; replies to the comments I have
something to say about are inline below.

On 6 October 2014 13:26, Torvald Riegel <triegel@redhat.com> wrote:
>> To make the >1 state easier to refer to in comments I've gone for:
>> >1 - locked-with-waiters, there _may_ be waiters
>>
>> If I'm right then strictly >1 means locked-possibly-with-waiters, but that's
>> really getting too much of a mouthful (as 'potentially contended' would have
>> been in the earlier terminology).
>
> But "locked possibly with waiters" is the right description, so I'd
> rather use this.

OK, changed throughout to 'acquired, possibly with waiters' (based on the
suggested text below).

>> > Please use locked, or unlocked terms everywhere.
>>
>> I've gone for unlocked, locked and locked with waiters.
>
> My vote would be for "acquired" / "not acquired", as in "the lock has
> been acquired".

Switched to acquired/not acquired, as no-one defended locked/unlocked.

>> +   thread ID while the clone is running and is reset to zero
>> +   afterwards.       */
>
> Could you add who resets it?

Done (it's the kernel).
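For readers unfamiliar with the mechanism being documented: with CLONE_CHILD_CLEARTID, the kernel zeroes the child's TID word and wakes any futex waiters on it when the child exits. A hypothetical sketch of the waiting side (not glibc's actual code; futex_wait here is a do-nothing stand-in for the real syscall):

```c
#include <stdatomic.h>

/* Stand-in for the futex(FUTEX_WAIT) syscall; the real call blocks only
   while *addr still equals val.  */
static void futex_wait(atomic_int *addr, int val) { (void) addr; (void) val; }

/* Wait until the kernel clears *tidp to zero, i.e. the child has exited.  */
static void wait_tid(atomic_int *tidp)
{
  int tid;
  while ((tid = atomic_load (tidp)) != 0)
    /* The kernel wakes this futex at the same time it zeroes *tidp.  */
    futex_wait (tidp, tid);
}
```

Passing the observed TID to futex_wait closes the race where the child exits between the load and the wait: the kernel-side wait only blocks if the word still holds that value.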

>>
>> +/* Uses futexes to avoid unnecessary calls into the kernel. Likely paths are
>
> I'd say it uses futexes to block using the kernel.  That's the purpose
> of the futex call, so that we don't need to just spin-wait.
<snip>
>> +   the cases with no waiters and hence do not involve syscalls. Unlikely paths
>> +   usually will involve syscalls to wait or to wake.
>
> The "likely" path is not necessarily likely, but we try to make
> uncontended mutexes fast, so that's why we avoid the syscalls in these
> cases. I would suggest:
>
> Low-level locks use a combination of atomic operations (to acquire and
> release lock ownership) and futex operations (to block until the state
> of a lock changes).  A lock can be in one of three states:
> 0:  not acquired,
> 1:  acquired; no other threads are blocked or about to block for changes
> to the lock state,
> >1:  acquired, other threads are potentially blocked or about to block
> for changes to the lock state.
>
> We expect that the common case is an uncontended lock, so we just need
> to transition the lock between states 0 and 1; releasing the lock does
> not need to wake any other blocked threads.  If the lock is contended
> and a thread decides to block using a futex operation, then this thread
> needs to first change the state to >1; if this state is observed during
> lock release, the releasing thread will wake one of the potentially
> blocked threads.

Used your text, tweaked slightly to make the difference between 1 and >1 even
more explicit. Also reworded the >1 description to a form that makes a bit
more sense to the pedant in me.
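To make the 0/1/>1 protocol above concrete, here is a hypothetical C11 sketch (not the glibc implementation; futex_wait and futex_wake are do-nothing stand-ins for the syscalls):

```c
#include <stdatomic.h>

/* Stand-ins for the futex syscalls.  The real wait blocks while *f == val;
   the real wake unblocks one waiter.  */
static void futex_wait(atomic_int *f, int val) { (void) f; (void) val; }
static void futex_wake(atomic_int *f) { (void) f; }

static void lll_lock_sketch(atomic_int *futex)
{
  int zero = 0;
  /* Fast path: uncontended transition 0 -> 1.  */
  if (atomic_compare_exchange_strong (futex, &zero, 1))
    return;
  /* Slow path: force the state to 2 ('acquired, possibly with waiters')
     before blocking, so the releaser knows it must wake someone.  */
  while (atomic_exchange (futex, 2) != 0)
    futex_wait (futex, 2);
}

static void lll_unlock_sketch(atomic_int *futex)
{
  /* If the old state was >1, another thread may be blocked: wake one.  */
  if (atomic_exchange (futex, 0) > 1)
    futex_wake (futex);
}
```

Note that a thread acquiring via the slow path leaves the state at 2 even if it turns out to be the only waiter; the price is one possibly unnecessary wake on release, which is what the "possibly with waiters" wording captures.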

>> +   cond_locks are never in the locked state, they are either unlocked or
>> +   locked with waiters.
>
> Change that to the following?:
>
> Condition variables contain an optimization for broadcasts that requeues
> waiting threads on a lock's futex.  Therefore, there is a special
> variant of the locks (whose name contains "cond") that makes sure to
> always set the lock state to >1 and not just 1.

And just used this as is.

>> +/* If LOCK is 0 (unlocked), set to 1 (locked) and return 0 to indicate the
>> +   lock was acquired. Otherwise leave the lock unchanged and return non-zero
>> +   to indicate the lock was not acquired.  */
>
> If you use acquired / not acquired instead of locked / unlocked, this
> gets simpler.

In that I can be a bit less verbose about the meaning of returns? Done.
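Concretely, the two trylock variants discussed here differ only in the state they install on success (a sketch under the same assumptions as above, not the actual glibc macros):

```c
#include <stdatomic.h>

/* Return 0 if the lock was acquired, non-zero otherwise.  */
static int lll_trylock_sketch(atomic_int *lock)
{
  int expected = 0;
  /* 0 -> 1: acquired with no waiters.  */
  return !atomic_compare_exchange_strong (lock, &expected, 1);
}

/* The "cond" variant goes straight to 'acquired, possibly with waiters',
   so a later release always performs the futex wake.  */
static int lll_cond_trylock_sketch(atomic_int *lock)
{
  int expected = 0;
  /* 0 -> 2: acquired, possibly with waiters.  */
  return !atomic_compare_exchange_strong (lock, &expected, 2);
}
```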

>> +   2) The previous owner of the lock dies. FUTEX will be
>> +      ID | FUTEX_OWNER_DIED. FUTEX_WAITERS may also be set. Return FUTEX to
>> +      indicate that the lock is not acquired.  */
>
> I'd say "this value" instead of the second occurrence of "FUTEX" to
> highlight that we do NOT read FUTEX twice.

Fixed - also put the FUTEX_WAITERS part in brackets and cleared up some
ambiguity.
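For reference, the robust-lock futex word packs the owner TID and the flag bits together; the flag values below are the real ones from the Linux <linux/futex.h> ABI, defined locally here so the sketch is self-contained:

```c
#include <stdbool.h>
#include <stdint.h>

/* Robust-futex flag bits, as defined by the Linux kernel ABI.  */
#define FUTEX_WAITERS    0x80000000u
#define FUTEX_OWNER_DIED 0x40000000u
#define FUTEX_TID_MASK   0x3fffffffu

/* Decode the owner TID from a robust-lock futex word.  */
static uint32_t robust_owner(uint32_t val) { return val & FUTEX_TID_MASK; }

/* True if other threads may be blocked on this lock.  */
static bool robust_has_waiters(uint32_t val) { return (val & FUTEX_WAITERS) != 0; }

/* True if the previous owner exited without releasing the lock.  */
static bool robust_owner_died(uint32_t val) { return (val & FUTEX_OWNER_DIED) != 0; }
```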


2014-10-23  Bernard Ogden  <bernie.ogden@linaro.org>

        * nptl/lowlevellock.c: Add comments.
        * nptl/lowlevelrobustlock.c: Likewise.
        * sysdeps/nptl/lowlevellock.h: Likewise.

---

Comments

Bernie Ogden Nov. 10, 2014, 9:25 a.m. | #1
Ping

On 23 October 2014 13:12, Bernard Ogden <bernie.ogden@linaro.org> wrote:
> ---
>
> diff --git a/nptl/lowlevellock.c b/nptl/lowlevellock.c
> index e198af7..99cca8e 100644
> --- a/nptl/lowlevellock.c
> +++ b/nptl/lowlevellock.c
> @@ -27,10 +27,10 @@ void
>  __lll_lock_wait_private (int *futex)
>  {
>    if (*futex == 2)
> -    lll_futex_wait (futex, 2, LLL_PRIVATE);
> +    lll_futex_wait (futex, 2, LLL_PRIVATE); /* Wait if *futex == 2.  */
>
>    while (atomic_exchange_acq (futex, 2) != 0)
> -    lll_futex_wait (futex, 2, LLL_PRIVATE);
> +    lll_futex_wait (futex, 2, LLL_PRIVATE); /* Wait if *futex == 2.  */
>  }
>
>
> @@ -40,10 +40,10 @@ void
>  __lll_lock_wait (int *futex, int private)
>  {
>    if (*futex == 2)
> -    lll_futex_wait (futex, 2, private);
> +    lll_futex_wait (futex, 2, private); /* Wait if *futex == 2.  */
>
>    while (atomic_exchange_acq (futex, 2) != 0)
> -    lll_futex_wait (futex, 2, private);
> +    lll_futex_wait (futex, 2, private); /* Wait if *futex == 2.  */
>  }
>
>
> @@ -75,7 +75,7 @@ __lll_timedlock_wait (int *futex, const struct timespec *abstime, int private)
>        if (rt.tv_sec < 0)
>         return ETIMEDOUT;
>
> -      /* Wait.  */
> +      /* If *futex == 2, wait until woken or timeout.  */
>        lll_futex_timed_wait (futex, 2, &rt, private);
>      }
>
> @@ -83,6 +83,11 @@ __lll_timedlock_wait (int *futex, const struct timespec *abstime, int private)
>  }
>
>
> +/* The kernel notifies a process which uses CLONE_CHILD_CLEARTID via futex
> +   wake-up when the clone terminates.  The memory location contains the
> +   thread ID while the clone is running and is reset to zero by the kernel
> +   afterwards.  The kernel up to version 3.16.3 does not use the private futex
> +   operations for futex wake-up when the clone terminates.  */
>  int
>  __lll_timedwait_tid (int *tidp, const struct timespec *abstime)
>  {
> @@ -113,8 +118,10 @@ __lll_timedwait_tid (int *tidp, const struct timespec *abstime)
>        if (rt.tv_sec < 0)
>         return ETIMEDOUT;
>
> -      /* Wait until thread terminates.  The kernel so far does not use
> -        the private futex operations for this.  */
> +      /* If *tidp == tid, wait until thread terminates or the wait times out.
> +         The kernel up to version 3.16.3 does not use the private futex
> +         operations for futex wake-up when the clone terminates.
> +      */
>        if (lll_futex_timed_wait (tidp, tid, &rt, LLL_SHARED) == -ETIMEDOUT)
>         return ETIMEDOUT;
>      }
> diff --git a/nptl/lowlevelrobustlock.c b/nptl/lowlevelrobustlock.c
> index 3525807..2f4f314 100644
> --- a/nptl/lowlevelrobustlock.c
> +++ b/nptl/lowlevelrobustlock.c
> @@ -36,14 +36,17 @@ __lll_robust_lock_wait (int *futex, int private)
>
>    do
>      {
> +      /* If the owner died, return the present value of the futex.  */
>        if (__glibc_unlikely (oldval & FUTEX_OWNER_DIED))
>         return oldval;
>
> +      /* Attempt to put lock into state 'acquired, possibly with waiters'.  */
>        int newval = oldval | FUTEX_WAITERS;
>        if (oldval != newval
>           && atomic_compare_and_exchange_bool_acq (futex, newval, oldval))
>         continue;
>
> +      /* If *futex == 2, wait until woken.  */
>        lll_futex_wait (futex, newval, private);
>
>      try:
> @@ -100,15 +103,17 @@ __lll_robust_timedlock_wait (int *futex, const struct timespec *abstime,
>         return ETIMEDOUT;
>  #endif
>
> -      /* Wait.  */
> +      /* If the owner died, return the present value of the futex.  */
>        if (__glibc_unlikely (oldval & FUTEX_OWNER_DIED))
>         return oldval;
>
> +      /* Attempt to put lock into state 'acquired, possibly with waiters'.  */
>        int newval = oldval | FUTEX_WAITERS;
>        if (oldval != newval
>           && atomic_compare_and_exchange_bool_acq (futex, newval, oldval))
>         continue;
>
> +      /* If *futex == 2, wait until woken or timeout.  */
>  #if (!defined __ASSUME_FUTEX_CLOCK_REALTIME \
>       || !defined lll_futex_timed_wait_bitset)
>        lll_futex_timed_wait (futex, newval, &rt, private);
> diff --git a/sysdeps/nptl/lowlevellock.h b/sysdeps/nptl/lowlevellock.h
> index 28f4ba3..8f6e270 100644
> --- a/sysdeps/nptl/lowlevellock.h
> +++ b/sysdeps/nptl/lowlevellock.h
> @@ -22,9 +22,53 @@
>  #include <atomic.h>
>  #include <lowlevellock-futex.h>
>
> +/* Low-level locks use a combination of atomic operations (to acquire and
> +   release lock ownership) and futex operations (to block until the state
> +   of a lock changes).  A lock can be in one of three states:
> +   0:  not acquired,
> +   1:  acquired with no waiters; no other threads are blocked or about to block
> +       for changes to the lock state,
> +   >1: acquired, possibly with waiters; there may be other threads blocked or
> +       about to block for changes to the lock state.
> +
> +   We expect that the common case is an uncontended lock, so we just need
> +   to transition the lock between states 0 and 1; releasing the lock does
> +   not need to wake any other blocked threads.  If the lock is contended
> +   and a thread decides to block using a futex operation, then this thread
> +   needs to first change the state to >1; if this state is observed during
> +   lock release, the releasing thread will wake one of the potentially
> +   blocked threads.
> +
> +   Much of this code takes a 'private' parameter.  This may be:
> +   LLL_PRIVATE: lock only shared within a process
> +   LLL_SHARED:  lock may be shared across processes.
> +
> +   Condition variables contain an optimization for broadcasts that requeues
> +   waiting threads on a lock's futex.  Therefore, there is a special
> +   variant of the locks (whose name contains "cond") that makes sure to
> +   always set the lock state to >1 and not just 1.
> +
> +   Robust locks set the lock to the id of the owner.  This allows detection
> +   of the case where the owner exits without releasing the lock.  Flags are
> +   OR'd with the owner id to record additional information about lock state.
> +   Therefore the states of robust locks are:
> +    0: not acquired
> +   id: acquired (by user identified by id & FUTEX_TID_MASK)
> +
> +   The following flags may be set in the robust lock value:
> +   FUTEX_WAITERS     - possibly has waiters
> +   FUTEX_OWNER_DIED  - owning user has exited without releasing the futex.  */
> +
> +
> +/* If LOCK is 0 (not acquired), set to 1 (acquired) and return 0.  Otherwise
> +   leave lock unchanged and return non-zero to indicate that the lock was not
> +   acquired.  */
>  #define lll_trylock(lock)      \
>    atomic_compare_and_exchange_bool_acq (&(lock), 1, 0)
>
> +/* If LOCK is 0 (not acquired), set to 2 (acquired, possibly with waiters) and
> +   return 0.  Otherwise leave lock unchanged and return non-zero to indicate
> +   that the lock was not acquired.  */
>  #define lll_cond_trylock(lock) \
>    atomic_compare_and_exchange_bool_acq (&(lock), 2, 0)
>
> @@ -35,6 +79,13 @@ extern int __lll_robust_lock_wait (int *futex, int private) attribute_hidden;
>  /* This is an expression rather than a statement even though its value is
>     void, so that it can be used in a comma expression or as an expression
>     that's cast to void.  */
> +/* The inner conditional compiles to a call to __lll_lock_wait_private if
> +   private is known at compile time to be LLL_PRIVATE, and to a call to
> +   __lll_lock_wait otherwise.  */
> +/* If FUTEX is 0 (not acquired), set to 1 (acquired with no waiters) and
> +   return.  Otherwise block until we acquire the lock, at which point FUTEX is
> +   2 (acquired, possibly with waiters), then return.  The lock is always
> +   acquired on return.  */
>  #define __lll_lock(futex, private)                                      \
>    ((void)                                                               \
>     ({                                                                   \
> @@ -52,6 +103,14 @@ extern int __lll_robust_lock_wait (int *futex, int private) attribute_hidden;
>    __lll_lock (&(futex), private)
>
>
> +/* If FUTEX is 0 (not acquired), set to ID (acquired with no waiters) and
> +   return 0.  Otherwise, block until either:
> +   1) We acquire the lock, at which point FUTEX will be ID | FUTEX_WAITERS
> +      (acquired, possibly with waiters), then return 0.
> +   2) The previous owner of the lock dies.  FUTEX will be the value of id as
> +      set by the previous owner, with FUTEX_OWNER_DIED set (FUTEX_WAITERS may
> +      also be set).  Return this value to indicate that the lock is not
> +      acquired.  */
>  #define __lll_robust_lock(futex, id, private)                           \
>    ({                                                                    \
>      int *__futex = (futex);                                             \
> @@ -69,6 +128,12 @@ extern int __lll_robust_lock_wait (int *futex, int private) attribute_hidden;
>  /* This is an expression rather than a statement even though its value is
>     void, so that it can be used in a comma expression or as an expression
>     that's cast to void.  */
> +/* Unconditionally set FUTEX to 2 (acquired, possibly with waiters).  condvar
> +   locks only have 'not acquired' and 'acquired, possibly with waiters' states,
> +   so there is no need to check FUTEX before setting.
> +   If FUTEX was 0 (not acquired) then return.  Otherwise, block until the lock
> +   is acquired, at which point FUTEX is 2 (acquired, possibly with waiters).
> +   The lock is always acquired on return.  */
>  #define __lll_cond_lock(futex, private)                                 \
>    ((void)                                                               \
>     ({                                                                   \
> @@ -79,6 +144,14 @@ extern int __lll_robust_lock_wait (int *futex, int private) attribute_hidden;
>  #define lll_cond_lock(futex, private) __lll_cond_lock (&(futex), private)
>
>
> +/* If FUTEX is 0 (not acquired), set to ID | FUTEX_WAITERS (acquired, possibly
> +   with waiters) and return 0.  Otherwise, block until either:
> +   1) We acquire the lock, at which point FUTEX will be ID | FUTEX_WAITERS
> +      (acquired, possibly with waiters), then return 0.
> +   2) The previous owner of the lock dies.  FUTEX will be the value of id as
> +      set by the previous owner, with FUTEX_OWNER_DIED set (FUTEX_WAITERS may
> +      also be set).  Return this value to indicate that the lock is not
> +      acquired.  */
>  #define lll_robust_cond_lock(futex, id, private)       \
>    __lll_robust_lock (&(futex), (id) | FUTEX_WAITERS, private)
>
> @@ -88,8 +161,9 @@ extern int __lll_timedlock_wait (int *futex, const struct timespec *,
>  extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
>                                         int private) attribute_hidden;
>
> -/* Take futex if it is untaken.
> -   Otherwise block until either we get the futex or abstime runs out.  */
> +
> +/* As __lll_lock, but with a timeout.  If the timeout occurs then return
> +   ETIMEDOUT.  If ABSTIME is invalid, return EINVAL.  */
>  #define __lll_timedlock(futex, abstime, private)                \
>    ({                                                            \
>      int *__futex = (futex);                                     \
> @@ -104,6 +178,8 @@ extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
>    __lll_timedlock (&(futex), abstime, private)
>
>
> +/* As __lll_robust_lock, but with a timeout.  If the timeout occurs then return
> +   ETIMEDOUT.  If ABSTIME is invalid, return EINVAL.  */
>  #define __lll_robust_timedlock(futex, abstime, id, private)             \
>    ({                                                                    \
>      int *__futex = (futex);                                             \
> @@ -121,6 +197,9 @@ extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
>  /* This is an expression rather than a statement even though its value is
>     void, so that it can be used in a comma expression or as an expression
>     that's cast to void.  */
> +/* Unconditionally set FUTEX to 0 (not acquired), releasing the lock.  If FUTEX
> +   was >1 (acquired, possibly with waiters), then wake any waiters.  The waiter
> +   who acquires the lock will set FUTEX to >1.  */
>  #define __lll_unlock(futex, private)                    \
>    ((void)                                               \
>     ({                                                   \
> @@ -136,6 +215,9 @@ extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
>  /* This is an expression rather than a statement even though its value is
>     void, so that it can be used in a comma expression or as an expression
>     that's cast to void.  */
> +/* Unconditionally set FUTEX to 0 (not acquired), releasing the lock.  If FUTEX
> +   had FUTEX_WAITERS set then wake any waiters.  The waiter who acquires the
> +   lock will set FUTEX_WAITERS.  */
>  #define __lll_robust_unlock(futex, private)             \
>    ((void)                                               \
>     ({                                                   \
> @@ -159,15 +241,12 @@ extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
>  #define LLL_LOCK_INITIALIZER           (0)
>  #define LLL_LOCK_INITIALIZER_LOCKED    (1)
>
> -/* The states of a lock are:
> -    0  -  untaken
> -    1  -  taken by one user
> -   >1  -  taken by more users */
>
>  /* The kernel notifies a process which uses CLONE_CHILD_CLEARTID via futex
> -   wakeup when the clone terminates.  The memory location contains the
> -   thread ID while the clone is running and is reset to zero
> -   afterwards. */
> +   wake-up when the clone terminates.  The memory location contains the
> +   thread ID while the clone is running and is reset to zero by the kernel
> +   afterwards.  The kernel up to version 3.16.3 does not use the private futex
> +   operations for futex wake-up when the clone terminates.  */
>  #define lll_wait_tid(tid) \
>    do {                                 \
>      __typeof (tid) __tid;              \
> @@ -178,6 +257,8 @@ extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
>  extern int __lll_timedwait_tid (int *, const struct timespec *)
>       attribute_hidden;
>
> +/* As lll_wait_tid, but with a timeout.  If the timeout occurs then return
> +   ETIMEDOUT.  If ABSTIME is invalid, return EINVAL.  */
>  #define lll_timedwait_tid(tid, abstime) \
>    ({                                                   \
>      int __res = 0;                                     \
Bernie Ogden Nov. 17, 2014, 1:56 p.m. | #2
Ping^2

>> +
>> +   The following flags may be set in the robust lock value:
>> +   FUTEX_WAITERS     - possibly has waiters
>> +   FUTEX_OWNER_DIED  - owning user has exited without releasing the futex.  */
>> +
>> +
>> +/* If LOCK is 0 (not acquired), set to 1 (acquired) and return 0.  Otherwise
>> +   leave lock unchanged and return non-zero to indicate that the lock was not
>> +   acquired.  */
>>  #define lll_trylock(lock)      \
>>    atomic_compare_and_exchange_bool_acq (&(lock), 1, 0)
>>
>> +/* If LOCK is 0 (not acquired), set to 2 (acquired, possibly with waiters) and
>> +   return 0.  Otherwise leave lock unchanged and return non-zero to indicate
>> +   that the lock was not acquired.  */
>>  #define lll_cond_trylock(lock) \
>>    atomic_compare_and_exchange_bool_acq (&(lock), 2, 0)
>>
>> @@ -35,6 +79,13 @@ extern int __lll_robust_lock_wait (int *futex, int private) attribute_hidden;
>>  /* This is an expression rather than a statement even though its value is
>>     void, so that it can be used in a comma expression or as an expression
>>     that's cast to void.  */
>> +/* The inner conditional compiles to a call to __lll_lock_wait_private if
>> +   private is known at compile time to be LLL_PRIVATE, and to a call to
>> +   __lll_lock_wait otherwise.  */
>> +/* If FUTEX is 0 (not acquired), set to 1 (acquired with no waiters) and
>> +   return.  Otherwise block until we acquire the lock, at which point FUTEX is
>> +   2 (acquired, possibly with waiters), then return.  The lock is aways
>> +   acquired on return.  */
>>  #define __lll_lock(futex, private)                                      \
>>    ((void)                                                               \
>>     ({                                                                   \
>> @@ -52,6 +103,14 @@ extern int __lll_robust_lock_wait (int *futex, int private) attribute_hidden;
>>    __lll_lock (&(futex), private)
>>
>>
>> +/* If FUTEX is 0 (not acquired), set to ID (acquired with no waiters) and
>> +   return 0.  Otherwise, block until either:
>> +   1) We acquire the lock, at which point FUTEX will be ID | FUTEX_WAITERS
>> +      (acquired, possibly with waiters), then return 0.
>> +   2) The previous owner of the lock dies.  FUTEX will be the value of id as
>> +      set by the previous owner, with FUTEX_OWNER_DIED set (FUTEX_WAITERS may
>> +      also be set).  Return this value to indicate that the lock is not
>> +      acquired.  */
>>  #define __lll_robust_lock(futex, id, private)                           \
>>    ({                                                                    \
>>      int *__futex = (futex);                                             \
>> @@ -69,6 +128,12 @@ extern int __lll_robust_lock_wait (int *futex, int private) attribute_hidden;
>>  /* This is an expression rather than a statement even though its value is
>>     void, so that it can be used in a comma expression or as an expression
>>     that's cast to void.  */
>> +/* Unconditionally set FUTEX to 2 (acquired, possibly with waiters).  condvar
>> +   locks only have 'not acquired' and 'acquired, possibly with waiters' states,
>> +   so there is no need to check FUTEX before setting.
>> +   If FUTEX was 0 (not acquired) then return.  Otherwise, block until the lock
>> +   is acquired, at which point FUTEX is 2 (acquired, possibly with waiters).
>> +   The lock is always acquired on return.  */
>>  #define __lll_cond_lock(futex, private)                                 \
>>    ((void)                                                               \
>>     ({                                                                   \
>> @@ -79,6 +144,14 @@ extern int __lll_robust_lock_wait (int *futex, int private) attribute_hidden;
>>  #define lll_cond_lock(futex, private) __lll_cond_lock (&(futex), private)
>>
>>
>> +/* If FUTEX is 0 (not acquired), set to ID | FUTEX_WAITERS (acquired, possibly
>> +   with waiters) and return 0.  Otherwise, block until either:
>> +   1) We acquire the lock, at which point FUTEX will be ID | FUTEX_WAITERS
>> +      (acquired, possibly with waiters), then return 0.
>> +   2) The previous owner of the lock dies.  FUTEX will be the value of id as
>> +      set by the previous owner, with FUTEX_OWNER_DIED set (FUTEX_WAITERS may
>> +      also be set).  Return this value to indicate that the lock is not
>> +      acquired.  */
>>  #define lll_robust_cond_lock(futex, id, private)       \
>>    __lll_robust_lock (&(futex), (id) | FUTEX_WAITERS, private)
>>
>> @@ -88,8 +161,9 @@ extern int __lll_timedlock_wait (int *futex, const struct timespec *,
>>  extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
>>                                         int private) attribute_hidden;
>>
>> -/* Take futex if it is untaken.
>> -   Otherwise block until either we get the futex or abstime runs out.  */
>> +
>> +/* As __lll_lock, but with a timeout.  If the timeout occurs then return
>> +   ETIMEDOUT.  If ABSTIME is invalid, return EINVAL.  */
>>  #define __lll_timedlock(futex, abstime, private)                \
>>    ({                                                            \
>>      int *__futex = (futex);                                     \
>> @@ -104,6 +178,8 @@ extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
>>    __lll_timedlock (&(futex), abstime, private)
>>
>>
>> +/* As __lll_robust_lock, but with a timeout.  If the timeout occurs then return
>> +   ETIMEDOUT.  If ABSTIME is invalid, return EINVAL.  */
>>  #define __lll_robust_timedlock(futex, abstime, id, private)             \
>>    ({                                                                    \
>>      int *__futex = (futex);                                             \
>> @@ -121,6 +197,9 @@ extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
>>  /* This is an expression rather than a statement even though its value is
>>     void, so that it can be used in a comma expression or as an expression
>>     that's cast to void.  */
>> +/* Unconditionally set FUTEX to 0 (not acquired), releasing the lock.  If FUTEX
>> +   was >1 (acquired, possibly with waiters), then wake any waiters.  The waiter
>> +   who acquires the lock will set FUTEX to >1.  */
>>  #define __lll_unlock(futex, private)                    \
>>    ((void)                                               \
>>     ({                                                   \
>> @@ -136,6 +215,9 @@ extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
>>  /* This is an expression rather than a statement even though its value is
>>     void, so that it can be used in a comma expression or as an expression
>>     that's cast to void.  */
>> +/* Unconditionally set FUTEX to 0 (not acquired), releasing the lock.  If FUTEX
>> +   had FUTEX_WAITERS set then wake any waiters.  The waiter who acquires the
>> +   lock will set FUTEX_WAITERS.  */
>>  #define __lll_robust_unlock(futex, private)             \
>>    ((void)                                               \
>>     ({                                                   \
>> @@ -159,15 +241,12 @@ extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
>>  #define LLL_LOCK_INITIALIZER           (0)
>>  #define LLL_LOCK_INITIALIZER_LOCKED    (1)
>>
>> -/* The states of a lock are:
>> -    0  -  untaken
>> -    1  -  taken by one user
>> -   >1  -  taken by more users */
>>
>>  /* The kernel notifies a process which uses CLONE_CHILD_CLEARTID via futex
>> -   wakeup when the clone terminates.  The memory location contains the
>> -   thread ID while the clone is running and is reset to zero
>> -   afterwards. */
>> +   wake-up when the clone terminates.  The memory location contains the
>> +   thread ID while the clone is running and is reset to zero by the kernel
>> +   afterwards.  The kernel up to version 3.16.3 does not use the private futex
>> +   operations for futex wake-up when the clone terminates.  */
>>  #define lll_wait_tid(tid) \
>>    do {                                 \
>>      __typeof (tid) __tid;              \
>> @@ -178,6 +257,8 @@ extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
>>  extern int __lll_timedwait_tid (int *, const struct timespec *)
>>       attribute_hidden;
>>
>> +/* As lll_wait_tid, but with a timeout.  If the timeout occurs then return
>> +   ETIMEDOUT.  If ABSTIME is invalid, return EINVAL.  */
>>  #define lll_timedwait_tid(tid, abstime) \
>>    ({                                                   \
>>      int __res = 0;                                     \

Patch

diff --git a/nptl/lowlevellock.c b/nptl/lowlevellock.c
index e198af7..99cca8e 100644
--- a/nptl/lowlevellock.c
+++ b/nptl/lowlevellock.c
@@ -27,10 +27,10 @@  void
 __lll_lock_wait_private (int *futex)
 {
   if (*futex == 2)
-    lll_futex_wait (futex, 2, LLL_PRIVATE);
+    lll_futex_wait (futex, 2, LLL_PRIVATE); /* Wait if *futex == 2.  */
 
   while (atomic_exchange_acq (futex, 2) != 0)
-    lll_futex_wait (futex, 2, LLL_PRIVATE);
+    lll_futex_wait (futex, 2, LLL_PRIVATE); /* Wait if *futex == 2.  */
 }
 
 
@@ -40,10 +40,10 @@  void
 __lll_lock_wait (int *futex, int private)
 {
   if (*futex == 2)
-    lll_futex_wait (futex, 2, private);
+    lll_futex_wait (futex, 2, private); /* Wait if *futex == 2.  */
 
   while (atomic_exchange_acq (futex, 2) != 0)
-    lll_futex_wait (futex, 2, private);
+    lll_futex_wait (futex, 2, private); /* Wait if *futex == 2.  */
 }
 
 
@@ -75,7 +75,7 @@  __lll_timedlock_wait (int *futex, const struct timespec *abstime, int private)
       if (rt.tv_sec < 0)
 	return ETIMEDOUT;
 
-      /* Wait.  */
+      /* If *futex == 2, wait until woken or timeout.  */
       lll_futex_timed_wait (futex, 2, &rt, private);
     }
 
@@ -83,6 +83,11 @@  __lll_timedlock_wait (int *futex, const struct timespec *abstime, int private)
 }
 
 
+/* The kernel notifies a process which uses CLONE_CHILD_CLEARTID via futex
+   wake-up when the clone terminates.  The memory location contains the
+   thread ID while the clone is running and is reset to zero by the kernel
+   afterwards.  The kernel up to version 3.16.3 does not use the private futex
+   operations for futex wake-up when the clone terminates.  */
 int
 __lll_timedwait_tid (int *tidp, const struct timespec *abstime)
 {
@@ -113,8 +118,10 @@  __lll_timedwait_tid (int *tidp, const struct timespec *abstime)
       if (rt.tv_sec < 0)
 	return ETIMEDOUT;
 
-      /* Wait until thread terminates.  The kernel so far does not use
-	 the private futex operations for this.  */
+      /* If *tidp == tid, wait until thread terminates or the wait times out.
+         The kernel up to version 3.16.3 does not use the private futex
+         operations for futex wake-up when the clone terminates.  */
       if (lll_futex_timed_wait (tidp, tid, &rt, LLL_SHARED) == -ETIMEDOUT)
 	return ETIMEDOUT;
     }
diff --git a/nptl/lowlevelrobustlock.c b/nptl/lowlevelrobustlock.c
index 3525807..2f4f314 100644
--- a/nptl/lowlevelrobustlock.c
+++ b/nptl/lowlevelrobustlock.c
@@ -36,14 +36,17 @@  __lll_robust_lock_wait (int *futex, int private)
 
   do
     {
+      /* If the owner died, return the present value of the futex.  */
       if (__glibc_unlikely (oldval & FUTEX_OWNER_DIED))
 	return oldval;
 
+      /* Attempt to put lock into state 'acquired, possibly with waiters'.  */
       int newval = oldval | FUTEX_WAITERS;
       if (oldval != newval
 	  && atomic_compare_and_exchange_bool_acq (futex, newval, oldval))
 	continue;
 
+      /* If *futex == newval, wait until woken.  */
       lll_futex_wait (futex, newval, private);
 
     try:
@@ -100,15 +103,17 @@  __lll_robust_timedlock_wait (int *futex, const struct timespec *abstime,
 	return ETIMEDOUT;
 #endif
 
-      /* Wait.  */
+      /* If the owner died, return the present value of the futex.  */
       if (__glibc_unlikely (oldval & FUTEX_OWNER_DIED))
 	return oldval;
 
+      /* Attempt to put lock into state 'acquired, possibly with waiters'.  */
       int newval = oldval | FUTEX_WAITERS;
       if (oldval != newval
 	  && atomic_compare_and_exchange_bool_acq (futex, newval, oldval))
 	continue;
 
+      /* If *futex == newval, wait until woken or timeout.  */
 #if (!defined __ASSUME_FUTEX_CLOCK_REALTIME \
      || !defined lll_futex_timed_wait_bitset)
       lll_futex_timed_wait (futex, newval, &rt, private);
diff --git a/sysdeps/nptl/lowlevellock.h b/sysdeps/nptl/lowlevellock.h
index 28f4ba3..8f6e270 100644
--- a/sysdeps/nptl/lowlevellock.h
+++ b/sysdeps/nptl/lowlevellock.h
@@ -22,9 +22,53 @@ 
 #include <atomic.h>
 #include <lowlevellock-futex.h>
 
+/* Low-level locks use a combination of atomic operations (to acquire and
+   release lock ownership) and futex operations (to block until the state
+   of a lock changes).  A lock can be in one of three states:
+   0:  not acquired,
+   1:  acquired with no waiters; no other threads are blocked or about to block
+       for changes to the lock state,
+   >1: acquired, possibly with waiters; there may be other threads blocked or
+       about to block for changes to the lock state.
+
+   We expect that the common case is an uncontended lock, so we just need
+   to transition the lock between states 0 and 1; releasing the lock does
+   not need to wake any other blocked threads.  If the lock is contended
+   and a thread decides to block using a futex operation, then this thread
+   needs to first change the state to >1; if this state is observed during
+   lock release, the releasing thread will wake one of the potentially
+   blocked threads.
+
+   Much of this code takes a 'private' parameter.  This may be:
+   LLL_PRIVATE: lock only shared within a process
+   LLL_SHARED:  lock may be shared across processes.
+
+   Condition variables contain an optimization for broadcasts that requeues
+   waiting threads on a lock's futex.  Therefore, there is a special
+   variant of the locks (whose name contains "cond") that makes sure to
+   always set the lock state to >1 and not just 1.
+
+   Robust locks set the lock to the id of the owner.  This allows detection
+   of the case where the owner exits without releasing the lock.  Flags are
+   OR'd with the owner id to record additional information about lock state.
+   Therefore the states of robust locks are:
+    0: not acquired
+   id: acquired (by user identified by id & FUTEX_TID_MASK)
+
+   The following flags may be set in the robust lock value:
+   FUTEX_WAITERS     - possibly has waiters
+   FUTEX_OWNER_DIED  - owning user has exited without releasing the futex.  */
+
+
+/* If LOCK is 0 (not acquired), set to 1 (acquired with no waiters) and
+   return 0.  Otherwise leave the lock unchanged and return non-zero to
+   indicate that the lock was not acquired.  */
 #define lll_trylock(lock)	\
   atomic_compare_and_exchange_bool_acq (&(lock), 1, 0)
 
+/* If LOCK is 0 (not acquired), set to 2 (acquired, possibly with waiters) and
+   return 0.  Otherwise leave lock unchanged and return non-zero to indicate
+   that the lock was not acquired.  */
 #define lll_cond_trylock(lock)	\
   atomic_compare_and_exchange_bool_acq (&(lock), 2, 0)
 
@@ -35,6 +79,13 @@  extern int __lll_robust_lock_wait (int *futex, int private) attribute_hidden;
 /* This is an expression rather than a statement even though its value is
    void, so that it can be used in a comma expression or as an expression
    that's cast to void.  */
+/* The inner conditional compiles to a call to __lll_lock_wait_private if
+   private is known at compile time to be LLL_PRIVATE, and to a call to
+   __lll_lock_wait otherwise.  */
+/* If FUTEX is 0 (not acquired), set to 1 (acquired with no waiters) and
+   return.  Otherwise block until we acquire the lock, at which point FUTEX is
+   2 (acquired, possibly with waiters), then return.  The lock is always
+   acquired on return.  */
 #define __lll_lock(futex, private)                                      \
   ((void)                                                               \
    ({                                                                   \
@@ -52,6 +103,14 @@  extern int __lll_robust_lock_wait (int *futex, int private) attribute_hidden;
   __lll_lock (&(futex), private)
 
 
+/* If FUTEX is 0 (not acquired), set to ID (acquired with no waiters) and
+   return 0.  Otherwise, block until either:
+   1) We acquire the lock, at which point FUTEX will be ID | FUTEX_WAITERS
+      (acquired, possibly with waiters), then return 0.
+   2) The previous owner of the lock dies.  FUTEX will be the value of id as
+      set by the previous owner, with FUTEX_OWNER_DIED set (FUTEX_WAITERS may
+      also be set).  Return this value to indicate that the lock is not
+      acquired.  */
 #define __lll_robust_lock(futex, id, private)                           \
   ({                                                                    \
     int *__futex = (futex);                                             \
@@ -69,6 +128,12 @@  extern int __lll_robust_lock_wait (int *futex, int private) attribute_hidden;
 /* This is an expression rather than a statement even though its value is
    void, so that it can be used in a comma expression or as an expression
    that's cast to void.  */
+/* Unconditionally set FUTEX to 2 (acquired, possibly with waiters).  condvar
+   locks only have 'not acquired' and 'acquired, possibly with waiters' states,
+   so there is no need to check FUTEX before setting.
+   If FUTEX was 0 (not acquired) then return.  Otherwise, block until the lock
+   is acquired, at which point FUTEX is 2 (acquired, possibly with waiters).
+   The lock is always acquired on return.  */
 #define __lll_cond_lock(futex, private)                                 \
   ((void)                                                               \
    ({                                                                   \
@@ -79,6 +144,14 @@  extern int __lll_robust_lock_wait (int *futex, int private) attribute_hidden;
 #define lll_cond_lock(futex, private) __lll_cond_lock (&(futex), private)
 
 
+/* If FUTEX is 0 (not acquired), set to ID | FUTEX_WAITERS (acquired, possibly
+   with waiters) and return 0.  Otherwise, block until either:
+   1) We acquire the lock, at which point FUTEX will be ID | FUTEX_WAITERS
+      (acquired, possibly with waiters), then return 0.
+   2) The previous owner of the lock dies.  FUTEX will be the value of id as
+      set by the previous owner, with FUTEX_OWNER_DIED set (FUTEX_WAITERS may
+      also be set).  Return this value to indicate that the lock is not
+      acquired.  */
 #define lll_robust_cond_lock(futex, id, private)	\
   __lll_robust_lock (&(futex), (id) | FUTEX_WAITERS, private)
 
@@ -88,8 +161,9 @@  extern int __lll_timedlock_wait (int *futex, const struct timespec *,
 extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
 					int private) attribute_hidden;
 
-/* Take futex if it is untaken.
-   Otherwise block until either we get the futex or abstime runs out.  */
+
+/* As __lll_lock, but with a timeout.  If the timeout occurs then return
+   ETIMEDOUT.  If ABSTIME is invalid, return EINVAL.  */
 #define __lll_timedlock(futex, abstime, private)                \
   ({                                                            \
     int *__futex = (futex);                                     \
@@ -104,6 +178,8 @@  extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
   __lll_timedlock (&(futex), abstime, private)
 
 
+/* As __lll_robust_lock, but with a timeout.  If the timeout occurs then return
+   ETIMEDOUT.  If ABSTIME is invalid, return EINVAL.  */
 #define __lll_robust_timedlock(futex, abstime, id, private)             \
   ({                                                                    \
     int *__futex = (futex);                                             \
@@ -121,6 +197,9 @@  extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
 /* This is an expression rather than a statement even though its value is
    void, so that it can be used in a comma expression or as an expression
    that's cast to void.  */
+/* Unconditionally set FUTEX to 0 (not acquired), releasing the lock.  If FUTEX
+   was >1 (acquired, possibly with waiters), then wake any waiters.  The waiter
+   who acquires the lock will set FUTEX to >1.  */
 #define __lll_unlock(futex, private)                    \
   ((void)                                               \
    ({                                                   \
@@ -136,6 +215,9 @@  extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
 /* This is an expression rather than a statement even though its value is
    void, so that it can be used in a comma expression or as an expression
    that's cast to void.  */
+/* Unconditionally set FUTEX to 0 (not acquired), releasing the lock.  If FUTEX
+   had FUTEX_WAITERS set then wake any waiters.  The waiter who acquires the
+   lock will set FUTEX_WAITERS.  */
 #define __lll_robust_unlock(futex, private)             \
   ((void)                                               \
    ({                                                   \
@@ -159,15 +241,12 @@  extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
 #define LLL_LOCK_INITIALIZER		(0)
 #define LLL_LOCK_INITIALIZER_LOCKED	(1)
 
-/* The states of a lock are:
-    0  -  untaken
-    1  -  taken by one user
-   >1  -  taken by more users */
 
 /* The kernel notifies a process which uses CLONE_CHILD_CLEARTID via futex
-   wakeup when the clone terminates.  The memory location contains the
-   thread ID while the clone is running and is reset to zero
-   afterwards.	*/
+   wake-up when the clone terminates.  The memory location contains the
+   thread ID while the clone is running and is reset to zero by the kernel
+   afterwards.  The kernel up to version 3.16.3 does not use the private futex
+   operations for futex wake-up when the clone terminates.  */
 #define lll_wait_tid(tid) \
   do {					\
     __typeof (tid) __tid;		\
@@ -178,6 +257,8 @@  extern int __lll_robust_timedlock_wait (int *futex, const struct timespec *,
 extern int __lll_timedwait_tid (int *, const struct timespec *)
      attribute_hidden;
 
+/* As lll_wait_tid, but with a timeout.  If the timeout occurs then return
+   ETIMEDOUT.  If ABSTIME is invalid, return EINVAL.  */
 #define lll_timedwait_tid(tid, abstime) \
   ({							\
     int __res = 0;					\