Message ID | 1492550000-31374-1-git-send-email-adhemerval.zanella@linaro.org
---|---
State | New
Series | [1/2] nptl: Remove __ASSUME_SET_ROBUST_LIST
On 04/18/2017 11:13 PM, Adhemerval Zanella wrote:
> This patch removes the __ASSUME_SET_ROBUST_LIST usage on nptl generic
> code. The set_robust_list availability is defined by '__set_robust_list_avail'
> which is now defined regardless. Its initial value is set to -1 and
> defined to a positive value if both __NR_set_robust_list is defined
> and the syscall returns correctly.

I think we should drop support for kernels without set_robust_list. We
really need something like robust mutex support for liveness detection
of the nscd mappings.

Making POSIX robust mutexes conditionally supported is awkward, and the
fact that some architectures may lack support for them is undocumented.
Application programmers will not expect this.

Thanks,
Florian
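As background for the liveness-detection point above, this is roughly what
robust-mutex recovery looks like to an application. A minimal sketch using
only standard POSIX calls; the shared-memory placement of the mutex and most
error handling are omitted, and the helper names are illustrative, not glibc
internals:

#include <errno.h>
#include <pthread.h>

/* Sketch: initialize a process-shared robust mutex.  For process-shared
   robust mutexes, glibc reports ENOTSUP here when the kernel lacks
   set_robust_list (see the pthread_mutex_init hunk in the patch below).
   The mutex would have to live in memory shared by both processes.  */
static int
init_robust (pthread_mutex_t *m)
{
  pthread_mutexattr_t attr;
  pthread_mutexattr_init (&attr);
  pthread_mutexattr_setpshared (&attr, PTHREAD_PROCESS_SHARED);
  pthread_mutexattr_setrobust (&attr, PTHREAD_MUTEX_ROBUST);
  int ret = pthread_mutex_init (m, &attr);
  pthread_mutexattr_destroy (&attr);
  return ret;
}

/* Sketch: lock the mutex and recover it if the previous owner died while
   holding it.  This is what "liveness detection" via robust mutexes looks
   like from the application side.  */
static int
lock_robust (pthread_mutex_t *m)
{
  int ret = pthread_mutex_lock (m);
  if (ret == EOWNERDEAD)
    {
      /* The previous owner died; the protected state may be inconsistent.
	 Repair it here, then mark the mutex consistent again.  */
      pthread_mutex_consistent (m);
      ret = 0;
    }
  return ret;
}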
On 19/04/2017 02:36, Florian Weimer wrote:
> On 04/18/2017 11:13 PM, Adhemerval Zanella wrote:
>> This patch removes the __ASSUME_SET_ROBUST_LIST usage on nptl generic
>> code. The set_robust_list availability is defined by '__set_robust_list_avail'
>> which is now defined regardless. Its initial value is set to -1 and
>> defined to a positive value if both __NR_set_robust_list is defined
>> and the syscall returns correctly.
>
> I think we should drop support for kernels without set_robust_list. We
> really need something like robust mutex support for liveness detection
> of the nscd mappings. Making POSIX robust mutexes conditionally
> supported is awkward, and the fact that some architectures may lack
> support for them is undocumented. Application programmers will not
> expect this.

The problem is that kernel support for some architectures depends on the
kernel config and on the underlying hardware revision/model.

* ARM: in v3.0-rc1, arch/arm/include/asm/futex.h has:

    #if defined(CONFIG_CPU_USE_DOMAINS) && defined(CONFIG_SMP)
    /* ARM doesn't provide unprivileged exclusive memory accessors */
    #include <asm-generic/futex.h>
    #else

  and it became available unconditionally only in v3.15-rc1. This means
  that for some ARMv6 configurations futex_atomic_cmpxchg_inatomic may
  return ENOSYS.

* Microblaze has a nasty bug that was fixed only in 3.10-rc4 (f6a12a7d0b):

    microblaze: Reversed logic in futex cmpxchg

    futex_atomic_cmpxchg_inatomic exchanged if the values were unequal
    rather than equal. This caused incorrect behavior of robust futexes.

* MIPS still lacks proper support depending on the underlying hardware,
  see arch/mips/include/asm/futex.h:

    static inline int
    futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
                                  u32 oldval, u32 newval)
    {
            int ret = 0;
            u32 val;

            if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
                    return -EFAULT;

            if (cpu_has_llsc && R10000_LLSC_WAR) {
    [...]
            } else if (cpu_has_llsc) {
                    __asm__ __volatile__(
    [...]
            } else
                    return -ENOSYS;

  And the kernel still supports some configurations that lack
  'cpu_has_llsc' (although I am not sure whether glibc also supports this
  kind of CPU).

* sh4 lacks proper SMP support (although I am not sure how common that
  is); it was implemented in v4.8-rc1 (00b73d8d1b71).

* m68k uses the default include/asm-generic/futex.h, which only supports
  uniprocessors.

That's why I think the best option would be to simplify the glibc build,
avoid multiple configurations depending on the underlying kernel config,
and query robust support at runtime. It is indeed awkward that some
kernel options lack robust support; we can check with hardware
maintainers how common that is and whether it is desirable to support
such configs/chips.
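As an aside, the kind of runtime availability check discussed above can be
approximated from user space by probing one of the robust-list syscalls for
ENOSYS. The sketch below is only an illustration (glibc itself probes
set_robust_list during initialization, as the patch at the end of the thread
shows); it uses get_robust_list on the calling thread so that the robust list
glibc has already registered is left untouched:

#include <errno.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/futex.h>   /* struct robust_list_head */

/* Illustration only: return 1 if the kernel implements the robust-list
   syscalls, 0 if it reports ENOSYS.  Passing pid 0 queries the calling
   thread, which requires no extra privileges.  */
static int
kernel_has_robust_list (void)
{
  struct robust_list_head *head;
  size_t len;
  long ret = syscall (SYS_get_robust_list, 0, &head, &len);
  if (ret == -1 && errno == ENOSYS)
    return 0;
  return 1;
}

int
main (void)
{
  printf ("robust list syscalls: %s\n",
	  kernel_has_robust_list () ? "available" : "unavailable");
  return 0;
}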
On Apr 19 2017, Adhemerval Zanella <adhemerval.zanella@linaro.org> wrote:

> * m68k uses the default include/asm-generic/futex.h, which only supports
>   uniprocessors.

m68k has no support for SMP.

Andreas.

--
Andreas Schwab, schwab@linux-m68k.org
GPG Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."
On 19/04/2017 11:51, Andreas Schwab wrote:
> On Apr 19 2017, Adhemerval Zanella <adhemerval.zanella@linaro.org> wrote:
>
>> * m68k uses the default include/asm-generic/futex.h, which only supports
>>   uniprocessors.
>
> m68k has no support for SMP.
>
> Andreas.

Right, so m68k should be safe regarding robust support (assuming
asm-generic/futex.h works correctly).
On 04/19/2017 04:17 PM, Adhemerval Zanella wrote:
> The problem is that kernel support for some architectures depends on the
> kernel config and on the underlying hardware revision/model.

I understand that. But I have seen a lot of use of robust mutexes
lately, and I'm sure these developers aren't aware that the mutexes
aren't portable across GNU/Linux (the Linux architecture subset
supported by glibc).

I expect it's like the missing accept4 system call: you need to provide
the set_robust_list system call in the kernel if you want a working,
modern system.

Thanks,
Florian
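The accept4 analogy refers to the classic runtime-fallback pattern for a
possibly missing syscall. A hedged sketch of that pattern follows; it is
illustrative only, not glibc code, the helper name is made up, and the static
flag is deliberately kept simple (not thread-safe):

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>

/* Try the newer syscall once, remember ENOSYS, and fall back.  The patch
   under discussion applies the same "probe once, check a flag" idea to
   set_robust_list, except that robust mutexes have no fallback.  */
static int
accept_cloexec (int fd, struct sockaddr *addr, socklen_t *len)
{
  static int have_accept4 = 1;

  if (have_accept4)
    {
      int ret = accept4 (fd, addr, len, SOCK_CLOEXEC);
      if (ret >= 0 || errno != ENOSYS)
	return ret;
      have_accept4 = 0;
    }

  /* Fallback: plain accept plus a (racy) FD_CLOEXEC.  */
  int ret = accept (fd, addr, len);
  if (ret >= 0)
    fcntl (ret, F_SETFD, FD_CLOEXEC);
  return ret;
}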
On Wed, Apr 19, 2017 at 12:28 PM, Florian Weimer <fweimer@redhat.com> wrote:
> On 04/19/2017 04:17 PM, Adhemerval Zanella wrote:
>>
>> The problem is that kernel support for some architectures depends on the
>> kernel config and on the underlying hardware revision/model.
>
> I understand that. But I have seen a lot of use of robust mutexes
> lately, and I'm sure these developers aren't aware that the mutexes
> aren't portable across GNU/Linux (the Linux architecture subset
> supported by glibc).
>
> I expect it's like the missing accept4 system call: you need to provide
> the set_robust_list system call in the kernel if you want a working,
> modern system.

But my point is that with the current minimum supported kernel version
for some architectures we can't simply assume set_robust_list support,
and even bumping the minimum kernel version for some architectures does
not solve the issue (on MIPS, for instance). That's why I still think
adding a runtime check is required and slightly better than just
disabling robust mutexes for architectures that might still have
support.

I think that in the future, when we raise the minimum supported kernel
version, we might rework this and require robust support regardless for
such architectures.

--
“One thing I have learned in a long life: that all our science, measured
against reality, is primitive and childlike -- and yet it is the most
precious thing we have.” ― Albert Einstein
On 04/19/2017 07:38 PM, Adhemerval Zanella wrote:
> On Wed, Apr 19, 2017 at 12:28 PM, Florian Weimer <fweimer@redhat.com> wrote:
>> On 04/19/2017 04:17 PM, Adhemerval Zanella wrote:
>>>
>>> The problem is that kernel support for some architectures depends on the
>>> kernel config and on the underlying hardware revision/model.
>>
>> I understand that. But I have seen a lot of use of robust mutexes
>> lately, and I'm sure these developers aren't aware that the mutexes
>> aren't portable across GNU/Linux (the Linux architecture subset
>> supported by glibc).
>>
>> I expect it's like the missing accept4 system call: you need to provide
>> the set_robust_list system call in the kernel if you want a working,
>> modern system.
>
> But my point is that with the current minimum supported kernel version
> for some architectures we can't simply assume set_robust_list support,
> and even bumping the minimum kernel version for some architectures does
> not solve the issue (on MIPS, for instance).

Hmm, maybe you are right, and we have to keep things this way for now.

But I don't like that your patch reintroduces the conditional code. You
could use

  #define __set_robust_list_avail 1

for the __ASSUME_SET_ROBUST_LIST case to reduce clutter, and let GCC do
the rest.

Thanks,
Florian
On 19/04/2017 15:09, Florian Weimer wrote:
> On 04/19/2017 07:38 PM, Adhemerval Zanella wrote:
>> On Wed, Apr 19, 2017 at 12:28 PM, Florian Weimer <fweimer@redhat.com> wrote:
>>> [...]
>>
>> But my point is that with the current minimum supported kernel version
>> for some architectures we can't simply assume set_robust_list support,
>> and even bumping the minimum kernel version for some architectures does
>> not solve the issue (on MIPS, for instance).
>
> Hmm, maybe you are right, and we have to keep things this way for now.
>
> But I don't like that your patch reintroduces the conditional code. You
> could use
>
>   #define __set_robust_list_avail 1
>
> for the __ASSUME_SET_ROBUST_LIST case to reduce clutter, and let GCC do
> the rest.
>
> Thanks,
> Florian

Alright, I will change it and send a new version.
On 19/04/2017 15:09, Florian Weimer wrote:
> On 04/19/2017 07:38 PM, Adhemerval Zanella wrote:
>> On Wed, Apr 19, 2017 at 12:28 PM, Florian Weimer <fweimer@redhat.com> wrote:
>>> [...]
>>
>> But my point is that with the current minimum supported kernel version
>> for some architectures we can't simply assume set_robust_list support,
>> and even bumping the minimum kernel version for some architectures does
>> not solve the issue (on MIPS, for instance).
>
> Hmm, maybe you are right, and we have to keep things this way for now.
>
> But I don't like that your patch reintroduces the conditional code. You
> could use
>
>   #define __set_robust_list_avail 1
>
> for the __ASSUME_SET_ROBUST_LIST case to reduce clutter, and let GCC do
> the rest.

Rechecking what you meant, I am not following you here: the idea is to
have __set_robust_list_avail be a runtime variable and get rid of the
__ASSUME_SET_ROBUST_LIST define. How would using a define help with
that?
On 04/20/2017 02:50 PM, Adhemerval Zanella wrote:
> On 19/04/2017 15:09, Florian Weimer wrote:
>> On 04/19/2017 07:38 PM, Adhemerval Zanella wrote:
>>> [...]
>>
>> Hmm, maybe you are right, and we have to keep things this way for now.
>>
>> But I don't like that your patch reintroduces the conditional code. You
>> could use
>>
>>   #define __set_robust_list_avail 1
>>
>> for the __ASSUME_SET_ROBUST_LIST case to reduce clutter, and let GCC do
>> the rest.
>
> Rechecking what you meant, I am not following you here: the idea is to
> have __set_robust_list_avail be a runtime variable and get rid of the
> __ASSUME_SET_ROBUST_LIST define. How would using a define help with
> that?

I suggest to keep the __ASSUME_SET_ROBUST_LIST define but concentrate
its use in a single place by making __set_robust_list_avail available
unconditionally, but as a constant for __ASSUME_SET_ROBUST_LIST. This is
the pattern we used elsewhere (e.g. for have_cloexec).

Thanks,
Florian
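Concretely, the arrangement Florian describes (mirroring the have_cloexec
pattern) could look roughly like the sketch below. This is an illustration of
the suggestion, not the actual patch; the inline helper is made up for the
example:

/* Sketch of the suggested arrangement: keep the __ASSUME_SET_ROBUST_LIST
   knob, but fold it into the availability flag so generic code has a
   single unconditional test.  */

#ifdef __ASSUME_SET_ROBUST_LIST
# define __set_robust_list_avail 1      /* compile-time constant */
#else
/* Negative until the runtime probe in nptl-init.c succeeds.  */
extern int __set_robust_list_avail;
#endif

/* Generic code then needs no #ifdefs at the use sites.  */
static inline int
robust_mutexes_supported (void)
{
  return __set_robust_list_avail >= 0;
}

With __ASSUME_SET_ROBUST_LIST defined, the test folds to a constant and GCC
eliminates the dead branch; otherwise the variable carries the result of the
runtime probe.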
On 20/04/2017 09:52, Florian Weimer wrote:
> On 04/20/2017 02:50 PM, Adhemerval Zanella wrote:
>> On 19/04/2017 15:09, Florian Weimer wrote:
>>> [...]
>>
>> Rechecking what you meant, I am not following you here: the idea is to
>> have __set_robust_list_avail be a runtime variable and get rid of the
>> __ASSUME_SET_ROBUST_LIST define. How would using a define help with
>> that?
>
> I suggest to keep the __ASSUME_SET_ROBUST_LIST define but concentrate
> its use in a single place by making __set_robust_list_avail available
> unconditionally, but as a constant for __ASSUME_SET_ROBUST_LIST. This is
> the pattern we used elsewhere (e.g. for have_cloexec).

But what I am suggesting is exactly to get rid of __ASSUME_SET_ROBUST_LIST,
to simplify the conditionals, and to have the availability determined at
runtime instead of having each architecture define whether or not it
supports set_robust_list. What you are suggesting is basically the
current behaviour.
On 04/20/2017 03:09 PM, Adhemerval Zanella wrote:
> But what I am suggesting is exactly to get rid of __ASSUME_SET_ROBUST_LIST,
> to simplify the conditionals, and to have the availability determined at
> runtime instead of having each architecture define whether or not it
> supports set_robust_list. What you are suggesting is basically the
> current behaviour.

Right, but I don't think we should do the cleanup yet when we still need
run-time checking.

Thanks,
Florian
On 20/04/2017 10:47, Florian Weimer wrote:
> On 04/20/2017 03:09 PM, Adhemerval Zanella wrote:
>
>> But what I am suggesting is exactly to get rid of __ASSUME_SET_ROBUST_LIST,
>> to simplify the conditionals, and to have the availability determined at
>> runtime instead of having each architecture define whether or not it
>> supports set_robust_list. What you are suggesting is basically the
>> current behaviour.
>
> Right, but I don't think we should do the cleanup yet when we still need
> run-time checking.

But it does not remove the run-time checking; rather, it makes the
run-time check the default. The idea is to continue providing the
run-time capability while simplifying the code by removing the
compile-time defines.
diff --git a/nptl/nptl-init.c b/nptl/nptl-init.c
index 2921607..c86b35e 100644
--- a/nptl/nptl-init.c
+++ b/nptl/nptl-init.c
@@ -48,14 +48,8 @@ int *__libc_multiple_threads_ptr attribute_hidden;
 size_t __static_tls_size;
 size_t __static_tls_align_m1;
 
-#ifndef __ASSUME_SET_ROBUST_LIST
 /* Negative if we do not have the system call and we can use it.  */
-int __set_robust_list_avail;
-# define set_robust_list_not_avail() \
-  __set_robust_list_avail = -1
-#else
-# define set_robust_list_not_avail() do { } while (0)
-#endif
+int __set_robust_list_avail = -1;
 
 #ifndef __ASSUME_FUTEX_CLOCK_REALTIME
 /* Nonzero if we do not have FUTEX_CLOCK_REALTIME.  */
@@ -308,9 +302,9 @@ __pthread_initialize_minimal_internal (void)
       INTERNAL_SYSCALL_DECL (err);
       int res = INTERNAL_SYSCALL (set_robust_list, err, 2, &pd->robust_head,
 				  sizeof (struct robust_list_head));
-      if (INTERNAL_SYSCALL_ERROR_P (res, err))
+      if (!INTERNAL_SYSCALL_ERROR_P (res, err))
+	__set_robust_list_avail = 1;
 #endif
-      set_robust_list_not_avail ();
     }
 
 #ifdef __NR_futex
diff --git a/nptl/pthreadP.h b/nptl/pthreadP.h
index 7fc1e50..4125e7f 100644
--- a/nptl/pthreadP.h
+++ b/nptl/pthreadP.h
@@ -213,10 +213,8 @@ hidden_proto (__pthread_keys)
 /* Number of threads running.  */
 extern unsigned int __nptl_nthreads attribute_hidden;
 
-#ifndef __ASSUME_SET_ROBUST_LIST
 /* Negative if we do not have the system call and we can use it.  */
 extern int __set_robust_list_avail attribute_hidden;
-#endif
 
 /* Thread Priority Protection.  */
 extern int __sched_fifo_min_prio attribute_hidden;
diff --git a/nptl/pthread_create.c b/nptl/pthread_create.c
index d0d7414..76b0ac4 100644
--- a/nptl/pthread_create.c
+++ b/nptl/pthread_create.c
@@ -387,18 +387,16 @@ START_THREAD_DEFN
   if (__glibc_unlikely (atomic_exchange_acq (&pd->setxid_futex, 0) == -2))
     futex_wake (&pd->setxid_futex, 1, FUTEX_PRIVATE);
 
-#ifdef __NR_set_robust_list
-# ifndef __ASSUME_SET_ROBUST_LIST
   if (__set_robust_list_avail >= 0)
-# endif
     {
+#ifdef __NR_set_robust_list
       INTERNAL_SYSCALL_DECL (err);
       /* This call should never fail because the initial call in init.c
 	 succeeded.  */
       INTERNAL_SYSCALL (set_robust_list, err, 2, &pd->robust_head,
 			sizeof (struct robust_list_head));
-    }
 #endif
+    }
 
 #ifdef SIGCANCEL
   /* If the parent was running cancellation handlers while creating
@@ -508,7 +506,6 @@ START_THREAD_DEFN
      the breakpoint reports TD_THR_RUN state rather than TD_THR_ZOMBIE.  */
   atomic_bit_set (&pd->cancelhandling, EXITING_BIT);
 
-#ifndef __ASSUME_SET_ROBUST_LIST
   /* If this thread has any robust mutexes locked, handle them now.  */
 # ifdef __PTHREAD_MUTEX_HAVE_PREV
   void *robust = pd->robust_head.list;
@@ -539,7 +536,6 @@ START_THREAD_DEFN
 	}
       while (robust != (void *) &pd->robust_head);
     }
-#endif
 
   /* Mark the memory of the stack as usable to the kernel.  We free
      everything except for the space used for the TCB itself.  */
diff --git a/nptl/pthread_mutex_init.c b/nptl/pthread_mutex_init.c
index 138e144..18724e8 100644
--- a/nptl/pthread_mutex_init.c
+++ b/nptl/pthread_mutex_init.c
@@ -91,11 +91,9 @@ __pthread_mutex_init (pthread_mutex_t *mutex,
 
   if ((imutexattr->mutexkind & PTHREAD_MUTEXATTR_FLAG_ROBUST) != 0)
     {
-#ifndef __ASSUME_SET_ROBUST_LIST
       if ((imutexattr->mutexkind & PTHREAD_MUTEXATTR_FLAG_PSHARED) != 0
 	  && __set_robust_list_avail < 0)
 	return ENOTSUP;
-#endif
 
       mutex->__data.__kind |= PTHREAD_MUTEX_ROBUST_NORMAL_NP;
     }