From patchwork Thu May 9 18:42:40 2019
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 163750
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, stable@kernel.org,
    "Peter Zijlstra (Intel)", Will Deacon
Subject: [PATCH 4.19 65/66] locking/futex: Allow low-level atomic operations to return -EAGAIN
Date: Thu, 9 May 2019 20:42:40 +0200
Message-Id: <20190509181308.185416876@linuxfoundation.org>
In-Reply-To: <20190509181301.719249738@linuxfoundation.org>
References: <20190509181301.719249738@linuxfoundation.org>

From: Will Deacon

commit 6b4f4bc9cb22875f97023984a625386f0c7cc1c0 upstream.

Some futex() operations, including FUTEX_WAKE_OP, require the kernel to
perform an atomic read-modify-write of the futex word via the userspace
mapping. These operations are implemented by each architecture in
arch_futex_atomic_op_inuser() and futex_atomic_cmpxchg_inatomic(), which
are called in atomic context with the relevant hash bucket locks held.

Although these routines may return -EFAULT in response to a page fault
generated when accessing userspace, they are expected to succeed (i.e.
return 0) in all other cases. This poses a problem for architectures
that do not provide bounded forward-progress guarantees or fairness of
contended atomic operations, and can lead to starvation in some cases.

In these problematic scenarios, we must return back to the core futex
code so that we can drop the hash bucket locks and reschedule if
necessary, much like we do in the case of a page fault.

Allow architectures to return -EAGAIN from their implementations of
arch_futex_atomic_op_inuser() and futex_atomic_cmpxchg_inatomic(), which
will cause the core futex code to reschedule if necessary and return
back to the architecture code later on.
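To illustrate the contract this change introduces, here is a rough,
self-contained userspace model (not kernel code: handle_atomic_err() is a
hypothetical name, and fault_in_user_writeable()/cond_resched() are stubbed
with prints) of how the core code is now expected to dispatch on the arch
helpers' return value, mirroring the handle_err logic in the diff below:

#include <errno.h>
#include <stdio.h>

/* Stand-ins for the kernel helpers used in the patch below. */
static void fault_in_user_writeable_stub(void) { puts("fault page in, retry"); }
static void cond_resched_stub(void)            { puts("reschedule, retry"); }

/*
 * Model of the new error handling: -EFAULT means "fault the page in and
 * retry", -EAGAIN now means "back off and retry", anything else is an
 * unexpected arch bug.
 */
static int handle_atomic_err(int err)
{
	switch (err) {
	case -EFAULT:
		fault_in_user_writeable_stub();
		return 0;
	case -EAGAIN:
		cond_resched_stub();
		return 0;
	default:
		fprintf(stderr, "unexpected error %d\n", err);
		return err;
	}
}

int main(void)
{
	handle_atomic_err(-EFAULT);
	handle_atomic_err(-EAGAIN);
	return 0;
}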
Cc: <stable@kernel.org>
Acked-by: Peter Zijlstra (Intel)
Signed-off-by: Will Deacon
Signed-off-by: Greg Kroah-Hartman
---
 kernel/futex.c |  188 +++++++++++++++++++++++++++++++++++----------------------
 1 file changed, 117 insertions(+), 71 deletions(-)

--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1306,13 +1306,15 @@ static int lookup_pi_state(u32 __user *u
 
 static int lock_pi_update_atomic(u32 __user *uaddr, u32 uval, u32 newval)
 {
+	int err;
 	u32 uninitialized_var(curval);
 
 	if (unlikely(should_fail_futex(true)))
 		return -EFAULT;
 
-	if (unlikely(cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)))
-		return -EFAULT;
+	err = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
+	if (unlikely(err))
+		return err;
 
 	/* If user space value changed, let the caller retry */
 	return curval != uval ? -EAGAIN : 0;
@@ -1498,10 +1500,8 @@ static int wake_futex_pi(u32 __user *uad
 	if (unlikely(should_fail_futex(true)))
 		ret = -EFAULT;
 
-	if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) {
-		ret = -EFAULT;
-
-	} else if (curval != uval) {
+	ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
+	if (!ret && (curval != uval)) {
 		/*
 		 * If a unconditional UNLOCK_PI operation (user space did not
 		 * try the TID->0 transition) raced with a waiter setting the
@@ -1696,32 +1696,32 @@ retry_private:
 	double_lock_hb(hb1, hb2);
 	op_ret = futex_atomic_op_inuser(op, uaddr2);
 	if (unlikely(op_ret < 0)) {
-
 		double_unlock_hb(hb1, hb2);
 
-#ifndef CONFIG_MMU
-		/*
-		 * we don't get EFAULT from MMU faults if we don't have an MMU,
-		 * but we might get them from range checking
-		 */
-		ret = op_ret;
-		goto out_put_keys;
-#endif
-
-		if (unlikely(op_ret != -EFAULT)) {
+		if (!IS_ENABLED(CONFIG_MMU) ||
+		    unlikely(op_ret != -EFAULT && op_ret != -EAGAIN)) {
+			/*
+			 * we don't get EFAULT from MMU faults if we don't have
+			 * an MMU, but we might get them from range checking
+			 */
 			ret = op_ret;
 			goto out_put_keys;
 		}
 
-		ret = fault_in_user_writeable(uaddr2);
-		if (ret)
-			goto out_put_keys;
+		if (op_ret == -EFAULT) {
+			ret = fault_in_user_writeable(uaddr2);
+			if (ret)
+				goto out_put_keys;
+		}
 
-		if (!(flags & FLAGS_SHARED))
+		if (!(flags & FLAGS_SHARED)) {
+			cond_resched();
 			goto retry_private;
+		}
 
 		put_futex_key(&key2);
 		put_futex_key(&key1);
+		cond_resched();
 		goto retry;
 	}
@@ -2346,7 +2346,7 @@ static int fixup_pi_state_owner(u32 __us
 	u32 uval, uninitialized_var(curval), newval;
 	struct task_struct *oldowner, *newowner;
 	u32 newtid;
-	int ret;
+	int ret, err = 0;
 
 	lockdep_assert_held(q->lock_ptr);
 
@@ -2417,14 +2417,17 @@ retry:
 	if (!pi_state->owner)
 		newtid |= FUTEX_OWNER_DIED;
 
-	if (get_futex_value_locked(&uval, uaddr))
-		goto handle_fault;
+	err = get_futex_value_locked(&uval, uaddr);
+	if (err)
+		goto handle_err;
 
 	for (;;) {
 		newval = (uval & FUTEX_OWNER_DIED) | newtid;
 
-		if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))
-			goto handle_fault;
+		err = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
+		if (err)
+			goto handle_err;
+
 		if (curval == uval)
 			break;
 		uval = curval;
@@ -2452,23 +2455,37 @@ retry:
 		return 0;
 
 	/*
-	 * To handle the page fault we need to drop the locks here. That gives
-	 * the other task (either the highest priority waiter itself or the
-	 * task which stole the rtmutex) the chance to try the fixup of the
-	 * pi_state. So once we are back from handling the fault we need to
-	 * check the pi_state after reacquiring the locks and before trying to
-	 * do another fixup. When the fixup has been done already we simply
-	 * return.
+	 * In order to reschedule or handle a page fault, we need to drop the
+	 * locks here. In the case of a fault, this gives the other task
+	 * (either the highest priority waiter itself or the task which stole
+	 * the rtmutex) the chance to try the fixup of the pi_state. So once we
+	 * are back from handling the fault we need to check the pi_state after
+	 * reacquiring the locks and before trying to do another fixup. When
+	 * the fixup has been done already we simply return.
 	 *
 	 * Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely
 	 * drop hb->lock since the caller owns the hb -> futex_q relation.
 	 * Dropping the pi_mutex->wait_lock requires the state revalidate.
 	 */
-handle_fault:
+handle_err:
 	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
 	spin_unlock(q->lock_ptr);
 
-	ret = fault_in_user_writeable(uaddr);
+	switch (err) {
+	case -EFAULT:
+		ret = fault_in_user_writeable(uaddr);
+		break;
+
+	case -EAGAIN:
+		cond_resched();
+		ret = 0;
+		break;
+
+	default:
+		WARN_ON_ONCE(1);
+		ret = err;
+		break;
+	}
 
 	spin_lock(q->lock_ptr);
 	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
@@ -3037,10 +3054,8 @@ retry:
 		 * A unconditional UNLOCK_PI op raced against a waiter
		 * setting the FUTEX_WAITERS bit. Try again.
 		 */
-		if (ret == -EAGAIN) {
-			put_futex_key(&key);
-			goto retry;
-		}
+		if (ret == -EAGAIN)
+			goto pi_retry;
 		/*
 		 * wake_futex_pi has detected invalid state. Tell user
 		 * space.
@@ -3055,9 +3070,19 @@ retry:
 	 * preserve the WAITERS bit not the OWNER_DIED one. We are the
 	 * owner.
 	 */
-	if (cmpxchg_futex_value_locked(&curval, uaddr, uval, 0)) {
+	if ((ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, 0))) {
 		spin_unlock(&hb->lock);
-		goto pi_faulted;
+		switch (ret) {
+		case -EFAULT:
+			goto pi_faulted;
+
+		case -EAGAIN:
+			goto pi_retry;
+
+		default:
+			WARN_ON_ONCE(1);
+			goto out_putkey;
+		}
 	}
 
 	/*
@@ -3071,6 +3096,11 @@ out_putkey:
 	put_futex_key(&key);
 	return ret;
 
+pi_retry:
+	put_futex_key(&key);
+	cond_resched();
+	goto retry;
+
 pi_faulted:
 	put_futex_key(&key);
 
@@ -3431,6 +3461,7 @@ err_unlock:
 int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int pi)
 {
 	u32 uval, uninitialized_var(nval), mval;
+	int err;
 
 	/* Futex address must be 32bit aligned */
 	if ((((unsigned long)uaddr) % sizeof(*uaddr)) != 0)
@@ -3440,42 +3471,57 @@ retry:
 	if (get_user(uval, uaddr))
 		return -1;
 
-	if ((uval & FUTEX_TID_MASK) == task_pid_vnr(curr)) {
-		/*
-		 * Ok, this dying thread is truly holding a futex
-		 * of interest. Set the OWNER_DIED bit atomically
-		 * via cmpxchg, and if the value had FUTEX_WAITERS
-		 * set, wake up a waiter (if any). (We have to do a
-		 * futex_wake() even if OWNER_DIED is already set -
-		 * to handle the rare but possible case of recursive
-		 * thread-death.) The rest of the cleanup is done in
-		 * userspace.
-		 */
-		mval = (uval & FUTEX_WAITERS) | FUTEX_OWNER_DIED;
-		/*
-		 * We are not holding a lock here, but we want to have
-		 * the pagefault_disable/enable() protection because
-		 * we want to handle the fault gracefully. If the
-		 * access fails we try to fault in the futex with R/W
-		 * verification via get_user_pages. get_user() above
-		 * does not guarantee R/W access. If that fails we
-		 * give up and leave the futex locked.
-		 */
-		if (cmpxchg_futex_value_locked(&nval, uaddr, uval, mval)) {
+	if ((uval & FUTEX_TID_MASK) != task_pid_vnr(curr))
+		return 0;
+
+	/*
+	 * Ok, this dying thread is truly holding a futex
+	 * of interest. Set the OWNER_DIED bit atomically
+	 * via cmpxchg, and if the value had FUTEX_WAITERS
+	 * set, wake up a waiter (if any). (We have to do a
+	 * futex_wake() even if OWNER_DIED is already set -
+	 * to handle the rare but possible case of recursive
+	 * thread-death.) The rest of the cleanup is done in
+	 * userspace.
+	 */
+	mval = (uval & FUTEX_WAITERS) | FUTEX_OWNER_DIED;
+
+	/*
+	 * We are not holding a lock here, but we want to have
+	 * the pagefault_disable/enable() protection because
+	 * we want to handle the fault gracefully. If the
+	 * access fails we try to fault in the futex with R/W
+	 * verification via get_user_pages. get_user() above
+	 * does not guarantee R/W access. If that fails we
+	 * give up and leave the futex locked.
+	 */
+	if ((err = cmpxchg_futex_value_locked(&nval, uaddr, uval, mval))) {
+		switch (err) {
+		case -EFAULT:
 			if (fault_in_user_writeable(uaddr))
 				return -1;
 			goto retry;
-		}
-		if (nval != uval)
+
+		case -EAGAIN:
+			cond_resched();
 			goto retry;
 
-		/*
-		 * Wake robust non-PI futexes here. The wakeup of
-		 * PI futexes happens in exit_pi_state():
-		 */
-		if (!pi && (uval & FUTEX_WAITERS))
-			futex_wake(uaddr, 1, 1, FUTEX_BITSET_MATCH_ANY);
+		default:
+			WARN_ON_ONCE(1);
+			return err;
+		}
 	}
+
+	if (nval != uval)
+		goto retry;
+
+	/*
+	 * Wake robust non-PI futexes here. The wakeup of
+	 * PI futexes happens in exit_pi_state():
+	 */
+	if (!pi && (uval & FUTEX_WAITERS))
+		futex_wake(uaddr, 1, 1, FUTEX_BITSET_MATCH_ANY);
+
 	return 0;
 }
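For reference, the futex_wake_op() path patched above is the kernel side of
the FUTEX_WAKE_OP operation. A minimal userspace sketch of issuing one
follows; glibc provides no futex() wrapper, so the raw syscall is used, and
the futex words and wake counts here are arbitrary illustrative values:

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t futex1, futex2;

int main(void)
{
	/*
	 * FUTEX_WAKE_OP: wake up to 1 waiter on &futex1, then atomically
	 * set futex2 := 1 (the kernel-side atomic RMW on the user mapping
	 * discussed above) and wake up to 1 waiter on &futex2 if its old
	 * value was 0. val2 travels in the timeout argument slot.
	 */
	long ret = syscall(SYS_futex, &futex1, FUTEX_WAKE_OP, 1,
			   (unsigned long)1, &futex2,
			   FUTEX_OP(FUTEX_OP_SET, 1, FUTEX_OP_CMP_EQ, 0));
	printf("woke %ld waiter(s)\n", ret);
	return 0;
}

The FUTEX_OP_SET applied to futex2 is serviced by exactly the
arch_futex_atomic_op_inuser() call whose error handling this patch relaxes.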
From patchwork Thu May 9 18:42:41 2019
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 163751

From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, stable@kernel.org, Will Deacon
Subject: [PATCH 4.19 66/66] arm64: futex: Bound number of LDXR/STXR loops in FUTEX_WAKE_OP
Date: Thu, 9 May 2019 20:42:41 +0200
Message-Id: <20190509181308.288924477@linuxfoundation.org>
In-Reply-To: <20190509181301.719249738@linuxfoundation.org>
References: <20190509181301.719249738@linuxfoundation.org>

From: Will Deacon

commit 03110a5cb2161690ae5ac04994d47ed0cd6cef75 upstream.

Our futex implementation makes use of LDXR/STXR loops to perform atomic
updates to user memory from atomic context. This can lead to latency
problems if we end up spinning around the LL/SC sequence at the expense
of doing something useful.

Rework our futex atomic operations so that we return -EAGAIN if we fail
to update the futex word after 128 attempts. The core futex code will
reschedule if necessary and we'll try again later.

Cc: <stable@kernel.org>
Fixes: 6170a97460db ("arm64: Atomic operations")
Signed-off-by: Will Deacon
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm64/include/asm/futex.h |   55 +++++++++++++++++++++++++----------------
 1 file changed, 34 insertions(+), 21 deletions(-)

--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -23,26 +23,34 @@
 
 #include <asm/errno.h>
 
+#define FUTEX_MAX_LOOPS	128 /* What's the largest number you can think of? */
+
 #define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg)	\
 do {									\
+	unsigned int loops = FUTEX_MAX_LOOPS;				\
+									\
 	uaccess_enable();						\
 	asm volatile(							\
 "	prfm	pstl1strm, %2\n"					\
 "1:	ldxr	%w1, %2\n"						\
 	insn "\n"							\
 "2:	stlxr	%w0, %w3, %2\n"						\
-"	cbnz	%w0, 1b\n"						\
-"	dmb	ish\n"							\
+"	cbz	%w0, 3f\n"						\
+"	sub	%w4, %w4, %w0\n"					\
+"	cbnz	%w4, 1b\n"						\
+"	mov	%w0, %w7\n"						\
 "3:\n"									\
+"	dmb	ish\n"							\
 "	.pushsection .fixup,\"ax\"\n"					\
 "	.align	2\n"							\
-"4:	mov	%w0, %w5\n"						\
+"4:	mov	%w0, %w6\n"						\
 "	b	3b\n"							\
 "	.popsection\n"							\
 	_ASM_EXTABLE(1b, 4b)						\
 	_ASM_EXTABLE(2b, 4b)						\
-	: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp)	\
-	: "r" (oparg), "Ir" (-EFAULT)					\
+	: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp),	\
+	  "+r" (loops)							\
+	: "r" (oparg), "Ir" (-EFAULT), "Ir" (-EAGAIN)			\
 	: "memory");							\
 	uaccess_disable();						\
 } while (0)
 
@@ -57,23 +65,23 @@ arch_futex_atomic_op_inuser(int op, int
 
 	switch (op) {
 	case FUTEX_OP_SET:
-		__futex_atomic_op("mov	%w3, %w4",
+		__futex_atomic_op("mov	%w3, %w5",
 				  ret, oldval, uaddr, tmp, oparg);
 		break;
 	case FUTEX_OP_ADD:
-		__futex_atomic_op("add	%w3, %w1, %w4",
+		__futex_atomic_op("add	%w3, %w1, %w5",
 				  ret, oldval, uaddr, tmp, oparg);
 		break;
 	case FUTEX_OP_OR:
-		__futex_atomic_op("orr	%w3, %w1, %w4",
+		__futex_atomic_op("orr	%w3, %w1, %w5",
 				  ret, oldval, uaddr, tmp, oparg);
 		break;
 	case FUTEX_OP_ANDN:
-		__futex_atomic_op("and	%w3, %w1, %w4",
+		__futex_atomic_op("and	%w3, %w1, %w5",
 				  ret, oldval, uaddr, tmp, ~oparg);
 		break;
 	case FUTEX_OP_XOR:
-		__futex_atomic_op("eor	%w3, %w1, %w4",
+		__futex_atomic_op("eor	%w3, %w1, %w5",
 				  ret, oldval, uaddr, tmp, oparg);
 		break;
 	default:
@@ -93,6 +101,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval,
 			      u32 oldval, u32 newval)
 {
 	int ret = 0;
+	unsigned int loops = FUTEX_MAX_LOOPS;
 	u32 val, tmp;
 	u32 __user *uaddr;
 
@@ -104,20 +113,24 @@ futex_atomic_cmpxchg_inatomic(u32 *uval,
 	asm volatile("// futex_atomic_cmpxchg_inatomic\n"
 "	prfm	pstl1strm, %2\n"
 "1:	ldxr	%w1, %2\n"
-"	sub	%w3, %w1, %w4\n"
-"	cbnz	%w3, 3f\n"
-"2:	stlxr	%w3, %w5, %2\n"
-"	cbnz	%w3, 1b\n"
-"	dmb	ish\n"
+"	sub	%w3, %w1, %w5\n"
+"	cbnz	%w3, 4f\n"
+"2:	stlxr	%w3, %w6, %2\n"
+"	cbz	%w3, 3f\n"
+"	sub	%w4, %w4, %w3\n"
+"	cbnz	%w4, 1b\n"
+"	mov	%w0, %w8\n"
 "3:\n"
+"	dmb	ish\n"
+"4:\n"
 "	.pushsection .fixup,\"ax\"\n"
-"4:	mov	%w0, %w6\n"
-"	b	3b\n"
+"5:	mov	%w0, %w7\n"
+"	b	4b\n"
 "	.popsection\n"
-	_ASM_EXTABLE(1b, 4b)
-	_ASM_EXTABLE(2b, 4b)
-	: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp)
-	: "r" (oldval), "r" (newval), "Ir" (-EFAULT)
+	_ASM_EXTABLE(1b, 5b)
+	_ASM_EXTABLE(2b, 5b)
+	: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
+	: "r" (oldval), "r" (newval), "Ir" (-EFAULT), "Ir" (-EAGAIN)
 	: "memory");
 	uaccess_disable();
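The effect of the loop bound is easier to see in C. Below is a rough model
of the bounded cmpxchg using C11's weak compare-exchange as a stand-in for
the LDXR/STXR pair; bounded_cmpxchg() is a hypothetical name, and a weak CAS
cannot count only spurious store-exclusive failures as precisely as the
assembly above, so this is an approximation:

#include <errno.h>
#include <stdint.h>
#include <stdatomic.h>

#define FUTEX_MAX_LOOPS 128

/*
 * Model of futex_atomic_cmpxchg_inatomic(): retry at most
 * FUTEX_MAX_LOOPS times on spurious CAS failure (a lost exclusive
 * reservation), then give up with -EAGAIN so the caller can drop its
 * locks and reschedule instead of spinning.
 */
static int bounded_cmpxchg(_Atomic uint32_t *uaddr, uint32_t *uval,
			   uint32_t oldval, uint32_t newval)
{
	unsigned int loops = FUTEX_MAX_LOOPS;
	uint32_t expected;

	while (loops--) {
		expected = oldval;
		if (atomic_compare_exchange_weak(uaddr, &expected, newval) ||
		    expected != oldval) {
			/* Swapped, or the value genuinely differed. */
			*uval = expected;
			return 0;
		}
		/* Spurious failure: the reservation was lost, retry. */
	}
	return -EAGAIN;
}

int main(void)
{
	_Atomic uint32_t futex_word = 0;
	uint32_t cur;

	/* Try 0 -> 1; on success cur == 0 and the return value is 0. */
	return bounded_cmpxchg(&futex_word, &cur, 0, 1) ? 1 : 0;
}

After FUTEX_MAX_LOOPS failed attempts the helper returns -EAGAIN, which the
core futex code from the previous patch turns into cond_resched() and a
retry rather than spinning with the hash bucket locks held.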