From patchwork Thu Jan 21 09:29:17 2016
X-Patchwork-Submitter: Ding Tianhong
X-Patchwork-Id: 60058
From: Ding Tianhong
To: Peter Zijlstra, Ingo Molnar, linux-kernel@vger.kernel.org
CC: Davidlohr Bueso, Linus Torvalds, Paul E. McKenney, Thomas Gleixner,
 Will Deacon, Jason Low, Tim Chen, Waiman Long
Subject: [PATCH RFC] locking/mutexes: don't spin on owner when wait list is not NULL.
Message-ID: <56A0A4ED.3070308@huawei.com>
Date: Thu, 21 Jan 2016 17:29:17 +0800
X-Mailing-List: linux-kernel@vger.kernel.org

I wrote a script that creates several processes which call an ioctl in a
loop; the ioctl path calls into the kernel roughly like this:

xx_ioctl {
	...
	rtnl_lock();
	function();
	rtnl_unlock();
	...
}

function() may sleep for several milliseconds, but it does not hang. At the
same time another user service may run ifconfig to change the state of the
ethernet device, and after several hours the hung-task watchdog reported
this problem:

========================================================================
[149738.039038] INFO: task ifconfig:11890 blocked for more than 120 seconds.
[149738.040597] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[149738.042280] ifconfig        D ffff88061ec13680     0 11890  11573 0x00000080
[149738.042284] ffff88052449bd40 0000000000000082 ffff88053a33f300 ffff88052449bfd8
[149738.042286] ffff88052449bfd8 ffff88052449bfd8 ffff88053a33f300 ffffffff819e6240
[149738.042288] ffffffff819e6244 ffff88053a33f300 00000000ffffffff ffffffff819e6248
[149738.042290] Call Trace:
[149738.042300] [] schedule_preempt_disabled+0x29/0x70
[149738.042303] [] __mutex_lock_slowpath+0xc5/0x1c0
[149738.042305] [] mutex_lock+0x1f/0x2f
[149738.042309] [] rtnl_lock+0x15/0x20
[149738.042311] [] dev_ioctl+0xda/0x590
[149738.042314] [] ? __do_page_fault+0x21c/0x560
[149738.042318] [] sock_do_ioctl+0x45/0x50
[149738.042320] [] sock_ioctl+0x1f0/0x2c0
[149738.042324] [] do_vfs_ioctl+0x2e5/0x4c0
[149738.042327] [] ? fget_light+0xa0/0xd0
================================ cut here ================================

I examined the vmcore and found that ifconfig had been on the wait list of
the rtnl_lock for 120 seconds, while my processes could acquire and release
the rtnl_lock several times within one second. That means my processes kept
jumping the queue, and ifconfig could never get the rtnl lock at all. Looking
at the mutex slow path, I found that a task may spin on the owner regardless
of whether the wait list is empty, which lets spinners cut in line ahead of
the tasks already on the wait list indefinitely. Add a check for pending
waiters in mutex_can_spin_on_owner() to avoid this problem.

Signed-off-by: Ding Tianhong
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Davidlohr Bueso
Cc: Linus Torvalds
Cc: Paul E. McKenney
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Jason Low
Cc: Tim Chen
Cc: Waiman Long
---
 kernel/locking/mutex.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 0551c21..596b341 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -256,7 +256,7 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
 	struct task_struct *owner;
 	int retval = 1;
 
-	if (need_resched())
+	if (need_resched() || atomic_read(&lock->count) == -1)
 		return 0;
 
 	rcu_read_lock();
@@ -283,10 +283,11 @@ static inline bool mutex_try_to_acquire(struct mutex *lock)
 /*
  * Optimistic spinning.
  *
- * We try to spin for acquisition when we find that the lock owner
- * is currently running on a (different) CPU and while we don't
- * need to reschedule. The rationale is that if the lock owner is
- * running, it is likely to release the lock soon.
+ * We try to spin for acquisition when we find that there are no
+ * pending waiters and the lock owner is currently running on a
+ * (different) CPU and while we don't need to reschedule. The
+ * rationale is that if the lock owner is running, it is likely
+ * to release the lock soon.
  *
  * Since this needs the lock owner, and this mutex implementation
  * doesn't track the owner atomically in the lock field, we need to
-- 
2.5.0