From patchwork Thu Jan 21 06:53:21 2016
X-Patchwork-Submitter: Ding Tianhong
X-Patchwork-Id: 60056
To: Peter Zijlstra, Ingo Molnar, "linux-kernel@vger.kernel.org"
From: Ding Tianhong
Subject: [PATCH RFC] locking/mutexes: don't spin on owner when wait list is not NULL.
Message-ID: <56A08061.2080708@huawei.com>
Date: Thu, 21 Jan 2016 14:53:21 +0800
List-ID: <linux-kernel.vger.kernel.org>

I wrote a script that spawns several processes calling an ioctl in a loop; the ioctl invokes a kernel path of the form:

xx_ioctl {
	...
	rtnl_lock();
	function();
	rtnl_unlock();
	...
}

The function may sleep for several milliseconds, but it never hangs. At the same time, another user service may run ifconfig to change the state of the Ethernet interface, and after several hours the hung task watchdog reported this problem:

========================================================================
[149738.039038] INFO: task ifconfig:11890 blocked for more than 120 seconds.
[149738.040597] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[149738.042280] ifconfig        D ffff88061ec13680     0 11890  11573 0x00000080
[149738.042284]  ffff88052449bd40 0000000000000082 ffff88053a33f300 ffff88052449bfd8
[149738.042286]  ffff88052449bfd8 ffff88052449bfd8 ffff88053a33f300 ffffffff819e6240
[149738.042288]  ffffffff819e6244 ffff88053a33f300 00000000ffffffff ffffffff819e6248
[149738.042290] Call Trace:
[149738.042300]  [] schedule_preempt_disabled+0x29/0x70
[149738.042303]  [] __mutex_lock_slowpath+0xc5/0x1c0
[149738.042305]  [] mutex_lock+0x1f/0x2f
[149738.042309]  [] rtnl_lock+0x15/0x20
[149738.042311]  [] dev_ioctl+0xda/0x590
[149738.042314]  [] ? __do_page_fault+0x21c/0x560
[149738.042318]  [] sock_do_ioctl+0x45/0x50
[149738.042320]  [] sock_ioctl+0x1f0/0x2c0
[149738.042324]  [] do_vfs_ioctl+0x2e5/0x4c0
[149738.042327]  [] ? fget_light+0xa0/0xd0
================================ cut here ================================

I examined the vmcore and found that ifconfig had already been sitting on the rtnl_lock wait list for 120 seconds, while my processes could still acquire and release the rtnl_lock normally several times per second. In other words, my processes kept jumping the queue and ifconfig could never get the lock. Looking at the mutex slow path, I found that a task may spin on the owner regardless of whether the wait list is empty, so tasks already on the wait list can be cut in line indefinitely. Add a check for pending waiters in mutex_can_spin_on_owner() to avoid this problem.

Signed-off-by: Ding Tianhong
---
 kernel/locking/mutex.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 0551c21..596b341 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -256,7 +256,7 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
 	struct task_struct *owner;
 	int retval = 1;
 
-	if (need_resched())
+	if (need_resched() || atomic_read(&lock->count) == -1)
 		return 0;
 
 	rcu_read_lock();
@@ -283,10 +283,11 @@ static inline bool mutex_try_to_acquire(struct mutex *lock)
 /*
  * Optimistic spinning.
  *
- * We try to spin for acquisition when we find that the lock owner
- * is currently running on a (different) CPU and while we don't
- * need to reschedule. The rationale is that if the lock owner is
- * running, it is likely to release the lock soon.
+ * We try to spin for acquisition when we find that there are no
+ * pending waiters and the lock owner is currently running on a
+ * (different) CPU and while we don't need to reschedule. The
+ * rationale is that if the lock owner is running, it is likely
+ * to release the lock soon.
  *
  * Since this needs the lock owner, and this mutex implementation
  * doesn't track the owner atomically in the lock field, we need to