From patchwork Fri Feb 21 21:24:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Zanussi X-Patchwork-Id: 213218 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.1 required=3.0 tests=DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 727BBC35666 for ; Fri, 21 Feb 2020 21:25:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 4173224676 for ; Fri, 21 Feb 2020 21:25:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320319; bh=tO7cRQjKEB4q9bUJsNF2pCQeYGZBUa7kmT640jbVpIY=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:List-ID:From; b=exwZx/FbiL6AYEsV3P6YNgOAINlRbjk20td0Si+vCDT8CR4OW3IY0N1r7rhWrgtAw DPpPddtLqJiDa9s35oXJbMv8xwh36lZ/sFJi4ABl36xO+pbR3HNaiUh0U3A7ZcN7r6 hc9/Df90ZVJlDHHm5teaTk/B7/IK0je1W6RKJ2QI= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728765AbgBUVZS (ORCPT ); Fri, 21 Feb 2020 16:25:18 -0500 Received: from mail.kernel.org ([198.145.29.99]:38978 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728351AbgBUVZQ (ORCPT ); Fri, 21 Feb 2020 16:25:16 -0500 Received: from localhost.localdomain (c-98-220-238-81.hsd1.il.comcast.net [98.220.238.81]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 4D3AA24650; Fri, 21 Feb 2020 21:25:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320316; bh=tO7cRQjKEB4q9bUJsNF2pCQeYGZBUa7kmT640jbVpIY=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=bsmdGu/qvwQz+4FyGO7NULsGV5yfozGYy8ejmTY4NMOgE1Q1r0imqtp2XP2IDkGcj /koKPSxw/XAofmq26z/vpeywZC+1I+0E0hInwRV401+18hxwD7UMOixHoZzZioQk83 KciDBk9Hkq9jqEg/ivamr9kc0qtgWUzPeTPnubrw= From: zanussi@kernel.org To: LKML , linux-rt-users , Steven Rostedt , Thomas Gleixner , Carsten Emde , John Kacur , Sebastian Andrzej Siewior , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 01/25] Fix wrong-variable use in irq_set_affinity_notifier Date: Fri, 21 Feb 2020 15:24:29 -0600 Message-Id: <3c564c6520216babcb5ecbd9268bfd3d2e45492c.1582320278.git.zanussi@kernel.org> X-Mailer: git-send-email 2.14.1 In-Reply-To: References: In-Reply-To: References: Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org From: Joe Korty v4.14.170-rt75-rc1 stable review patch. If anyone has any objections, please let me know. ----------- [ Fixes upstream commit 3e4242082f0384311f15ab9c93e2620268c6257f, which erroneously switched old_notify->work to notify->work when fixing a merge conflict ] 4.14-rt: Fix wrong-variable use in irq_set_affinity_notifier. The bug was introduced in the 4.14-rt patch 0461-genirq-Handle-missing-work_struct-in-irq_set_affinit.patch The symptom is a NULL pointer panic in the i40e driver on system shutdown. Rebooting. 
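To see why a NULL 'notify' turns into a fault at offset 0x20, here is a minimal
user-space sketch of the same wrong-variable pattern. The struct layout and the
function names are made up for illustration only; the real code is the one-line
change to kernel/irq/manage.c in the diff below. The console capture that
follows shows the resulting oops on shutdown.

    #include <stdio.h>

    struct work { int pending; };

    struct affinity_notify {
            void (*callback)(void *arg);
            void *data;
            struct work work;       /* sits at a small offset from the base */
    };

    static void cancel_work_sync_stub(struct work *w)
    {
            /* Dereferences w; if w was computed from a NULL 'notify', this
             * is a near-NULL access, like the 0x20 fault in the oops. */
            printf("cancelling work at %p (pending=%d)\n", (void *)w, w->pending);
    }

    /* 'notify' may be NULL when the caller tears the notifier down (as i40e
     * does on shutdown); 'old_notify' is the previously registered one. */
    static void set_affinity_notifier(struct affinity_notify *notify,
                                      struct affinity_notify *old_notify)
    {
            if (old_notify) {
                    cancel_work_sync_stub(&notify->work); /* BUG: wrong variable */
                    /* correct: cancel_work_sync_stub(&old_notify->work); */
            }
    }

    int main(void)
    {
            struct affinity_notify old = { 0 };

            set_affinity_notifier(NULL, &old); /* teardown path -> NULL deref */
            return 0;
    }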
BUG: unable to handle kernel NULL pointer dereference at 0000000000000020 IP: __kthread_cancel_work_sync+0x12/0xa0 CPU: 15 PID: 6274 Comm: reboot Not tainted 4.14.155-rt70-RedHawk-8.0.2-prt-trace #1 task: ffff9ef0d1a58000 task.stack: ffffbe540c038000 RIP: 0010:__kthread_cancel_work_sync+0x12/0xa0 RSP: 0018:ffffbe540c03bbd8 EFLAGS: 00010296 RAX: 0000084000000020 RBX: 0000000000000000 RCX: 0000000000000034 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000008 RBP: ffffbe540c03bc00 R08: ffff9ee8ccdc3800 R09: ffff9ef0d8c0c000 R10: ffff9ef0d8c0c028 R11: 0000000000000040 R12: ffff9ee8ccdc3800 R13: 0000000000000000 R14: ffff9ee8ccdc3960 R15: 0000000000000074 FS: 00007ffff7fcf380(0000) GS:ffff9ef0ffdc0000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000020 CR3: 000000104b428003 CR4: 00000000005606e0 DR0: 00000000006040e0 DR1: 00000000006040e8 DR2: 00000000006040f0 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000600 PKRU: 55555554 Call Trace: kthread_cancel_work_sync+0xb/0x10 irq_set_affinity_notifier+0x8e/0xc0 i40e_vsi_free_irq+0xbc/0x230 [i40e] i40e_vsi_close+0x24/0xa0 [i40e] i40e_close+0x10/0x20 [i40e] i40e_quiesce_vsi.part.40+0x30/0x40 [i40e] i40e_pf_quiesce_all_vsi.isra.41+0x34/0x50 [i40e] i40e_prep_for_reset+0x67/0x110 [i40e] i40e_shutdown+0x39/0x220 [i40e] pci_device_shutdown+0x2b/0x50 device_shutdown+0x147/0x1f0 kernel_restart_prepare+0x71/0x74 kernel_restart+0xd/0x4e SyS_reboot.cold.1+0x9/0x34 do_syscall_64+0x7c/0x150 4.19-rt and above do not have this problem due to a refactoring. Signed-off-by: Joe Korty Signed-off-by: Tom Zanussi --- kernel/irq/manage.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c index 071691963f7b..12702d48aaa3 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -353,7 +353,7 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify) if (old_notify) { #ifdef CONFIG_PREEMPT_RT_BASE - kthread_cancel_work_sync(¬ify->work); + kthread_cancel_work_sync(&old_notify->work); #else cancel_work_sync(&old_notify->work); #endif From patchwork Fri Feb 21 21:24:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Zanussi X-Patchwork-Id: 213207 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.1 required=3.0 tests=DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8D287C35669 for ; Fri, 21 Feb 2020 21:26:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 5589924650 for ; Fri, 21 Feb 2020 21:26:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320419; bh=ydTYfYBQgD78+JZJ9VmOf5XD18xkNDBLsvo8rGpCkFY=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:List-ID:From; b=e5jhXoGX8YTaNO2p4jkuCOjBOVHeziuIYymeSP+ml1SW0CKDrpW6WnRCsJss8d4VI pc+AYxmybFsFDNNTvhw8/1+y70N4XOoenBRPZ/hQFJnXmYFKi3LAecE4cKqWkfq0JV pMNEIAuQy4lbiU4mVyIVaM1kmOkZ/cmwb9Xi8mTA= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id 
S1728722AbgBUVZR (ORCPT ); Fri, 21 Feb 2020 16:25:17 -0500 Received: from mail.kernel.org ([198.145.29.99]:38992 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726683AbgBUVZR (ORCPT ); Fri, 21 Feb 2020 16:25:17 -0500 Received: from localhost.localdomain (c-98-220-238-81.hsd1.il.comcast.net [98.220.238.81]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 3C60D24653; Fri, 21 Feb 2020 21:25:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320316; bh=ydTYfYBQgD78+JZJ9VmOf5XD18xkNDBLsvo8rGpCkFY=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=Bx6gdwaSZKb92r5uvPedbvlrB7UwuiSfC5wmJlLJr3fuq5jP+EUPc8Zx/M8rGMHOy mU5xwlSeEFhpBwoUUghFoxm2+29tifndEEAx7vq9+fhMj6NiQuiDhU+rw6v/0L2TpW /sk5F90myeAxzz+LXRpvb8+BLfD43EyNNkpL9C4g= From: zanussi@kernel.org To: LKML , linux-rt-users , Steven Rostedt , Thomas Gleixner , Carsten Emde , John Kacur , Sebastian Andrzej Siewior , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 02/25] x86: preempt: Check preemption level before looking at lazy-preempt Date: Fri, 21 Feb 2020 15:24:30 -0600 Message-Id: X-Mailer: git-send-email 2.14.1 In-Reply-To: References: In-Reply-To: References: Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org From: Sebastian Andrzej Siewior v4.14.170-rt75-rc1 stable review patch. If anyone has any objections, please let me know. ----------- [ Upstream commit 19fc8557f2323c52b26561651ed4d51fc688a740 ] Before evaluating the lazy-preempt state it must be ensure that the preempt-count is zero. Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Tom Zanussi --- arch/x86/include/asm/preempt.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h index f66708779274..afa0e42ccdd1 100644 --- a/arch/x86/include/asm/preempt.h +++ b/arch/x86/include/asm/preempt.h @@ -96,6 +96,8 @@ static __always_inline bool __preempt_count_dec_and_test(void) if (____preempt_count_dec_and_test()) return true; #ifdef CONFIG_PREEMPT_LAZY + if (preempt_count()) + return false; if (current_thread_info()->preempt_lazy_count) return false; return test_thread_flag(TIF_NEED_RESCHED_LAZY); From patchwork Fri Feb 21 21:24:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Zanussi X-Patchwork-Id: 213206 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.1 required=3.0 tests=DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2BBBAC35669 for ; Fri, 21 Feb 2020 21:27:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id EA16524650 for ; Fri, 21 Feb 2020 21:27:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320424; bh=uHCUX8XBDJdEPhvPI5BGTgSTzCMMQv9TwfPe4ivWfRs=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:List-ID:From; 
b=UClCz/S0joP6lv7oeS0ZgNcJn07Sha1T6TONll/Omhby1Jo2MvxH0hYEaAQaSKl2g 1+oPXsfjEPxTDLc9tMUQHUd1AmpoUievRG+NUsslbQrcNxpj27U7F0W+ube+3n2VGB PUDITDhSU8dCq93cvOogAhI08JWoJxqR+uqN8LtM= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729855AbgBUV07 (ORCPT ); Fri, 21 Feb 2020 16:26:59 -0500 Received: from mail.kernel.org ([198.145.29.99]:39036 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728745AbgBUVZS (ORCPT ); Fri, 21 Feb 2020 16:25:18 -0500 Received: from localhost.localdomain (c-98-220-238-81.hsd1.il.comcast.net [98.220.238.81]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 4C32724656; Fri, 21 Feb 2020 21:25:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320318; bh=uHCUX8XBDJdEPhvPI5BGTgSTzCMMQv9TwfPe4ivWfRs=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=bzGE6TBD685FDCwhUiWC+dQlS7NbMcmMrnDIVUDiKXGfq0C2mxzOlpFlrzwal1x5H uqFzo2dS62x1VRWazPktYhvHrE+o9jXdCyBLJjW1aP8nCcN8o8CurwN6II99a3+4ed KVkt8eYc3u4fNUd26I/SSsh2d0xbWh4e6nKlKvIM= From: zanussi@kernel.org To: LKML , linux-rt-users , Steven Rostedt , Thomas Gleixner , Carsten Emde , John Kacur , Sebastian Andrzej Siewior , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 03/25] sched/deadline: Ensure inactive_timer runs in hardirq context Date: Fri, 21 Feb 2020 15:24:31 -0600 Message-Id: <11a532007a600928e64e761722da7100e19a0c5f.1582320278.git.zanussi@kernel.org> X-Mailer: git-send-email 2.14.1 In-Reply-To: References: In-Reply-To: References: Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org From: Juri Lelli v4.14.170-rt75-rc1 stable review patch. If anyone has any objections, please let me know. ----------- [ Upstream commit ba94e7aed7405c58251b1380e6e7d73aa8284b41 ] SCHED_DEADLINE inactive timer needs to run in hardirq context (as dl_task_timer already does) on PREEMPT_RT Change the mode to HRTIMER_MODE_REL_HARD. 
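For reference, a kernel-style sketch of how a timer that must expire in hard
interrupt context on PREEMPT_RT is set up with the _HARD mode variants. The
names here are illustrative assumptions; the actual change below only touches
the two call sites in kernel/sched/deadline.c.

    #include <linux/hrtimer.h>
    #include <linux/ktime.h>
    #include <linux/types.h>

    static struct hrtimer my_timer;   /* illustrative timer */

    static enum hrtimer_restart my_timer_fn(struct hrtimer *t)
    {
            /* Runs in hardirq context even on RT: no sleeping locks here. */
            return HRTIMER_NORESTART;
    }

    static void my_timer_setup(u64 delay_ns)
    {
            hrtimer_init(&my_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
            my_timer.function = my_timer_fn;
            /* The start mode should match the init mode so the hrtimer
             * mode debugging does not complain. */
            hrtimer_start(&my_timer, ns_to_ktime(delay_ns),
                          HRTIMER_MODE_REL_HARD);
    }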
[ tglx: Fixed up the start site, so mode debugging works ] Signed-off-by: Juri Lelli Signed-off-by: Thomas Gleixner Link: https://lkml.kernel.org/r/20190731103715.4047-1-juri.lelli@redhat.com Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Tom Zanussi --- kernel/sched/deadline.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index eb68f7fb8a36..7b04e54bea01 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -252,7 +252,7 @@ static void task_non_contending(struct task_struct *p) dl_se->dl_non_contending = 1; get_task_struct(p); - hrtimer_start(timer, ns_to_ktime(zerolag_time), HRTIMER_MODE_REL); + hrtimer_start(timer, ns_to_ktime(zerolag_time), HRTIMER_MODE_REL_HARD); } static void task_contending(struct sched_dl_entity *dl_se, int flags) @@ -1234,7 +1234,7 @@ void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se) { struct hrtimer *timer = &dl_se->inactive_timer; - hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD); timer->function = inactive_task_timer; } From patchwork Fri Feb 21 21:24:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Zanussi X-Patchwork-Id: 213208 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.1 required=3.0 tests=DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1D289C35669 for ; Fri, 21 Feb 2020 21:26:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id DB39924650 for ; Fri, 21 Feb 2020 21:26:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320409; bh=BodMjiPO1I2v3+3ZzPoqraWPFok3/J5zp6BF7XhVlKs=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:List-ID:From; b=D0sdGrFZmMck/u/JSvvwNQNmE/Byh3MXYO5Wov32bwJKugF8zDm9Z/BWV2wYDCE1K kfGHKaJ51G4+yf02DDZOW8T/DmTR8hGT0gBEYRp6bBsEe4aFjGeQMKSeZ+C3iHYvmJ op2seHUiNUkRHolr/heJua96KApEjKj0zit+bikg= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729205AbgBUV0o (ORCPT ); Fri, 21 Feb 2020 16:26:44 -0500 Received: from mail.kernel.org ([198.145.29.99]:39142 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729066AbgBUVZX (ORCPT ); Fri, 21 Feb 2020 16:25:23 -0500 Received: from localhost.localdomain (c-98-220-238-81.hsd1.il.comcast.net [98.220.238.81]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 37E6124676; Fri, 21 Feb 2020 21:25:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320322; bh=BodMjiPO1I2v3+3ZzPoqraWPFok3/J5zp6BF7XhVlKs=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=YTQDf/c7eZQB+xA0PPAl+2ykyp0ZaqYS2zvHOt1LMNkr4TTXkMOHBezmCOZl/x7qy 7GWNau/jznJLjpjB0DU3ppNCSZsJgXLExK9BP8M59l1v3EUCY0NnZBDhHia0oPDN1h 9Cx55J7YV74WubMTylnpEoIk6+hPfosKBywGKr8o= From: zanussi@kernel.org To: LKML , linux-rt-users , Steven Rostedt , 
Thomas Gleixner , Carsten Emde , John Kacur , Sebastian Andrzej Siewior , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 08/25] sched: Remove dead __migrate_disabled() check Date: Fri, 21 Feb 2020 15:24:36 -0600 Message-Id: <6560961cbc09e41f13267a26f5f0951af292d59d.1582320278.git.zanussi@kernel.org> X-Mailer: git-send-email 2.14.1 In-Reply-To: References: In-Reply-To: References: Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org From: Scott Wood v4.14.170-rt75-rc1 stable review patch. If anyone has any objections, please let me know. ----------- [ Upstream commit 14d9272d534ea91262e15db99443fc5995c7c016 ] This code was unreachable given the __migrate_disabled() branch to "out" immediately beforehand. Signed-off-by: Scott Wood Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Tom Zanussi Conflicts: kernel/sched/core.c --- kernel/sched/core.c | 7 ------- 1 file changed, 7 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 8d6badac9225..4708129e8df1 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1217,13 +1217,6 @@ static int __set_cpus_allowed_ptr(struct task_struct *p, if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p)) goto out; -#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) - if (__migrate_disabled(p)) { - p->migrate_disable_update = 1; - goto out; - } -#endif - if (task_running(rq, p) || p->state == TASK_WAKING) { struct migration_arg arg = { p, dest_cpu }; /* Need help from migration thread: drop lock and wait. */ From patchwork Fri Feb 21 21:24:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Zanussi X-Patchwork-Id: 213209 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.1 required=3.0 tests=DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9A3AEC3566F for ; Fri, 21 Feb 2020 21:26:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 63A0F208C4 for ; Fri, 21 Feb 2020 21:26:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320400; bh=U3QRFgorUSPKmV1pjsKNqZxdmuKqHi6ASTjt45Lygjc=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:List-ID:From; b=nY12kKRp9hwi6zy/95TYmZpO9vLqaFE1xCyCy/gHp7ikPX/y7PjVl0nIc0fvp56Qt YW4mFHbgqOd4xGCkmXLTANyxUjtnBGCf/tfmp2Fo+YzCLYWD3rBlHCaT7B4V2ygLnu bEWtNGQl/uPEht4Cp1zgHE60x6r5M2A/wtZNdrdk= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729195AbgBUVZY (ORCPT ); Fri, 21 Feb 2020 16:25:24 -0500 Received: from mail.kernel.org ([198.145.29.99]:39176 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729172AbgBUVZY (ORCPT ); Fri, 21 Feb 2020 16:25:24 -0500 Received: from localhost.localdomain (c-98-220-238-81.hsd1.il.comcast.net [98.220.238.81]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 3E5282467A; Fri, 21 Feb 2020 21:25:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/simple; d=kernel.org; s=default; t=1582320323; bh=U3QRFgorUSPKmV1pjsKNqZxdmuKqHi6ASTjt45Lygjc=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=YsRQzxWE3n5HLjDQRcu2WAdQciZkZRIetTpuume5gT0hZ+9OK2dJJHDW/o2FiZrBH qOBTcxdD894Mc+xLGB03o9eEfUTpZnnu3OBGgpP7z8Eo39s5Gcaun9YAqhlkajl8Yf 0Lvy21CIr8un8Epg4iPd7YUJOG+1ZWygA2F5C8ZY= From: zanussi@kernel.org To: LKML , linux-rt-users , Steven Rostedt , Thomas Gleixner , Carsten Emde , John Kacur , Sebastian Andrzej Siewior , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 09/25] sched: migrate disable: Protect cpus_ptr with lock Date: Fri, 21 Feb 2020 15:24:37 -0600 Message-Id: X-Mailer: git-send-email 2.14.1 In-Reply-To: References: In-Reply-To: References: Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org From: Scott Wood v4.14.170-rt75-rc1 stable review patch. If anyone has any objections, please let me know. ----------- [ Upstream commit 27ee52a891ed2c7e2e2c8332ccae0de7c2674b09 ] Various places assume that cpus_ptr is protected by rq/pi locks, so don't change it before grabbing those locks. Signed-off-by: Scott Wood Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Tom Zanussi --- kernel/sched/core.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 4708129e8df1..189e6f08575e 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6923,9 +6923,8 @@ migrate_disable_update_cpus_allowed(struct task_struct *p) struct rq *rq; struct rq_flags rf; - p->cpus_ptr = cpumask_of(smp_processor_id()); - rq = task_rq_lock(p, &rf); + p->cpus_ptr = cpumask_of(smp_processor_id()); update_nr_migratory(p, -1); p->nr_cpus_allowed = 1; task_rq_unlock(rq, p, &rf); @@ -6937,9 +6936,8 @@ migrate_enable_update_cpus_allowed(struct task_struct *p) struct rq *rq; struct rq_flags rf; - p->cpus_ptr = &p->cpus_mask; - rq = task_rq_lock(p, &rf); + p->cpus_ptr = &p->cpus_mask; p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask); update_nr_migratory(p, 1); task_rq_unlock(rq, p, &rf); From patchwork Fri Feb 21 21:24:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Zanussi X-Patchwork-Id: 213217 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.1 required=3.0 tests=DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E920AC35666 for ; Fri, 21 Feb 2020 21:25:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id A1C6B24686 for ; Fri, 21 Feb 2020 21:25:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320329; bh=HsgY/wCFlXN+GbvocWfOrdhZKUhwHYi+Q4X0YZnl5To=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:List-ID:From; b=ISkfIlxOEJvxHsda15aABsye+IXNtcLfhzdU/xUy0HxYZQaWn6BvkHa7NSG5OtUgl jBUJe5JagQsgnYDK8aw9Oeoqu6+3+X70cXhhL6OoEv37+Kn54UHNEt3O9HSrYosO4w QZtIhfFB7VErtjPFY7jFQZPZhdkHxY5boMVWcwEc= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729318AbgBUVZ2 (ORCPT ); Fri, 21 Feb 2020 16:25:28 -0500 
Received: from mail.kernel.org ([198.145.29.99]:39200 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729261AbgBUVZ1 (ORCPT ); Fri, 21 Feb 2020 16:25:27 -0500 Received: from localhost.localdomain (c-98-220-238-81.hsd1.il.comcast.net [98.220.238.81]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 327B324680; Fri, 21 Feb 2020 21:25:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320326; bh=HsgY/wCFlXN+GbvocWfOrdhZKUhwHYi+Q4X0YZnl5To=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=JJqlYZNLQBQq0eA2m47StfOMjfwhESxqZrmsqKOQP10mOoVJULG+DCWsfblnwc8Ba N8pnVzuCeaGMSaKc+FkG/fkuSkxGUpmgHKd/UsHJsADAl4kC2xMXWHbt8h+0SzVf6x 9BTyk7TKr+RRjZx6mufIK8NahbsEudUOVCXnZz7E= From: zanussi@kernel.org To: LKML , linux-rt-users , Steven Rostedt , Thomas Gleixner , Carsten Emde , John Kacur , Sebastian Andrzej Siewior , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 11/25] futex: Make the futex_hash_bucket spinlock_t again and bring back its old state Date: Fri, 21 Feb 2020 15:24:39 -0600 Message-Id: X-Mailer: git-send-email 2.14.1 In-Reply-To: References: In-Reply-To: References: Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org From: Sebastian Andrzej Siewior v4.14.170-rt75-rc1 stable review patch. If anyone has any objections, please let me know. ----------- [ Upstream commit 954ad80c23edfe71f4e8ce70b961eac884320c3a ] This is an all-in-one patch that reverts the patches: futex: Make the futex_hash_bucket lock raw futex: Delay deallocation of pi_state and adds back the old patches we had: futex: workaround migrate_disable/enable in different context rtmutex: Handle the various new futex race conditions futex: Fix bug on when a requeued RT task times out futex: Ensure lock/unlock symetry versus pi_lock and hash bucket lock Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Tom Zanussi Conflicts: kernel/futex.c --- kernel/futex.c | 231 +++++++++++++++++++++++----------------- kernel/locking/rtmutex.c | 65 +++++++++-- kernel/locking/rtmutex_common.h | 3 + 3 files changed, 194 insertions(+), 105 deletions(-) diff --git a/kernel/futex.c b/kernel/futex.c index bcef01354d5c..581d40ee22a8 100644 --- a/kernel/futex.c +++ b/kernel/futex.c @@ -243,7 +243,7 @@ struct futex_q { struct plist_node list; struct task_struct *task; - raw_spinlock_t *lock_ptr; + spinlock_t *lock_ptr; union futex_key key; struct futex_pi_state *pi_state; struct rt_mutex_waiter *rt_waiter; @@ -264,7 +264,7 @@ static const struct futex_q futex_q_init = { */ struct futex_hash_bucket { atomic_t waiters; - raw_spinlock_t lock; + spinlock_t lock; struct plist_head chain; } ____cacheline_aligned_in_smp; @@ -831,13 +831,13 @@ static void get_pi_state(struct futex_pi_state *pi_state) * Drops a reference to the pi_state object and frees or caches it * when the last reference is gone. 
*/ -static struct futex_pi_state *__put_pi_state(struct futex_pi_state *pi_state) +static void put_pi_state(struct futex_pi_state *pi_state) { if (!pi_state) - return NULL; + return; if (!atomic_dec_and_test(&pi_state->refcount)) - return NULL; + return; /* * If pi_state->owner is NULL, the owner is most probably dying @@ -857,7 +857,9 @@ static struct futex_pi_state *__put_pi_state(struct futex_pi_state *pi_state) raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); } - if (!current->pi_state_cache) { + if (current->pi_state_cache) { + kfree(pi_state); + } else { /* * pi_state->list is already empty. * clear pi_state->owner. @@ -866,30 +868,6 @@ static struct futex_pi_state *__put_pi_state(struct futex_pi_state *pi_state) pi_state->owner = NULL; atomic_set(&pi_state->refcount, 1); current->pi_state_cache = pi_state; - pi_state = NULL; - } - return pi_state; -} - -static void put_pi_state(struct futex_pi_state *pi_state) -{ - kfree(__put_pi_state(pi_state)); -} - -static void put_pi_state_atomic(struct futex_pi_state *pi_state, - struct list_head *to_free) -{ - if (__put_pi_state(pi_state)) - list_add(&pi_state->list, to_free); -} - -static void free_pi_state_list(struct list_head *to_free) -{ - struct futex_pi_state *p, *next; - - list_for_each_entry_safe(p, next, to_free, list) { - list_del(&p->list); - kfree(p); } } @@ -924,7 +902,6 @@ static void exit_pi_state_list(struct task_struct *curr) struct futex_pi_state *pi_state; struct futex_hash_bucket *hb; union futex_key key = FUTEX_KEY_INIT; - LIST_HEAD(to_free); if (!futex_cmpxchg_enabled) return; @@ -958,7 +935,7 @@ static void exit_pi_state_list(struct task_struct *curr) } raw_spin_unlock_irq(&curr->pi_lock); - raw_spin_lock(&hb->lock); + spin_lock(&hb->lock); raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); raw_spin_lock(&curr->pi_lock); /* @@ -968,8 +945,10 @@ static void exit_pi_state_list(struct task_struct *curr) if (head->next != next) { /* retain curr->pi_lock for the loop invariant */ raw_spin_unlock(&pi_state->pi_mutex.wait_lock); - raw_spin_unlock(&hb->lock); - put_pi_state_atomic(pi_state, &to_free); + raw_spin_unlock_irq(&curr->pi_lock); + spin_unlock(&hb->lock); + raw_spin_lock_irq(&curr->pi_lock); + put_pi_state(pi_state); continue; } @@ -980,7 +959,7 @@ static void exit_pi_state_list(struct task_struct *curr) raw_spin_unlock(&curr->pi_lock); raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); - raw_spin_unlock(&hb->lock); + spin_unlock(&hb->lock); rt_mutex_futex_unlock(&pi_state->pi_mutex); put_pi_state(pi_state); @@ -988,8 +967,6 @@ static void exit_pi_state_list(struct task_struct *curr) raw_spin_lock_irq(&curr->pi_lock); } raw_spin_unlock_irq(&curr->pi_lock); - - free_pi_state_list(&to_free); } #else static inline void exit_pi_state_list(struct task_struct *curr) { } @@ -1530,7 +1507,7 @@ static void __unqueue_futex(struct futex_q *q) { struct futex_hash_bucket *hb; - if (WARN_ON_SMP(!q->lock_ptr || !raw_spin_is_locked(q->lock_ptr)) + if (WARN_ON_SMP(!q->lock_ptr || !spin_is_locked(q->lock_ptr)) || WARN_ON(plist_node_empty(&q->list))) return; @@ -1658,21 +1635,21 @@ static inline void double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2) { if (hb1 <= hb2) { - raw_spin_lock(&hb1->lock); + spin_lock(&hb1->lock); if (hb1 < hb2) - raw_spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING); + spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING); } else { /* hb1 > hb2 */ - raw_spin_lock(&hb2->lock); - raw_spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING); + spin_lock(&hb2->lock); + 
spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING); } } static inline void double_unlock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2) { - raw_spin_unlock(&hb1->lock); + spin_unlock(&hb1->lock); if (hb1 != hb2) - raw_spin_unlock(&hb2->lock); + spin_unlock(&hb2->lock); } /* @@ -1700,7 +1677,7 @@ futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset) if (!hb_waiters_pending(hb)) goto out_put_key; - raw_spin_lock(&hb->lock); + spin_lock(&hb->lock); plist_for_each_entry_safe(this, next, &hb->chain, list) { if (match_futex (&this->key, &key)) { @@ -1719,7 +1696,7 @@ futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset) } } - raw_spin_unlock(&hb->lock); + spin_unlock(&hb->lock); wake_up_q(&wake_q); out_put_key: put_futex_key(&key); @@ -2032,7 +2009,6 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags, struct futex_hash_bucket *hb1, *hb2; struct futex_q *this, *next; DEFINE_WAKE_Q(wake_q); - LIST_HEAD(to_free); if (nr_wake < 0 || nr_requeue < 0) return -EINVAL; @@ -2271,6 +2247,16 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags, requeue_pi_wake_futex(this, &key2, hb2); drop_count++; continue; + } else if (ret == -EAGAIN) { + /* + * Waiter was woken by timeout or + * signal and has set pi_blocked_on to + * PI_WAKEUP_INPROGRESS before we + * tried to enqueue it on the rtmutex. + */ + this->pi_state = NULL; + put_pi_state(pi_state); + continue; } else if (ret) { /* * rt_mutex_start_proxy_lock() detected a @@ -2281,7 +2267,7 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags, * object. */ this->pi_state = NULL; - put_pi_state_atomic(pi_state, &to_free); + put_pi_state(pi_state); /* * We stop queueing more waiters and let user * space deal with the mess. @@ -2298,7 +2284,7 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags, * in futex_proxy_trylock_atomic() or in lookup_pi_state(). We * need to drop it here again. */ - put_pi_state_atomic(pi_state, &to_free); + put_pi_state(pi_state); out_unlock: double_unlock_hb(hb1, hb2); @@ -2319,7 +2305,6 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags, out_put_key1: put_futex_key(&key1); out: - free_pi_state_list(&to_free); return ret ? ret : task_count; } @@ -2343,8 +2328,7 @@ static inline struct futex_hash_bucket *queue_lock(struct futex_q *q) q->lock_ptr = &hb->lock; - raw_spin_lock(&hb->lock); - + spin_lock(&hb->lock); return hb; } @@ -2352,7 +2336,7 @@ static inline void queue_unlock(struct futex_hash_bucket *hb) __releases(&hb->lock) { - raw_spin_unlock(&hb->lock); + spin_unlock(&hb->lock); hb_waiters_dec(hb); } @@ -2391,7 +2375,7 @@ static inline void queue_me(struct futex_q *q, struct futex_hash_bucket *hb) __releases(&hb->lock) { __queue_me(q, hb); - raw_spin_unlock(&hb->lock); + spin_unlock(&hb->lock); } /** @@ -2407,41 +2391,41 @@ static inline void queue_me(struct futex_q *q, struct futex_hash_bucket *hb) */ static int unqueue_me(struct futex_q *q) { - raw_spinlock_t *lock_ptr; + spinlock_t *lock_ptr; int ret = 0; /* In the common case we don't take the spinlock, which is nice. */ retry: /* - * q->lock_ptr can change between this read and the following - * raw_spin_lock. Use READ_ONCE to forbid the compiler from reloading - * q->lock_ptr and optimizing lock_ptr out of the logic below. + * q->lock_ptr can change between this read and the following spin_lock. + * Use READ_ONCE to forbid the compiler from reloading q->lock_ptr and + * optimizing lock_ptr out of the logic below. 
*/ lock_ptr = READ_ONCE(q->lock_ptr); if (lock_ptr != NULL) { - raw_spin_lock(lock_ptr); + spin_lock(lock_ptr); /* * q->lock_ptr can change between reading it and - * raw_spin_lock(), causing us to take the wrong lock. This + * spin_lock(), causing us to take the wrong lock. This * corrects the race condition. * * Reasoning goes like this: if we have the wrong lock, * q->lock_ptr must have changed (maybe several times) - * between reading it and the raw_spin_lock(). It can - * change again after the raw_spin_lock() but only if it was - * already changed before the raw_spin_lock(). It cannot, + * between reading it and the spin_lock(). It can + * change again after the spin_lock() but only if it was + * already changed before the spin_lock(). It cannot, * however, change back to the original value. Therefore * we can detect whether we acquired the correct lock. */ if (unlikely(lock_ptr != q->lock_ptr)) { - raw_spin_unlock(lock_ptr); + spin_unlock(lock_ptr); goto retry; } __unqueue_futex(q); BUG_ON(q->pi_state); - raw_spin_unlock(lock_ptr); + spin_unlock(lock_ptr); ret = 1; } @@ -2457,16 +2441,13 @@ static int unqueue_me(struct futex_q *q) static void unqueue_me_pi(struct futex_q *q) __releases(q->lock_ptr) { - struct futex_pi_state *ps; - __unqueue_futex(q); BUG_ON(!q->pi_state); - ps = __put_pi_state(q->pi_state); + put_pi_state(q->pi_state); q->pi_state = NULL; - raw_spin_unlock(q->lock_ptr); - kfree(ps); + spin_unlock(q->lock_ptr); } static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, @@ -2599,7 +2580,7 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, */ handle_err: raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); - raw_spin_unlock(q->lock_ptr); + spin_unlock(q->lock_ptr); switch (err) { case -EFAULT: @@ -2617,7 +2598,7 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, break; } - raw_spin_lock(q->lock_ptr); + spin_lock(q->lock_ptr); raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); /* @@ -2713,7 +2694,7 @@ static void futex_wait_queue_me(struct futex_hash_bucket *hb, struct futex_q *q, /* * The task state is guaranteed to be set before another task can * wake it. set_current_state() is implemented using smp_store_mb() and - * queue_me() calls raw_spin_unlock() upon completion, both serializing + * queue_me() calls spin_unlock() upon completion, both serializing * access to the hash list and forcing another memory barrier. */ set_current_state(TASK_INTERRUPTIBLE); @@ -3013,7 +2994,15 @@ static int futex_lock_pi(u32 __user *uaddr, unsigned int flags, * before __rt_mutex_start_proxy_lock() is done. */ raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock); - raw_spin_unlock(q.lock_ptr); + /* + * the migrate_disable() here disables migration in the in_atomic() fast + * path which is enabled again in the following spin_unlock(). We have + * one migrate_disable() pending in the slow-path which is reversed + * after the raw_spin_unlock_irq() where we leave the atomic context. 
+ */ + migrate_disable(); + + spin_unlock(q.lock_ptr); /* * __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter * such that futex_unlock_pi() is guaranteed to observe the waiter when @@ -3021,6 +3010,7 @@ static int futex_lock_pi(u32 __user *uaddr, unsigned int flags, */ ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current); raw_spin_unlock_irq(&q.pi_state->pi_mutex.wait_lock); + migrate_enable(); if (ret) { if (ret == 1) @@ -3034,7 +3024,7 @@ static int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ret = rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter); cleanup: - raw_spin_lock(q.lock_ptr); + spin_lock(q.lock_ptr); /* * If we failed to acquire the lock (deadlock/signal/timeout), we must * first acquire the hb->lock before removing the lock from the @@ -3135,7 +3125,7 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags) return ret; hb = hash_futex(&key); - raw_spin_lock(&hb->lock); + spin_lock(&hb->lock); /* * Check waiters first. We do not trust user space values at @@ -3169,10 +3159,19 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags) * rt_waiter. Also see the WARN in wake_futex_pi(). */ raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); - raw_spin_unlock(&hb->lock); + /* + * Magic trickery for now to make the RT migrate disable + * logic happy. The following spin_unlock() happens with + * interrupts disabled so the internal migrate_enable() + * won't undo the migrate_disable() which was issued when + * locking hb->lock. + */ + migrate_disable(); + spin_unlock(&hb->lock); /* drops pi_state->pi_mutex.wait_lock */ ret = wake_futex_pi(uaddr, uval, pi_state); + migrate_enable(); put_pi_state(pi_state); @@ -3208,7 +3207,7 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags) * owner. */ if ((ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, 0))) { - raw_spin_unlock(&hb->lock); + spin_unlock(&hb->lock); switch (ret) { case -EFAULT: goto pi_faulted; @@ -3228,7 +3227,7 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags) ret = (curval == uval) ? 0 : -EAGAIN; out_unlock: - raw_spin_unlock(&hb->lock); + spin_unlock(&hb->lock); out_putkey: put_futex_key(&key); return ret; @@ -3344,7 +3343,7 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, struct hrtimer_sleeper timeout, *to = NULL; struct futex_pi_state *pi_state = NULL; struct rt_mutex_waiter rt_waiter; - struct futex_hash_bucket *hb; + struct futex_hash_bucket *hb, *hb2; union futex_key key2 = FUTEX_KEY_INIT; struct futex_q q = futex_q_init; int res, ret; @@ -3402,20 +3401,55 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, /* Queue the futex_q, drop the hb lock, wait for wakeup. */ futex_wait_queue_me(hb, &q, to); - raw_spin_lock(&hb->lock); - ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to); - raw_spin_unlock(&hb->lock); - if (ret) - goto out_put_keys; + /* + * On RT we must avoid races with requeue and trying to block + * on two mutexes (hb->lock and uaddr2's rtmutex) by + * serializing access to pi_blocked_on with pi_lock. + */ + raw_spin_lock_irq(¤t->pi_lock); + if (current->pi_blocked_on) { + /* + * We have been requeued or are in the process of + * being requeued. + */ + raw_spin_unlock_irq(¤t->pi_lock); + } else { + /* + * Setting pi_blocked_on to PI_WAKEUP_INPROGRESS + * prevents a concurrent requeue from moving us to the + * uaddr2 rtmutex. After that we can safely acquire + * (and possibly block on) hb->lock. 
+ */ + current->pi_blocked_on = PI_WAKEUP_INPROGRESS; + raw_spin_unlock_irq(¤t->pi_lock); + + spin_lock(&hb->lock); + + /* + * Clean up pi_blocked_on. We might leak it otherwise + * when we succeeded with the hb->lock in the fast + * path. + */ + raw_spin_lock_irq(¤t->pi_lock); + current->pi_blocked_on = NULL; + raw_spin_unlock_irq(¤t->pi_lock); + + ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to); + spin_unlock(&hb->lock); + if (ret) + goto out_put_keys; + } /* - * In order for us to be here, we know our q.key == key2, and since - * we took the hb->lock above, we also know that futex_requeue() has - * completed and we no longer have to concern ourselves with a wakeup - * race with the atomic proxy lock acquisition by the requeue code. The - * futex_requeue dropped our key1 reference and incremented our key2 - * reference count. + * In order to be here, we have either been requeued, are in + * the process of being requeued, or requeue successfully + * acquired uaddr2 on our behalf. If pi_blocked_on was + * non-null above, we may be racing with a requeue. Do not + * rely on q->lock_ptr to be hb2->lock until after blocking on + * hb->lock or hb2->lock. The futex_requeue dropped our key1 + * reference and incremented our key2 reference count. */ + hb2 = hash_futex(&key2); /* Check if the requeue code acquired the second futex for us. */ if (!q.rt_waiter) { @@ -3424,9 +3458,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, * did a lock-steal - fix up the PI-state in that case. */ if (q.pi_state && (q.pi_state->owner != current)) { - struct futex_pi_state *ps_free; - - raw_spin_lock(q.lock_ptr); + spin_lock(&hb2->lock); + BUG_ON(&hb2->lock != q.lock_ptr); ret = fixup_pi_state_owner(uaddr2, &q, current); if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) { pi_state = q.pi_state; @@ -3436,9 +3469,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, * Drop the reference to the pi state which * the requeue_pi() code acquired for us. */ - ps_free = __put_pi_state(q.pi_state); - raw_spin_unlock(q.lock_ptr); - kfree(ps_free); + put_pi_state(q.pi_state); + spin_unlock(&hb2->lock); } } else { struct rt_mutex *pi_mutex; @@ -3452,7 +3484,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, pi_mutex = &q.pi_state->pi_mutex; ret = rt_mutex_wait_proxy_lock(pi_mutex, to, &rt_waiter); - raw_spin_lock(q.lock_ptr); + spin_lock(&hb2->lock); + BUG_ON(&hb2->lock != q.lock_ptr); if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter)) ret = 0; @@ -4225,7 +4258,7 @@ static int __init futex_init(void) for (i = 0; i < futex_hashsize; i++) { atomic_set(&futex_queues[i].waiters, 0); plist_head_init(&futex_queues[i].chain); - raw_spin_lock_init(&futex_queues[i].lock); + spin_lock_init(&futex_queues[i].lock); } return 0; diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c index e1497623780b..1177f2815040 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c @@ -142,6 +142,12 @@ static void fixup_rt_mutex_waiters(struct rt_mutex *lock) WRITE_ONCE(*p, owner & ~RT_MUTEX_HAS_WAITERS); } +static int rt_mutex_real_waiter(struct rt_mutex_waiter *waiter) +{ + return waiter && waiter != PI_WAKEUP_INPROGRESS && + waiter != PI_REQUEUE_INPROGRESS; +} + /* * We can speed up the acquire/release, if there's no debugging state to be * set up. @@ -415,7 +421,8 @@ int max_lock_depth = 1024; static inline struct rt_mutex *task_blocked_on_lock(struct task_struct *p) { - return p->pi_blocked_on ? 
p->pi_blocked_on->lock : NULL; + return rt_mutex_real_waiter(p->pi_blocked_on) ? + p->pi_blocked_on->lock : NULL; } /* @@ -551,7 +558,7 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task, * reached or the state of the chain has changed while we * dropped the locks. */ - if (!waiter) + if (!rt_mutex_real_waiter(waiter)) goto out_unlock_pi; /* @@ -1334,6 +1341,22 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock, return -EDEADLK; raw_spin_lock(&task->pi_lock); + /* + * In the case of futex requeue PI, this will be a proxy + * lock. The task will wake unaware that it is enqueueed on + * this lock. Avoid blocking on two locks and corrupting + * pi_blocked_on via the PI_WAKEUP_INPROGRESS + * flag. futex_wait_requeue_pi() sets this when it wakes up + * before requeue (due to a signal or timeout). Do not enqueue + * the task if PI_WAKEUP_INPROGRESS is set. + */ + if (task != current && task->pi_blocked_on == PI_WAKEUP_INPROGRESS) { + raw_spin_unlock(&task->pi_lock); + return -EAGAIN; + } + + BUG_ON(rt_mutex_real_waiter(task->pi_blocked_on)); + waiter->task = task; waiter->lock = lock; waiter->prio = task->prio; @@ -1357,7 +1380,7 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock, rt_mutex_enqueue_pi(owner, waiter); rt_mutex_adjust_prio(owner); - if (owner->pi_blocked_on) + if (rt_mutex_real_waiter(owner->pi_blocked_on)) chain_walk = 1; } else if (rt_mutex_cond_detect_deadlock(waiter, chwalk)) { chain_walk = 1; @@ -1457,7 +1480,7 @@ static void remove_waiter(struct rt_mutex *lock, { bool is_top_waiter = (waiter == rt_mutex_top_waiter(lock)); struct task_struct *owner = rt_mutex_owner(lock); - struct rt_mutex *next_lock; + struct rt_mutex *next_lock = NULL; lockdep_assert_held(&lock->wait_lock); @@ -1483,7 +1506,8 @@ static void remove_waiter(struct rt_mutex *lock, rt_mutex_adjust_prio(owner); /* Store the lock on which owner is blocked or NULL */ - next_lock = task_blocked_on_lock(owner); + if (rt_mutex_real_waiter(owner->pi_blocked_on)) + next_lock = task_blocked_on_lock(owner); raw_spin_unlock(&owner->pi_lock); @@ -1519,7 +1543,8 @@ void rt_mutex_adjust_pi(struct task_struct *task) raw_spin_lock_irqsave(&task->pi_lock, flags); waiter = task->pi_blocked_on; - if (!waiter || rt_mutex_waiter_equal(waiter, task_to_waiter(task))) { + if (!rt_mutex_real_waiter(waiter) || + rt_mutex_waiter_equal(waiter, task_to_waiter(task))) { raw_spin_unlock_irqrestore(&task->pi_lock, flags); return; } @@ -2333,6 +2358,34 @@ int __rt_mutex_start_proxy_lock(struct rt_mutex *lock, if (try_to_take_rt_mutex(lock, task, NULL)) return 1; +#ifdef CONFIG_PREEMPT_RT_FULL + /* + * In PREEMPT_RT there's an added race. + * If the task, that we are about to requeue, times out, + * it can set the PI_WAKEUP_INPROGRESS. This tells the requeue + * to skip this task. But right after the task sets + * its pi_blocked_on to PI_WAKEUP_INPROGRESS it can then + * block on the spin_lock(&hb->lock), which in RT is an rtmutex. + * This will replace the PI_WAKEUP_INPROGRESS with the actual + * lock that it blocks on. We *must not* place this task + * on this proxy lock in that case. + * + * To prevent this race, we first take the task's pi_lock + * and check if it has updated its pi_blocked_on. If it has, + * we assume that it woke up and we return -EAGAIN. + * Otherwise, we set the task's pi_blocked_on to + * PI_REQUEUE_INPROGRESS, so that if the task is waking up + * it will know that we are in the process of requeuing it. 
+ */ + raw_spin_lock(&task->pi_lock); + if (task->pi_blocked_on) { + raw_spin_unlock(&task->pi_lock); + return -EAGAIN; + } + task->pi_blocked_on = PI_REQUEUE_INPROGRESS; + raw_spin_unlock(&task->pi_lock); +#endif + /* We enforce deadlock detection for futexes */ ret = task_blocks_on_rt_mutex(lock, waiter, task, RT_MUTEX_FULL_CHAINWALK); diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h index 2f6662d052d6..2a157c78e18c 100644 --- a/kernel/locking/rtmutex_common.h +++ b/kernel/locking/rtmutex_common.h @@ -131,6 +131,9 @@ enum rtmutex_chainwalk { /* * PI-futex support (proxy locking functions, etc.): */ +#define PI_WAKEUP_INPROGRESS ((struct rt_mutex_waiter *) 1) +#define PI_REQUEUE_INPROGRESS ((struct rt_mutex_waiter *) 2) + extern struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock); extern void rt_mutex_init_proxy_locked(struct rt_mutex *lock, struct task_struct *proxy_owner); From patchwork Fri Feb 21 21:24:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Zanussi X-Patchwork-Id: 213210 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.1 required=3.0 tests=DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D12FBC35666 for ; Fri, 21 Feb 2020 21:26:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 9AFDC206EF for ; Fri, 21 Feb 2020 21:26:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320391; bh=PL3ARuBtrQ1fVJQpeUG6r9odQMSABBgCWQJmf5DOwdk=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:List-ID:From; b=ncGy/SxnvZLAY/t0GG6ysCIL4XU250UlslvxZQn5+UxxxNoLqUl1Nc03FFL4b7aZK jDRvhvg1/hIgaC6C60I0yd0zaeKzA5RHbl0M2lAnT+tSaslOqmULaXJrzeqiXGueUO qsCTj7cBeBS6+C5TaE7qH5vcmUakG9BGzQ8L11O4= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729372AbgBUV02 (ORCPT ); Fri, 21 Feb 2020 16:26:28 -0500 Received: from mail.kernel.org ([198.145.29.99]:39236 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729292AbgBUVZ2 (ORCPT ); Fri, 21 Feb 2020 16:25:28 -0500 Received: from localhost.localdomain (c-98-220-238-81.hsd1.il.comcast.net [98.220.238.81]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 52BE824683; Fri, 21 Feb 2020 21:25:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320328; bh=PL3ARuBtrQ1fVJQpeUG6r9odQMSABBgCWQJmf5DOwdk=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=aHkctyYZVhKo76tPv28JvsQ0anWlNuvePoP5T9IrAdeE1kgdu1tNVpaLopZumSQ8G /ObzrF4H2/z/YwsCzkoqGHFl9ubk8EsHEQOoZol95GHlMxkM4JYOb6JG0XJO6b7Tsy ntOtFZELlEPNlg5B32ek5JAiQMg0jAcak9TUAPj8= From: zanussi@kernel.org To: LKML , linux-rt-users , Steven Rostedt , Thomas Gleixner , Carsten Emde , John Kacur , Sebastian Andrzej Siewior , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 13/25] lib/ubsan: Don't seralize UBSAN report Date: Fri, 21 Feb 2020 15:24:41 -0600 Message-Id: 
<979d699889bdefd12405df031feb27f4256d895d.1582320278.git.zanussi@kernel.org> X-Mailer: git-send-email 2.14.1 In-Reply-To: References: In-Reply-To: References: Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org From: Julien Grall v4.14.170-rt75-rc1 stable review patch. If anyone has any objections, please let me know. ----------- [ Upstream commmit 4702c28ac777b27acb499cbd5e8e787ce1a7d82d ] At the moment, UBSAN report will be serialized using a spin_lock(). On RT-systems, spinlocks are turned to rt_spin_lock and may sleep. This will result to the following splat if the undefined behavior is in a context that can sleep: | BUG: sleeping function called from invalid context at /src/linux/kernel/locking/rtmutex.c:968 | in_atomic(): 1, irqs_disabled(): 128, pid: 3447, name: make | 1 lock held by make/3447: | #0: 000000009a966332 (&mm->mmap_sem){++++}, at: do_page_fault+0x140/0x4f8 | Preemption disabled at: | [] rt_mutex_futex_unlock+0x4c/0xb0 | CPU: 3 PID: 3447 Comm: make Tainted: G W 5.2.14-rt7-01890-ge6e057589653 #911 | Call trace: | dump_backtrace+0x0/0x148 | show_stack+0x14/0x20 | dump_stack+0xbc/0x104 | ___might_sleep+0x154/0x210 | rt_spin_lock+0x68/0xa0 | ubsan_prologue+0x30/0x68 | handle_overflow+0x64/0xe0 | __ubsan_handle_add_overflow+0x10/0x18 | __lock_acquire+0x1c28/0x2a28 | lock_acquire+0xf0/0x370 | _raw_spin_lock_irqsave+0x58/0x78 | rt_mutex_futex_unlock+0x4c/0xb0 | rt_spin_unlock+0x28/0x70 | get_page_from_freelist+0x428/0x2b60 | __alloc_pages_nodemask+0x174/0x1708 | alloc_pages_vma+0x1ac/0x238 | __handle_mm_fault+0x4ac/0x10b0 | handle_mm_fault+0x1d8/0x3b0 | do_page_fault+0x1c8/0x4f8 | do_translation_fault+0xb8/0xe0 | do_mem_abort+0x3c/0x98 | el0_da+0x20/0x24 The spin_lock() will protect against multiple CPUs to output a report together, I guess to prevent them to be interleaved. However, they can still interleave with other messages (and even splat from __migth_sleep). So the lock usefulness seems pretty limited. Rather than trying to accomodate RT-system by switching to a raw_spin_lock(), the lock is now completely dropped. 
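Before the tags, a kernel-style sketch (an assumed example, not code from
lib/ubsan.c) of the RT locking rule the changelog describes: spinlock_t becomes
a sleeping lock on PREEMPT_RT, so a report path that can run in atomic context
must either use a raw_spinlock_t or, as this patch chooses, no lock at all.

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(report_lock);          /* sleeping lock on RT */
    static DEFINE_RAW_SPINLOCK(raw_report_lock);  /* always a spinning lock */

    static void report_from_atomic_context(void)
    {
            unsigned long flags;

            /*
             * Assume the caller is already atomic, e.g. holding a raw lock
             * with interrupts off, as in the __lock_acquire() path in the
             * splat above.
             *
             * spin_lock(&report_lock);  <-- would trigger the "sleeping
             *                               function called from invalid
             *                               context" splat on PREEMPT_RT
             */

            /* Option 1: a raw spinlock never sleeps ... */
            raw_spin_lock_irqsave(&raw_report_lock, flags);
            /* ... emit the report ... */
            raw_spin_unlock_irqrestore(&raw_report_lock, flags);

            /*
             * Option 2 (what this patch does): drop the lock entirely and
             * accept that reports from different CPUs may interleave.
             */
    }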
Link: https://lkml.kernel.org/r/20190920100835.14999-1-julien.grall@arm.com Reported-by: Andre Przywara Signed-off-by: Julien Grall Acked-by: Andrey Ryabinin Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Tom Zanussi Conflicts: lib/ubsan.c --- lib/ubsan.c | 76 ++++++++++++++++++++++--------------------------------------- 1 file changed, 27 insertions(+), 49 deletions(-) diff --git a/lib/ubsan.c b/lib/ubsan.c index c652b4a820cc..f94cfb3a41ed 100644 --- a/lib/ubsan.c +++ b/lib/ubsan.c @@ -147,26 +147,21 @@ static bool location_is_valid(struct source_location *loc) { return loc->file_name != NULL; } - -static DEFINE_SPINLOCK(report_lock); - -static void ubsan_prologue(struct source_location *location, - unsigned long *flags) +static void ubsan_prologue(struct source_location *location) { current->in_ubsan++; - spin_lock_irqsave(&report_lock, *flags); pr_err("========================================" "========================================\n"); print_source_location("UBSAN: Undefined behaviour in", location); } -static void ubsan_epilogue(unsigned long *flags) +static void ubsan_epilogue(void) { dump_stack(); pr_err("========================================" "========================================\n"); - spin_unlock_irqrestore(&report_lock, *flags); + current->in_ubsan--; } @@ -175,14 +170,13 @@ static void handle_overflow(struct overflow_data *data, void *lhs, { struct type_descriptor *type = data->type; - unsigned long flags; char lhs_val_str[VALUE_LENGTH]; char rhs_val_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(lhs_val_str, sizeof(lhs_val_str), type, lhs); val_to_string(rhs_val_str, sizeof(rhs_val_str), type, rhs); @@ -194,7 +188,7 @@ static void handle_overflow(struct overflow_data *data, void *lhs, rhs_val_str, type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } void __ubsan_handle_add_overflow(struct overflow_data *data, @@ -222,20 +216,19 @@ EXPORT_SYMBOL(__ubsan_handle_mul_overflow); void __ubsan_handle_negate_overflow(struct overflow_data *data, void *old_val) { - unsigned long flags; char old_val_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(old_val_str, sizeof(old_val_str), data->type, old_val); pr_err("negation of %s cannot be represented in type %s:\n", old_val_str, data->type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_negate_overflow); @@ -243,13 +236,12 @@ EXPORT_SYMBOL(__ubsan_handle_negate_overflow); void __ubsan_handle_divrem_overflow(struct overflow_data *data, void *lhs, void *rhs) { - unsigned long flags; char rhs_val_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(rhs_val_str, sizeof(rhs_val_str), data->type, rhs); @@ -259,58 +251,52 @@ void __ubsan_handle_divrem_overflow(struct overflow_data *data, else pr_err("division by zero\n"); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_divrem_overflow); static void handle_null_ptr_deref(struct type_mismatch_data_common *data) { - unsigned long flags; - if (suppress_report(data->location)) return; - ubsan_prologue(data->location, &flags); + ubsan_prologue(data->location); pr_err("%s null pointer of type %s\n", type_check_kinds[data->type_check_kind], data->type->type_name); - ubsan_epilogue(&flags); 
+ ubsan_epilogue(); } static void handle_misaligned_access(struct type_mismatch_data_common *data, unsigned long ptr) { - unsigned long flags; - if (suppress_report(data->location)) return; - ubsan_prologue(data->location, &flags); + ubsan_prologue(data->location); pr_err("%s misaligned address %p for type %s\n", type_check_kinds[data->type_check_kind], (void *)ptr, data->type->type_name); pr_err("which requires %ld byte alignment\n", data->alignment); - ubsan_epilogue(&flags); + ubsan_epilogue(); } static void handle_object_size_mismatch(struct type_mismatch_data_common *data, unsigned long ptr) { - unsigned long flags; - if (suppress_report(data->location)) return; - ubsan_prologue(data->location, &flags); + ubsan_prologue(data->location); pr_err("%s address %p with insufficient space\n", type_check_kinds[data->type_check_kind], (void *) ptr); pr_err("for an object of type %s\n", data->type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } static void ubsan_type_mismatch_common(struct type_mismatch_data_common *data, @@ -356,12 +342,10 @@ EXPORT_SYMBOL(__ubsan_handle_type_mismatch_v1); void __ubsan_handle_nonnull_return(struct nonnull_return_data *data) { - unsigned long flags; - if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); pr_err("null pointer returned from function declared to never return null\n"); @@ -369,49 +353,46 @@ void __ubsan_handle_nonnull_return(struct nonnull_return_data *data) print_source_location("returns_nonnull attribute specified in", &data->attr_location); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_nonnull_return); void __ubsan_handle_vla_bound_not_positive(struct vla_bound_data *data, void *bound) { - unsigned long flags; char bound_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(bound_str, sizeof(bound_str), data->type, bound); pr_err("variable length array bound value %s <= 0\n", bound_str); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_vla_bound_not_positive); void __ubsan_handle_out_of_bounds(struct out_of_bounds_data *data, void *index) { - unsigned long flags; char index_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(index_str, sizeof(index_str), data->index_type, index); pr_err("index %s is out of range for type %s\n", index_str, data->array_type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_out_of_bounds); void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data, void *lhs, void *rhs) { - unsigned long flags; struct type_descriptor *rhs_type = data->rhs_type; struct type_descriptor *lhs_type = data->lhs_type; char rhs_str[VALUE_LENGTH]; @@ -420,7 +401,7 @@ void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data, if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(rhs_str, sizeof(rhs_str), rhs_type, rhs); val_to_string(lhs_str, sizeof(lhs_str), lhs_type, lhs); @@ -443,18 +424,16 @@ void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data, lhs_str, rhs_str, lhs_type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_shift_out_of_bounds); void 
__ubsan_handle_builtin_unreachable(struct unreachable_data *data) { - unsigned long flags; - - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); pr_err("calling __builtin_unreachable()\n"); - ubsan_epilogue(&flags); + ubsan_epilogue(); panic("can't return from __builtin_unreachable()"); } EXPORT_SYMBOL(__ubsan_handle_builtin_unreachable); @@ -462,19 +441,18 @@ EXPORT_SYMBOL(__ubsan_handle_builtin_unreachable); void __ubsan_handle_load_invalid_value(struct invalid_value_data *data, void *val) { - unsigned long flags; char val_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(val_str, sizeof(val_str), data->type, val); pr_err("load of value %s is not a valid value for type %s\n", val_str, data->type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_load_invalid_value); From patchwork Fri Feb 21 21:24:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Zanussi X-Patchwork-Id: 213211 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.1 required=3.0 tests=DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 579ECC3566F for ; Fri, 21 Feb 2020 21:26:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 1F213206EF for ; Fri, 21 Feb 2020 21:26:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320387; bh=w+SEUaeZRJ1y/bxhM5og4B/G9JdKcfZzdUg3aHde21w=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:List-ID:From; b=g7g3voirPDMT5o4oOqf8VcKwO3oN2ytCov35dE+j/9OzZA8BfBIATGhmSTUlwmXNy 2yzZQHpEjAlgUa+xc87dXNQHjpguXU3yk8YfQfEQ1HUbFiEg3F0W7ffk3ZNohMzVDQ 7E/jHaNm/sls7/fwKVfsXdxpWf8VwR9Dgh8YJTH0= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729518AbgBUV0U (ORCPT ); Fri, 21 Feb 2020 16:26:20 -0500 Received: from mail.kernel.org ([198.145.29.99]:39246 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729172AbgBUVZa (ORCPT ); Fri, 21 Feb 2020 16:25:30 -0500 Received: from localhost.localdomain (c-98-220-238-81.hsd1.il.comcast.net [98.220.238.81]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 4484824684; Fri, 21 Feb 2020 21:25:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320329; bh=w+SEUaeZRJ1y/bxhM5og4B/G9JdKcfZzdUg3aHde21w=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=M8RQ9Iw18gfKkbfXrhaGwI+XsUw7O4bG6GDAWMiqEeO8NEd6DYjwKdF/h01BkM/o2 TXImbqz5xio776VzRWEHl6qIU/cvadiUSHcTaSlfz7Rf+IfVZ2n7sQ2lwjbuNYnmLE QNEItdrK8RZ5TZ52NK1SJa7vR+UAcGB59+JPH4TY= From: zanussi@kernel.org To: LKML , linux-rt-users , Steven Rostedt , Thomas Gleixner , Carsten Emde , John Kacur , Sebastian Andrzej Siewior , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 14/25] kmemleak: Change the lock of kmemleak_object to raw_spinlock_t Date: Fri, 21 Feb 2020 
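[ Illustration only, not part of the series, referring back to the lib/ubsan.c change above: with report_lock gone, reports from different CPUs may interleave, but recursion into the handlers is still cut off because the per-task current->in_ubsan counter, raised in ubsan_prologue() and dropped in ubsan_epilogue(), is consulted before a new report is started (via suppress_report()). A minimal userspace analogue of that lock-free recursion guard, all names hypothetical: ]

  /* Sketch only -- hypothetical userspace analogue, not kernel code. */
  #include <stdio.h>

  static __thread int in_report;          /* plays the role of current->in_ubsan */

  static void report_prologue(void)       /* cf. ubsan_prologue() */
  {
          in_report++;
          printf("==== report start ====\n");
  }

  static void report_epilogue(void)       /* cf. ubsan_epilogue() */
  {
          printf("==== report end ====\n");
          in_report--;
  }

  static void report(const char *what)
  {
          if (in_report)                  /* cf. suppress_report(): the guard is
                                             per task, so no lock is needed */
                  return;
          report_prologue();
          printf("undefined behaviour: %s\n", what);
          report_epilogue();
  }

  int main(void)
  {
          report("signed integer overflow");      /* prints one framed report */
          return 0;
  }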
15:24:42 -0600 Message-Id: <3da095f98a2d5bca50190b69799f5d8b57af47da.1582320278.git.zanussi@kernel.org> X-Mailer: git-send-email 2.14.1 In-Reply-To: References: In-Reply-To: References: Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org From: Liu Haitao v4.14.170-rt75-rc1 stable review patch. If anyone has any objections, please let me know. ----------- [ Upstream commit 217847f57119b5fdd377bfa3d344613ddb98d9fc ] The commit ("kmemleak: Turn kmemleak_lock to raw spinlock on RT") changed the kmemleak_lock to raw spinlock. However the kmemleak_object->lock is held after the kmemleak_lock is held in scan_block(). Make the object->lock a raw_spinlock_t. Cc: stable-rt@vger.kernel.org Link: https://lkml.kernel.org/r/20190927082230.34152-1-yongxin.liu@windriver.com Signed-off-by: Liu Haitao Signed-off-by: Yongxin Liu Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Tom Zanussi --- mm/kmemleak.c | 72 +++++++++++++++++++++++++++++------------------------------ 1 file changed, 36 insertions(+), 36 deletions(-) diff --git a/mm/kmemleak.c b/mm/kmemleak.c index c18e23619f95..17718a11782b 100644 --- a/mm/kmemleak.c +++ b/mm/kmemleak.c @@ -148,7 +148,7 @@ struct kmemleak_scan_area { * (use_count) and freed using the RCU mechanism. */ struct kmemleak_object { - spinlock_t lock; + raw_spinlock_t lock; unsigned int flags; /* object status flags */ struct list_head object_list; struct list_head gray_list; @@ -562,7 +562,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size, INIT_LIST_HEAD(&object->object_list); INIT_LIST_HEAD(&object->gray_list); INIT_HLIST_HEAD(&object->area_list); - spin_lock_init(&object->lock); + raw_spin_lock_init(&object->lock); atomic_set(&object->use_count, 1); object->flags = OBJECT_ALLOCATED; object->pointer = ptr; @@ -643,9 +643,9 @@ static void __delete_object(struct kmemleak_object *object) * Locking here also ensures that the corresponding memory block * cannot be freed when it is being scanned. 
*/ - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); object->flags &= ~OBJECT_ALLOCATED; - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); put_object(object); } @@ -717,9 +717,9 @@ static void paint_it(struct kmemleak_object *object, int color) { unsigned long flags; - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); __paint_it(object, color); - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); } static void paint_ptr(unsigned long ptr, int color) @@ -779,7 +779,7 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp) goto out; } - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); if (size == SIZE_MAX) { size = object->pointer + object->size - ptr; } else if (ptr + size > object->pointer + object->size) { @@ -795,7 +795,7 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp) hlist_add_head(&area->node, &object->area_list); out_unlock: - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); out: put_object(object); } @@ -818,9 +818,9 @@ static void object_set_excess_ref(unsigned long ptr, unsigned long excess_ref) return; } - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); object->excess_ref = excess_ref; - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); put_object(object); } @@ -840,9 +840,9 @@ static void object_no_scan(unsigned long ptr) return; } - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); object->flags |= OBJECT_NO_SCAN; - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); put_object(object); } @@ -903,11 +903,11 @@ static void early_alloc(struct early_log *log) log->min_count, GFP_ATOMIC); if (!object) goto out; - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); for (i = 0; i < log->trace_len; i++) object->trace[i] = log->trace[i]; object->trace_len = log->trace_len; - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); out: rcu_read_unlock(); } @@ -1097,9 +1097,9 @@ void __ref kmemleak_update_trace(const void *ptr) return; } - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); object->trace_len = __save_stack_trace(object->trace); - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); put_object(object); } @@ -1335,7 +1335,7 @@ static void scan_block(void *_start, void *_end, * previously acquired in scan_object(). These locks are * enclosed by scan_mutex. 
*/ - spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING); + raw_spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING); /* only pass surplus references (object already gray) */ if (color_gray(object)) { excess_ref = object->excess_ref; @@ -1344,7 +1344,7 @@ static void scan_block(void *_start, void *_end, excess_ref = 0; update_refs(object); } - spin_unlock(&object->lock); + raw_spin_unlock(&object->lock); if (excess_ref) { object = lookup_object(excess_ref, 0); @@ -1353,9 +1353,9 @@ static void scan_block(void *_start, void *_end, if (object == scanned) /* circular reference, ignore */ continue; - spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING); + raw_spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING); update_refs(object); - spin_unlock(&object->lock); + raw_spin_unlock(&object->lock); } } raw_spin_unlock_irqrestore(&kmemleak_lock, flags); @@ -1391,7 +1391,7 @@ static void scan_object(struct kmemleak_object *object) * Once the object->lock is acquired, the corresponding memory block * cannot be freed (the same lock is acquired in delete_object). */ - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); if (object->flags & OBJECT_NO_SCAN) goto out; if (!(object->flags & OBJECT_ALLOCATED)) @@ -1410,9 +1410,9 @@ static void scan_object(struct kmemleak_object *object) if (start >= end) break; - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); cond_resched(); - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); } while (object->flags & OBJECT_ALLOCATED); } else hlist_for_each_entry(area, &object->area_list, node) @@ -1420,7 +1420,7 @@ static void scan_object(struct kmemleak_object *object) (void *)(area->start + area->size), object); out: - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); } /* @@ -1473,7 +1473,7 @@ static void kmemleak_scan(void) /* prepare the kmemleak_object's */ rcu_read_lock(); list_for_each_entry_rcu(object, &object_list, object_list) { - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); #ifdef DEBUG /* * With a few exceptions there should be a maximum of @@ -1490,7 +1490,7 @@ static void kmemleak_scan(void) if (color_gray(object) && get_object(object)) list_add_tail(&object->gray_list, &gray_list); - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); } rcu_read_unlock(); @@ -1555,14 +1555,14 @@ static void kmemleak_scan(void) */ rcu_read_lock(); list_for_each_entry_rcu(object, &object_list, object_list) { - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); if (color_white(object) && (object->flags & OBJECT_ALLOCATED) && update_checksum(object) && get_object(object)) { /* color it gray temporarily */ object->count = object->min_count; list_add_tail(&object->gray_list, &gray_list); } - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); } rcu_read_unlock(); @@ -1582,13 +1582,13 @@ static void kmemleak_scan(void) */ rcu_read_lock(); list_for_each_entry_rcu(object, &object_list, object_list) { - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); if (unreferenced_object(object) && !(object->flags & OBJECT_REPORTED)) { object->flags |= OBJECT_REPORTED; new_leaks++; } - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); } rcu_read_unlock(); @@ -1740,10 
+1740,10 @@ static int kmemleak_seq_show(struct seq_file *seq, void *v) struct kmemleak_object *object = v; unsigned long flags; - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); if ((object->flags & OBJECT_REPORTED) && unreferenced_object(object)) print_unreferenced(seq, object); - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); return 0; } @@ -1773,9 +1773,9 @@ static int dump_str_object_info(const char *str) return -EINVAL; } - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); dump_object_info(object); - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); put_object(object); return 0; @@ -1794,11 +1794,11 @@ static void kmemleak_clear(void) rcu_read_lock(); list_for_each_entry_rcu(object, &object_list, object_list) { - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); if ((object->flags & OBJECT_REPORTED) && unreferenced_object(object)) __paint_it(object, KMEMLEAK_GREY); - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); } rcu_read_unlock(); From patchwork Fri Feb 21 21:24:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Tom Zanussi X-Patchwork-Id: 213212 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.1 required=3.0 tests=DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 39474C3566F for ; Fri, 21 Feb 2020 21:26:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 0F93124650 for ; Fri, 21 Feb 2020 21:26:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320377; bh=hfjqjbH/69XjOXVTFuEk8cVS7pwRb367c/FI/hNju5Y=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:List-ID:From; b=lHoFVqPOIgdPFf+WuStJP/lt9osj/La+RD3hfwakDtHL9r4knuvH8vlN48pyZ4J9u UJZZiXha/uXlJ0J+VAXCO6nfT2KYbU+w28W83QSQv6uXgndQL2taSI7xHlqb66PCaV OL9M2ep+ho3QHA0meTi4t69ziUVCuyFhripgYzEg= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729806AbgBUV0N (ORCPT ); Fri, 21 Feb 2020 16:26:13 -0500 Received: from mail.kernel.org ([198.145.29.99]:39328 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729518AbgBUVZc (ORCPT ); Fri, 21 Feb 2020 16:25:32 -0500 Received: from localhost.localdomain (c-98-220-238-81.hsd1.il.comcast.net [98.220.238.81]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 1B1BD24687; Fri, 21 Feb 2020 21:25:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320331; bh=hfjqjbH/69XjOXVTFuEk8cVS7pwRb367c/FI/hNju5Y=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=BQTSZZYRCvqt/Gc6DKFHb7lQsz3gVxVvy5tCdDbqFkYXAjDg46yKonzu/BQklzMgO BkoiZ2dVAL4hAo2c/arGDPJ/1fO6YOSvt3mVjeq8wyq1CKijxfF1ImapPooCzMV9/L DbOzHsCWRfDQKhM0N5GyrcAY6Cc+0SIidvt/nEbE= From: 
zanussi@kernel.org To: LKML , linux-rt-users , Steven Rostedt , Thomas Gleixner , Carsten Emde , John Kacur , Sebastian Andrzej Siewior , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 17/25] x86/fpu: Don't cache access to fpu_fpregs_owner_ctx Date: Fri, 21 Feb 2020 15:24:45 -0600 Message-Id: <25549e4ff2e5d78e663cf6e5cd8ed108ef03ff44.1582320278.git.zanussi@kernel.org> X-Mailer: git-send-email 2.14.1 In-Reply-To: References: In-Reply-To: References: MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org From: Sebastian Andrzej Siewior v4.14.170-rt75-rc1 stable review patch. If anyone has any objections, please let me know. ----------- [ Upstream commit eb46d70e4455e49928f136f768f1e54646ab4ff7 ] The state/owner of the FPU is saved to fpu_fpregs_owner_ctx by pointing to the context that is currently loaded. It never changed during the lifetime of a task - it remained stable/constant. After deferred FPU registers loading until return to userland was implemented, the content of fpu_fpregs_owner_ctx may change during preemption and must not be cached. This went unnoticed for some time and was now noticed, in particular since gcc 9 is caching that load in copy_fpstate_to_sigframe() and reusing it in the retry loop: copy_fpstate_to_sigframe() load fpu_fpregs_owner_ctx and save on stack fpregs_lock() copy_fpregs_to_sigframe() /* failed */ fpregs_unlock() *** PREEMPTION, another uses FPU, changes fpu_fpregs_owner_ctx *** fault_in_pages_writeable() /* succeed, retry */ fpregs_lock() __fpregs_load_activate() fpregs_state_valid() /* uses fpu_fpregs_owner_ctx from stack */ copy_fpregs_to_sigframe() /* succeeds, random FPU content */ This is a comparison of the assembly produced by gcc 9, without vs with this patch: | # arch/x86/kernel/fpu/signal.c:173: if (!access_ok(buf, size)) | cmpq %rdx, %rax # tmp183, _4 | jb .L190 #, |-# arch/x86/include/asm/fpu/internal.h:512: return fpu == this_cpu_read_stable(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu; |-#APP |-# 512 "arch/x86/include/asm/fpu/internal.h" 1 |- movq %gs:fpu_fpregs_owner_ctx,%rax #, pfo_ret__ |-# 0 "" 2 |-#NO_APP |- movq %rax, -88(%rbp) # pfo_ret__, %sfp … |-# arch/x86/include/asm/fpu/internal.h:512: return fpu == this_cpu_read_stable(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu; |- movq -88(%rbp), %rcx # %sfp, pfo_ret__ |- cmpq %rcx, -64(%rbp) # pfo_ret__, %sfp |+# arch/x86/include/asm/fpu/internal.h:512: return fpu == this_cpu_read(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu; |+#APP |+# 512 "arch/x86/include/asm/fpu/internal.h" 1 |+ movq %gs:fpu_fpregs_owner_ctx(%rip),%rax # fpu_fpregs_owner_ctx, pfo_ret__ |+# 0 "" 2 |+# arch/x86/include/asm/fpu/internal.h:512: return fpu == this_cpu_read(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu; |+#NO_APP |+ cmpq %rax, -64(%rbp) # pfo_ret__, %sfp Use this_cpu_read() instead this_cpu_read_stable() to avoid caching of fpu_fpregs_owner_ctx during preemption points. The Fixes: tag points to the commit where deferred FPU loading was added. Since this commit, the compiler is no longer allowed to move the load of fpu_fpregs_owner_ctx somewhere else / outside of the locked section. A task preemption will change its value and stale content will be observed. [ bp: Massage. 
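[ Illustration only, not part of the series: this_cpu_read_stable() licenses the compiler to treat the per-CPU value as constant and reuse a single load across the function and its inlined callers, while this_cpu_read() forces a reload at every use. A rough userspace analogue of the stale-versus-fresh comparison, with 'owner' standing in for fpu_fpregs_owner_ctx and every name hypothetical: ]

  /* Sketch only -- not kernel code.  'owner' may be updated whenever the
   * task is preempted (a context switch loads another task's FPU state).
   */
  struct fpu { int last_cpu; };
  struct fpu *owner;

  /* Old behaviour: one load, possibly hoisted into the caller and reused
   * after a preemption point, so a changed 'owner' is never noticed.
   */
  int state_valid_stale(struct fpu *fpu, int cpu)
  {
          struct fpu *cached = owner;

          return fpu == cached && cpu == fpu->last_cpu;
  }

  /* Fixed behaviour: force a reload on every use, as this_cpu_read()
   * does, so the comparison never sees a stale pointer.
   */
  int state_valid_fresh(struct fpu *fpu, int cpu)
  {
          struct fpu *cur = *(struct fpu * volatile *)&owner;

          return fpu == cur && cpu == fpu->last_cpu;
  }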
] Debugged-by: Austin Clements Debugged-by: David Chase Debugged-by: Ian Lance Taylor Fixes: 5f409e20b7945 ("x86/fpu: Defer FPU state load until return to userspace") Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Borislav Petkov Reviewed-by: Rik van Riel Tested-by: Borislav Petkov Cc: Aubrey Li Cc: Austin Clements Cc: Barret Rhoden Cc: Dave Hansen Cc: David Chase Cc: "H. Peter Anvin" Cc: ian@airs.com Cc: Ingo Molnar Cc: Josh Bleecher Snyder Cc: Thomas Gleixner Cc: x86-ml Cc: stable-rt@vger.kernel.org Link: https://lkml.kernel.org/r/20191128085306.hxfa2o3knqtu4wfn@linutronix.de Link: https://bugzilla.kernel.org/show_bug.cgi?id=205663 Signed-off-by: Tom Zanussi --- arch/x86/include/asm/fpu/internal.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h index fa2c93cb42a2..92e12f5d0d64 100644 --- a/arch/x86/include/asm/fpu/internal.h +++ b/arch/x86/include/asm/fpu/internal.h @@ -498,7 +498,7 @@ static inline void __fpu_invalidate_fpregs_state(struct fpu *fpu) static inline int fpregs_state_valid(struct fpu *fpu, unsigned int cpu) { - return fpu == this_cpu_read_stable(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu; + return fpu == this_cpu_read(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu; } /* From patchwork Fri Feb 21 21:24:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Zanussi X-Patchwork-Id: 213213 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.1 required=3.0 tests=DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 974BAC35666 for ; Fri, 21 Feb 2020 21:26:08 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 6AF0E24650 for ; Fri, 21 Feb 2020 21:26:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320368; bh=3uTiPfIG7wpkQkEHdtA6AbWRr9Vhwfb/STdSFMRNs8g=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:List-ID:From; b=qyD5JgPO95LyGLlWtdEWReYdBshva4lENpU5pFes3DlU72ppDKCVzObNQ8Al0cMTg aOxSYNAt4r6ZpkJu8ivYy5/1XAsd+5FBh7eipvgwTUjqUTHnA5DG09DsZmcAgUZUu8 NEk6cZ3iiT1O5Kai8hVlzoMPwFAZYRgz0yyWVEYM= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729636AbgBUVZg (ORCPT ); Fri, 21 Feb 2020 16:25:36 -0500 Received: from mail.kernel.org ([198.145.29.99]:39372 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729261AbgBUVZe (ORCPT ); Fri, 21 Feb 2020 16:25:34 -0500 Received: from localhost.localdomain (c-98-220-238-81.hsd1.il.comcast.net [98.220.238.81]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 0E3AA24689; Fri, 21 Feb 2020 21:25:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320332; bh=3uTiPfIG7wpkQkEHdtA6AbWRr9Vhwfb/STdSFMRNs8g=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=XHnMeivhoxOd+zIxwUw7YWqPqnupoV3v5C9TTxKk71/AhRiaAX0diJOq72Mt2j2Pt 
3vBulrJv+z2/1ZO/sA9YBJFv90fnhH7XW60zuvqJ3rB+EbdRM4wFcXWIyLGvWudwAm pTy7j6KeyiKgNOszAzmwpG0cq2TiSj9wlDQSrYyI= From: zanussi@kernel.org To: LKML , linux-rt-users , Steven Rostedt , Thomas Gleixner , Carsten Emde , John Kacur , Sebastian Andrzej Siewior , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 18/25] locking: Make spinlock_t and rwlock_t a RCU section on RT Date: Fri, 21 Feb 2020 15:24:46 -0600 Message-Id: <65e43fbb29570e76b45baa0cd9012782b06b0083.1582320278.git.zanussi@kernel.org> X-Mailer: git-send-email 2.14.1 In-Reply-To: References: In-Reply-To: References: Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org From: Sebastian Andrzej Siewior v4.14.170-rt75-rc1 stable review patch. If anyone has any objections, please let me know. ----------- [ Upstream commit 84440022a0e1c8c936d61f8f97593674a295d409 ] On !RT a locked spinlock_t and rwlock_t disables preemption which implies a RCU read section. There is code that relies on that behaviour. Add an explicit RCU read section on RT while a sleeping lock (a lock which would disables preemption on !RT) acquired. Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Tom Zanussi --- kernel/locking/rtmutex.c | 6 ++++++ kernel/locking/rwlock-rt.c | 6 ++++++ 2 files changed, 12 insertions(+) diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c index 4bc01a2a9a88..848d9ed6f053 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c @@ -1142,6 +1142,7 @@ void __sched rt_spin_lock_slowunlock(struct rt_mutex *lock) void __lockfunc rt_spin_lock(spinlock_t *lock) { sleeping_lock_inc(); + rcu_read_lock(); migrate_disable(); spin_acquire(&lock->dep_map, 0, 0, _RET_IP_); rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock); @@ -1157,6 +1158,7 @@ void __lockfunc __rt_spin_lock(struct rt_mutex *lock) void __lockfunc rt_spin_lock_nested(spinlock_t *lock, int subclass) { sleeping_lock_inc(); + rcu_read_lock(); migrate_disable(); spin_acquire(&lock->dep_map, subclass, 0, _RET_IP_); rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock); @@ -1170,6 +1172,7 @@ void __lockfunc rt_spin_unlock(spinlock_t *lock) spin_release(&lock->dep_map, 1, _RET_IP_); rt_spin_lock_fastunlock(&lock->lock, rt_spin_lock_slowunlock); migrate_enable(); + rcu_read_unlock(); sleeping_lock_dec(); } EXPORT_SYMBOL(rt_spin_unlock); @@ -1201,6 +1204,7 @@ int __lockfunc rt_spin_trylock(spinlock_t *lock) ret = __rt_mutex_trylock(&lock->lock); if (ret) { spin_acquire(&lock->dep_map, 0, 1, _RET_IP_); + rcu_read_lock(); } else { migrate_enable(); sleeping_lock_dec(); @@ -1217,6 +1221,7 @@ int __lockfunc rt_spin_trylock_bh(spinlock_t *lock) ret = __rt_mutex_trylock(&lock->lock); if (ret) { sleeping_lock_inc(); + rcu_read_lock(); migrate_disable(); spin_acquire(&lock->dep_map, 0, 1, _RET_IP_); } else @@ -1233,6 +1238,7 @@ int __lockfunc rt_spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags) ret = __rt_mutex_trylock(&lock->lock); if (ret) { sleeping_lock_inc(); + rcu_read_lock(); migrate_disable(); spin_acquire(&lock->dep_map, 0, 1, _RET_IP_); } diff --git a/kernel/locking/rwlock-rt.c b/kernel/locking/rwlock-rt.c index c3b91205161c..0ae8c62ea832 100644 --- a/kernel/locking/rwlock-rt.c +++ b/kernel/locking/rwlock-rt.c @@ -310,6 +310,7 @@ int __lockfunc rt_read_trylock(rwlock_t *rwlock) ret = do_read_rt_trylock(rwlock); if (ret) { rwlock_acquire_read(&rwlock->dep_map, 0, 1, _RET_IP_); + rcu_read_lock(); } else { migrate_enable(); sleeping_lock_dec(); @@ -327,6 +328,7 @@ int __lockfunc 
rt_write_trylock(rwlock_t *rwlock) ret = do_write_rt_trylock(rwlock); if (ret) { rwlock_acquire(&rwlock->dep_map, 0, 1, _RET_IP_); + rcu_read_lock(); } else { migrate_enable(); sleeping_lock_dec(); @@ -338,6 +340,7 @@ EXPORT_SYMBOL(rt_write_trylock); void __lockfunc rt_read_lock(rwlock_t *rwlock) { sleeping_lock_inc(); + rcu_read_lock(); migrate_disable(); rwlock_acquire_read(&rwlock->dep_map, 0, 0, _RET_IP_); do_read_rt_lock(rwlock); @@ -347,6 +350,7 @@ EXPORT_SYMBOL(rt_read_lock); void __lockfunc rt_write_lock(rwlock_t *rwlock) { sleeping_lock_inc(); + rcu_read_lock(); migrate_disable(); rwlock_acquire(&rwlock->dep_map, 0, 0, _RET_IP_); do_write_rt_lock(rwlock); @@ -358,6 +362,7 @@ void __lockfunc rt_read_unlock(rwlock_t *rwlock) rwlock_release(&rwlock->dep_map, 1, _RET_IP_); do_read_rt_unlock(rwlock); migrate_enable(); + rcu_read_unlock(); sleeping_lock_dec(); } EXPORT_SYMBOL(rt_read_unlock); @@ -367,6 +372,7 @@ void __lockfunc rt_write_unlock(rwlock_t *rwlock) rwlock_release(&rwlock->dep_map, 1, _RET_IP_); do_write_rt_unlock(rwlock); migrate_enable(); + rcu_read_unlock(); sleeping_lock_dec(); } EXPORT_SYMBOL(rt_write_unlock); From patchwork Fri Feb 21 21:24:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Zanussi X-Patchwork-Id: 213216 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.1 required=3.0 tests=DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0D325C3566F for ; Fri, 21 Feb 2020 21:25:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id D9A1D24694 for ; Fri, 21 Feb 2020 21:25:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320337; bh=i0N2MrNKZHuX++taw3rFk2nCgTfZGw2eHHT1CJg4TdY=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:List-ID:From; b=z2sTDi4cYM/SoDx+bfOIFkGr7mKi0e0sJ7K+kXsrKyh5OVBa0q6W43yOE0pj9U04H yMxPna5JzKQdWNSwDsZZCKETWfdk+EY54tLUhI8abpm3FtPpDxLGmLqHnHurCj+9kp yLZTATdZe62e5z4tW1/prRoWzDQ+9BX9tT27mlVU= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729503AbgBUVZh (ORCPT ); Fri, 21 Feb 2020 16:25:37 -0500 Received: from mail.kernel.org ([198.145.29.99]:39406 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729612AbgBUVZf (ORCPT ); Fri, 21 Feb 2020 16:25:35 -0500 Received: from localhost.localdomain (c-98-220-238-81.hsd1.il.comcast.net [98.220.238.81]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id DD4A12468E; Fri, 21 Feb 2020 21:25:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320334; bh=i0N2MrNKZHuX++taw3rFk2nCgTfZGw2eHHT1CJg4TdY=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=loBbQsujoFNjW1nPPgpnThgEU8tmhdQ1wXas44PHwBj+Sxo410qVrrE0AniRmz3OG 4g/GjLhatI3Bdekhkp63NakEpEbpIjUGpMhPp8r/EFO/LS++ijDZqgNa7xCpXO3wpE T72oVIIJm/ZSCR6USjLz3/OHO2CeRd0OymwKx7DI= From: zanussi@kernel.org To: LKML , linux-rt-users , Steven Rostedt , Thomas 
Gleixner , Carsten Emde , John Kacur , Sebastian Andrzej Siewior , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 20/25] kmemleak: Cosmetic changes Date: Fri, 21 Feb 2020 15:24:48 -0600 Message-Id: X-Mailer: git-send-email 2.14.1 In-Reply-To: References: In-Reply-To: References: Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org From: Sebastian Andrzej Siewior v4.14.170-rt75-rc1 stable review patch. If anyone has any objections, please let me know. ----------- [ Upstream commit 65a387a0b45cdd6844b7c6269e6333c9f0113410 ] Align with the patch, that got sent upstream for review. Only cosmetic changes. Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Tom Zanussi --- mm/kmemleak.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/mm/kmemleak.c b/mm/kmemleak.c index 17718a11782b..d7925ee4b052 100644 --- a/mm/kmemleak.c +++ b/mm/kmemleak.c @@ -26,7 +26,7 @@ * * The following locks and mutexes are used by kmemleak: * - * - kmemleak_lock (raw spinlock): protects the object_list modifications and + * - kmemleak_lock (raw_spinlock_t): protects the object_list modifications and * accesses to the object_tree_root. The object_list is the main list * holding the metadata (struct kmemleak_object) for the allocated memory * blocks. The object_tree_root is a red black tree used to look-up @@ -35,13 +35,13 @@ * object_tree_root in the create_object() function called from the * kmemleak_alloc() callback and removed in delete_object() called from the * kmemleak_free() callback - * - kmemleak_object.lock (spinlock): protects a kmemleak_object. Accesses to - * the metadata (e.g. count) are protected by this lock. Note that some - * members of this structure may be protected by other means (atomic or - * kmemleak_lock). This lock is also held when scanning the corresponding - * memory block to avoid the kernel freeing it via the kmemleak_free() - * callback. This is less heavyweight than holding a global lock like - * kmemleak_lock during scanning + * - kmemleak_object.lock (raw_spinlock_t): protects a kmemleak_object. + * Accesses to the metadata (e.g. count) are protected by this lock. Note + * that some members of this structure may be protected by other means + * (atomic or kmemleak_lock). This lock is also held when scanning the + * corresponding memory block to avoid the kernel freeing it via the + * kmemleak_free() callback. This is less heavyweight than holding a global + * lock like kmemleak_lock during scanning. * - scan_mutex (mutex): ensures that only one thread may scan the memory for * unreferenced objects at a time. 
The gray_list contains the objects which * are already referenced or marked as false positives and need to be @@ -197,7 +197,7 @@ static LIST_HEAD(object_list); static LIST_HEAD(gray_list); /* search tree for object boundaries */ static struct rb_root object_tree_root = RB_ROOT; -/* rw_lock protecting the access to object_list and object_tree_root */ +/* protecting the access to object_list and object_tree_root */ static DEFINE_RAW_SPINLOCK(kmemleak_lock); /* allocation caches for kmemleak internal data */ From patchwork Fri Feb 21 21:24:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Zanussi X-Patchwork-Id: 213214 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.1 required=3.0 tests=DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 86383C35666 for ; Fri, 21 Feb 2020 21:26:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 50BBF24650 for ; Fri, 21 Feb 2020 21:26:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320366; bh=Nk7ieIw7uhXZur3CHmo8FQJnmX7WOV26wnY3gtojN5Y=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:List-ID:From; b=LpVSo6h4tJ6HpUAiFwyzj3wqZRyF+gVhulQ9txMgCge3ffWRPp8u6EN0IdweeZ7WS XWemIH/C4Tp4YBGrjdcApr6crtH7+SvuEiOFSklzn7b/2ZpUTaQcCzlCQ++Tds8zJn ppuyHIDJOuvpHJRBVkb3i5DaZH0IqAvJ4ZWe8WZg= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729666AbgBUVZi (ORCPT ); Fri, 21 Feb 2020 16:25:38 -0500 Received: from mail.kernel.org ([198.145.29.99]:39406 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729637AbgBUVZh (ORCPT ); Fri, 21 Feb 2020 16:25:37 -0500 Received: from localhost.localdomain (c-98-220-238-81.hsd1.il.comcast.net [98.220.238.81]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id D0D9F24691; Fri, 21 Feb 2020 21:25:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320336; bh=Nk7ieIw7uhXZur3CHmo8FQJnmX7WOV26wnY3gtojN5Y=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=t2+l7r2v5eGPa1fJVGhheQhgtIDGXlbHp3exkiPBnJhalPX2vf3GIf03rxcpc1RYM eyb2ALnex4sBaaQqOltfzyCtOnEKl2NJ8NgEwnXS60GvNnG4B6m/f2g4Xm7f1oGxOw YIEWKO2MPcNbBOQ0f9Zj/Gno0WC8Gg/oY1ark/M0= From: zanussi@kernel.org To: LKML , linux-rt-users , Steven Rostedt , Thomas Gleixner , Carsten Emde , John Kacur , Sebastian Andrzej Siewior , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 22/25] mm/memcontrol: Move misplaced local_unlock_irqrestore() Date: Fri, 21 Feb 2020 15:24:50 -0600 Message-Id: <960a2d2ccb60853d52f0dab91e391473dfa6035e.1582320278.git.zanussi@kernel.org> X-Mailer: git-send-email 2.14.1 In-Reply-To: References: In-Reply-To: References: Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org From: Matt Fleming v4.14.170-rt75-rc1 stable review patch. If anyone has any objections, please let me know. 
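[ Illustration only, not part of the series, on the two kmemleak changes above (14/25 and the cosmetic follow-up 20/25): on PREEMPT_RT a spinlock_t is a sleeping lock and must never be acquired while a raw_spinlock_t is held with interrupts off. Since scan_block() takes object->lock while already holding the raw kmemleak_lock, object->lock has to become raw as well; the follow-up only brings the comments in line. A minimal sketch of the nesting rule, with names chosen to mirror mm/kmemleak.c but not taken from it: ]

  #include <linux/spinlock.h>

  static DEFINE_RAW_SPINLOCK(outer_lock);       /* cf. kmemleak_lock    */
  static DEFINE_RAW_SPINLOCK(inner_lock);       /* cf. new object->lock */

  static void scan_one_block(void)
  {
          unsigned long flags;

          raw_spin_lock_irqsave(&outer_lock, flags);

          /* A spinlock_t here would be an rtmutex on RT and could sleep
           * inside this atomic, IRQs-off region -- hence the conversion.
           */
          raw_spin_lock_nested(&inner_lock, SINGLE_DEPTH_NESTING);
          /* ... scan the object's memory ... */
          raw_spin_unlock(&inner_lock);

          raw_spin_unlock_irqrestore(&outer_lock, flags);
  }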
----------- [ Upstream commit 071a1d6a6e14d0dec240a8c67b425140d7f92f6a ] The comment about local_lock_irqsave() mentions just the counters and css_put_many()'s callback just invokes a worker so it is safe to move the unlock function after memcg_check_events() so css_put_many() can be invoked without the lock acquired. Cc: Daniel Wagner Signed-off-by: Matt Fleming [bigeasy: rewrote the patch description] Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Tom Zanussi --- mm/memcontrol.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 0503b31e2a87..a359a24ebd9f 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -6102,10 +6102,10 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry) mem_cgroup_charge_statistics(memcg, page, PageTransHuge(page), -nr_entries); memcg_check_events(memcg, page); + local_unlock_irqrestore(event_lock, flags); if (!mem_cgroup_is_root(memcg)) css_put_many(&memcg->css, nr_entries); - local_unlock_irqrestore(event_lock, flags); } /** From patchwork Fri Feb 21 21:24:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tom Zanussi X-Patchwork-Id: 213215 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.1 required=3.0 tests=DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E5872C35669 for ; Fri, 21 Feb 2020 21:25:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id B4CF224650 for ; Fri, 21 Feb 2020 21:25:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320357; bh=L4UR3M1l22WWoKaD+AUUdwfvaMUZeDKgXjxjx9Cw+S0=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:List-ID:From; b=2RRBXY4P9gve0X8JhhvL8vCvYpROCCb5cgjPDmYXOO+Wkt7v30ihOp1qxfDACY/yt hSoMy0OBsOnvm3ctzpwCmwajY0XSmtSWLm3CTSfmf5/2i80RixDRNWJHkTgro5h/je zhG+bt9tZe3rl951pRLD/8fjDD2JeJ/s5GFYhdCw= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729784AbgBUVZ5 (ORCPT ); Fri, 21 Feb 2020 16:25:57 -0500 Received: from mail.kernel.org ([198.145.29.99]:39498 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729683AbgBUVZj (ORCPT ); Fri, 21 Feb 2020 16:25:39 -0500 Received: from localhost.localdomain (c-98-220-238-81.hsd1.il.comcast.net [98.220.238.81]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 02F7F24696; Fri, 21 Feb 2020 21:25:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1582320338; bh=L4UR3M1l22WWoKaD+AUUdwfvaMUZeDKgXjxjx9Cw+S0=; h=From:To:Subject:Date:In-Reply-To:References:In-Reply-To: References:From; b=wn140VYUwVHcWa5FXdNsp1yF7kYClQ7J7VBmrhNR/PyV2gkGysSjeU66mXYNRCukY WkTultJOAV/hQm/262GLzz4z4/45VvCNXscJCGx2KHegY5I/PTg9fFv7lHgCkk2L9y 7o0GfNstZpNETej41HufCVRp8+57iaZ6Z4UvqW3Q= From: zanussi@kernel.org To: LKML , linux-rt-users , Steven Rostedt , Thomas Gleixner , Carsten Emde , John Kacur , Sebastian Andrzej Siewior , Daniel Wagner , Tom Zanussi Subject: 
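[ Illustration only, not part of the series, on the mm/memcontrol change above: the local lock guards only the per-CPU event statistics, and the reference drop at most schedules deferred work, so it does not need to run under the lock; moving the unlock up shrinks the atomic (on RT: migration-disabled) section. A schematic of the pattern, using the RT tree's locallock API with hypothetical data and helpers: ]

  #include <linux/locallock.h>
  #include <linux/workqueue.h>

  static DEFINE_LOCAL_IRQ_LOCK(stats_lock);     /* hypothetical, cf. event_lock */
  static unsigned long nr_events;               /* stands in for the counters   */

  static void deferred_release(struct work_struct *work) { }
  static DECLARE_WORK(release_work, deferred_release);

  static void uncharge_one(void)
  {
          unsigned long flags;

          local_lock_irqsave(stats_lock, flags);
          nr_events++;                          /* only this needs the lock */
          local_unlock_irqrestore(stats_lock, flags);

          /* Dropping the reference may only punt to a workqueue, so it is
           * safe (and cheaper) outside the locked section.
           */
          schedule_work(&release_work);
  }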
[PATCH RT 24/25] sched: Provide migrate_disable/enable() inlines Date: Fri, 21 Feb 2020 15:24:52 -0600 Message-Id: <5e82e4f7f3bc60945e64b2ee8ac429d6c5b51838.1582320278.git.zanussi@kernel.org> X-Mailer: git-send-email 2.14.1 In-Reply-To: References: In-Reply-To: References: Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org From: Thomas Gleixner v4.14.170-rt75-rc1 stable review patch. If anyone has any objections, please let me know. ----------- [ Upstream commit 87d447be4100447b42229cce5e9b33c7915871eb ] Currently code which solely needs to prevent migration of a task uses preempt_disable()/enable() pairs. This is the only reliable way to do so as setting the task affinity to a single CPU can be undone by a setaffinity operation from a different task/process. It's also significantly faster. RT provides a seperate migrate_disable/enable() mechanism which does not disable preemption to achieve the semantic requirements of a (almost) fully preemptible kernel. As it is unclear from looking at a given code path whether the intention is to disable preemption or migration, introduce migrate_disable/enable() inline functions which can be used to annotate code which merely needs to disable migration. Map them to preempt_disable/enable() for now. The RT substitution will be provided later. Code which is annotated that way documents that it has no requirement to protect against reentrancy of a preempting task. Either this is not required at all or the call sites are already serialized by other means. Signed-off-by: Thomas Gleixner Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juri Lelli Cc: Vincent Guittot Cc: Dietmar Eggemann Cc: Steven Rostedt Cc: Ben Segall Cc: Mel Gorman Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Tom Zanussi --- include/linux/preempt.h | 26 ++++++++++++++++++++++++-- 1 file changed, 24 insertions(+), 2 deletions(-) diff --git a/include/linux/preempt.h b/include/linux/preempt.h index 6728662a81e8..2e15fbc01eda 100644 --- a/include/linux/preempt.h +++ b/include/linux/preempt.h @@ -241,8 +241,30 @@ static inline int __migrate_disabled(struct task_struct *p) } #else -#define migrate_disable() preempt_disable() -#define migrate_enable() preempt_enable() +/** + * migrate_disable - Prevent migration of the current task + * + * Maps to preempt_disable() which also disables preemption. Use + * migrate_disable() to annotate that the intent is to prevent migration + * but not necessarily preemption. + * + * Can be invoked nested like preempt_disable() and needs the corresponding + * number of migrate_enable() invocations. + */ +#define migrate_disable() preempt_disable() + +/** + * migrate_enable - Allow migration of the current task + * + * Counterpart to migrate_disable(). + * + * As migrate_disable() can be invoked nested only the uttermost invocation + * reenables migration. + * + * Currently mapped to preempt_enable(). + */ +#define migrate_enable() preempt_enable() + static inline int __migrate_disabled(struct task_struct *p) { return 0;
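[ Illustration only, not part of the series, on the migrate_disable()/migrate_enable() annotation introduced above: code that merely has to stay on its current CPU can now say so explicitly; on !RT this still compiles to preempt_disable()/preempt_enable(), while on RT the section stays preemptible. Per the changelog, such a caller must already exclude reentrancy by other means. A hypothetical caller: ]

  #include <linux/percpu.h>
  #include <linux/preempt.h>

  struct node_stats { unsigned long events, bytes; };     /* hypothetical */
  static DEFINE_PER_CPU(struct node_stats, node_stats);

  static void account_local(unsigned long bytes)
  {
          struct node_stats *s;

          migrate_disable();                  /* stay on this CPU ...           */
          s = this_cpu_ptr(&node_stats);      /* ... so the pointer stays valid */
          s->events++;                        /* updates assumed serialized     */
          s->bytes += bytes;                  /* by other means (see changelog) */
          migrate_enable();
  }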