From patchwork Thu Feb 27 14:33:14 2020
X-Patchwork-Submitter: Tom Zanussi
X-Patchwork-Id: 213195
From: zanussi@kernel.org
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Daniel Wagner, Tom Zanussi
Subject: [PATCH RT 03/23] i2c: hix5hd2: Remove IRQF_ONESHOT
Date: Thu, 27 Feb 2020 08:33:14 -0600
Message-Id: <272250d7ac609c3bb6948e6ec4f8bb122b7f9360.1582814004.git.zanussi@kernel.org>

From: Sebastian Andrzej Siewior

v4.14.170-rt75-rc2 stable review patch. If anyone has any objections, please let me know.

-----------

[ Upstream commit e88b481f3f86f11e3243e0808a830e5ca5782a9d ]

The driver sets IRQF_ONESHOT and passes only a primary handler. The IRQ is masked while the primary handler is invoked, independently of IRQF_ONESHOT. With IRQF_ONESHOT the core code will not force-thread the interrupt, and this is probably not intended. I *assume* that the original author copied the IRQ registration from another driver which passed a primary and secondary handler and removed the secondary handler but kept the ONESHOT flag.

Remove IRQF_ONESHOT.
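For background: IRQF_ONESHOT only has a defined meaning when a threaded (secondary) handler is registered, where it keeps the line masked until the thread function completes. A minimal sketch of the two registration styles; my_irq_handler/my_thread_fn are hypothetical names, not from this driver:

    #include <linux/interrupt.h>

    /* Primary-only registration: the core masks the line around the
     * primary handler anyway, so IRQF_ONESHOT buys nothing here and
     * additionally stops PREEMPT_RT from force-threading the handler. */
    ret = devm_request_irq(&pdev->dev, irq, my_irq_handler,
                           IRQF_NO_SUSPEND, dev_name(&pdev->dev), priv);

    /* Primary + threaded handler: here IRQF_ONESHOT is the usual
     * choice -- the line stays masked until my_thread_fn() returns. */
    ret = request_threaded_irq(irq, my_irq_handler, my_thread_fn,
                               IRQF_ONESHOT, dev_name(&pdev->dev), priv);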
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Tom Zanussi
---
 drivers/i2c/busses/i2c-hix5hd2.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/i2c/busses/i2c-hix5hd2.c b/drivers/i2c/busses/i2c-hix5hd2.c
index bb68957d3da5..76c1a207ccc1 100644
--- a/drivers/i2c/busses/i2c-hix5hd2.c
+++ b/drivers/i2c/busses/i2c-hix5hd2.c
@@ -464,8 +464,7 @@ static int hix5hd2_i2c_probe(struct platform_device *pdev)
 	hix5hd2_i2c_init(priv);
 
 	ret = devm_request_irq(&pdev->dev, irq, hix5hd2_i2c_irq,
-			       IRQF_NO_SUSPEND | IRQF_ONESHOT,
-			       dev_name(&pdev->dev), priv);
+			       IRQF_NO_SUSPEND, dev_name(&pdev->dev), priv);
 	if (ret != 0) {
 		dev_err(&pdev->dev, "cannot request HS-I2C IRQ %d\n", irq);
 		goto err_clk;

From patchwork Thu Feb 27 14:33:16 2020
X-Patchwork-Submitter: Tom Zanussi
X-Patchwork-Id: 213196
From: zanussi@kernel.org
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Daniel Wagner, Tom Zanussi
Subject: [PATCH RT 05/23] sched: migrate_dis/enable: Use sleeping_lock…() to annotate sleeping points
Date: Thu, 27 Feb 2020 08:33:16 -0600
Message-Id: <0078d43bf3e62b36f0890731c2f6fd277a93927b.1582814004.git.zanussi@kernel.org>
From: Scott Wood

v4.14.170-rt75-rc2 stable review patch. If anyone has any objections, please let me know.

-----------

[ Upstream commit 4230dd3824c3e1785504e6f757ce79a4b55651fa ]

Without this, rcu_note_context_switch() will complain if an RCU read lock is held when migrate_enable() calls stop_one_cpu(). Likewise when migrate_disable() calls pin_current_cpu(), which calls __read_rt_lock() -- bypassing the part of the mutex code that calls sleeping_lock_inc().

Signed-off-by: Scott Wood
[bigeasy: use sleeping_lock_…() ]
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Tom Zanussi

 Conflicts:
	kernel/sched/core.c
---
 kernel/cpu.c        | 2 ++
 kernel/sched/core.c | 3 +++
 2 files changed, 5 insertions(+)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 05b93cfa6fd9..9be794896d87 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -314,7 +314,9 @@ void pin_current_cpu(void)
 	preempt_lazy_enable();
 	preempt_enable();
 
+	sleeping_lock_inc();
 	__read_rt_lock(cpuhp_pin);
+	sleeping_lock_dec();
 
 	preempt_disable();
 	preempt_lazy_disable();
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fde47216af94..fcff75934bdc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7045,7 +7045,10 @@ void migrate_enable(void)
 		unpin_current_cpu();
 		preempt_lazy_enable();
 		preempt_enable();
+
+		sleeping_lock_inc();
 		stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
+		sleeping_lock_dec();
 		tlb_migrate_finish(p->mm);
 
 		return;
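The annotation pattern above is mechanical: bracket a call that may legitimately sleep with sleeping_lock_inc()/sleeping_lock_dec() so the RT RCU accounting treats the sleep like a sleeping-lock acquisition rather than an illegal context switch. A minimal sketch of the pattern; the wrapper function is illustrative, not from the kernel:

    /* Sketch: annotating a known sleeping point on PREEMPT_RT.
     * sleeping_lock_inc()/dec() adjust the per-task counter that
     * rcu_note_context_switch() checks, so sleeping here with an
     * RCU read lock held does not trigger a splat. */
    static void example_sleeping_point(struct task_struct *p,
                                       struct migration_arg *arg)
    {
            sleeping_lock_inc();
            stop_one_cpu(task_cpu(p), migration_cpu_stop, arg); /* may sleep */
            sleeping_lock_dec();
    }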
From patchwork Thu Feb 27 14:33:18 2020
X-Patchwork-Submitter: Tom Zanussi
X-Patchwork-Id: 213204
From: zanussi@kernel.org
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Daniel Wagner, Tom Zanussi
Subject: [PATCH RT 07/23] sched: Remove dead __migrate_disabled() check
Date: Thu, 27 Feb 2020 08:33:18 -0600
Message-Id: <20499dc2581f16f080d212e97721992565f57669.1582814004.git.zanussi@kernel.org>

From: Scott Wood

v4.14.170-rt75-rc2 stable review patch. If anyone has any objections, please let me know.

-----------

[ Upstream commit 14d9272d534ea91262e15db99443fc5995c7c016 ]

This code was unreachable given the __migrate_disabled() branch to "out" immediately beforehand.

Signed-off-by: Scott Wood
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Tom Zanussi

 Conflicts:
	kernel/sched/core.c
---
 kernel/sched/core.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8d6badac9225..4708129e8df1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1217,13 +1217,6 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
 	if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p))
 		goto out;
 
-#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
-	if (__migrate_disabled(p)) {
-		p->migrate_disable_update = 1;
-		goto out;
-	}
-#endif
-
 	if (task_running(rq, p) || p->state == TASK_WAKING) {
 		struct migration_arg arg = { p, dest_cpu };
 		/* Need help from migration thread: drop lock and wait.
 */

From patchwork Thu Feb 27 14:33:19 2020
X-Patchwork-Submitter: Tom Zanussi
X-Patchwork-Id: 213194
From: zanussi@kernel.org
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Daniel Wagner, Tom Zanussi
Subject: [PATCH RT 08/23] sched: migrate disable: Protect cpus_ptr with lock
Date: Thu, 27 Feb 2020 08:33:19 -0600

From: Scott Wood

v4.14.170-rt75-rc2 stable review patch. If anyone has any objections, please let me know.

-----------

[ Upstream commit 27ee52a891ed2c7e2e2c8332ccae0de7c2674b09 ]

Various places assume that cpus_ptr is protected by rq/pi locks, so don't change it before grabbing those locks.
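The resulting locking rule is the usual one for per-task scheduler state: take the task's rq and pi locks via task_rq_lock() before touching cpus_ptr, exactly as the hunks below do. A minimal sketch of the pattern; the function name is illustrative:

    /* Sketch: update p->cpus_ptr only while holding the task's
     * rq and pi locks, which task_rq_lock() acquires for us. */
    static void example_set_cpus_ptr(struct task_struct *p,
                                     const struct cpumask *mask)
    {
            struct rq_flags rf;
            struct rq *rq;

            rq = task_rq_lock(p, &rf);  /* takes p->pi_lock and rq->lock */
            p->cpus_ptr = mask;         /* readers holding either lock now
                                         * see a consistent value */
            task_rq_unlock(rq, p, &rf);
    }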
Signed-off-by: Scott Wood
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Tom Zanussi
---
 kernel/sched/core.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4708129e8df1..189e6f08575e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6923,9 +6923,8 @@ migrate_disable_update_cpus_allowed(struct task_struct *p)
 	struct rq *rq;
 	struct rq_flags rf;
 
-	p->cpus_ptr = cpumask_of(smp_processor_id());
-
 	rq = task_rq_lock(p, &rf);
+	p->cpus_ptr = cpumask_of(smp_processor_id());
 	update_nr_migratory(p, -1);
 	p->nr_cpus_allowed = 1;
 	task_rq_unlock(rq, p, &rf);
@@ -6937,9 +6936,8 @@ migrate_enable_update_cpus_allowed(struct task_struct *p)
 	struct rq *rq;
 	struct rq_flags rf;
 
-	p->cpus_ptr = &p->cpus_mask;
-
 	rq = task_rq_lock(p, &rf);
+	p->cpus_ptr = &p->cpus_mask;
 	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
 	update_nr_migratory(p, 1);
 	task_rq_unlock(rq, p, &rf);

From patchwork Thu Feb 27 14:33:20 2020
X-Patchwork-Submitter: Tom Zanussi
X-Patchwork-Id: 213198
From: zanussi@kernel.org
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Daniel Wagner, Tom Zanussi
Subject: [PATCH RT 09/23] lib/smp_processor_id: Don't use cpumask_equal()
Date: Thu, 27 Feb 2020 08:33:20 -0600
From: Waiman Long

v4.14.170-rt75-rc2 stable review patch. If anyone has any objections, please let me know.

-----------

[ Upstream commit 659252061477862f45b79e1de169e6030f5c8918 ]

The check_preemption_disabled() function uses cpumask_equal() to see if the task is bound to the current CPU only. cpumask_equal() calls memcmp() to do the comparison. As x86 doesn't have __HAVE_ARCH_MEMCMP, the slow memcmp() function in lib/string.c is used. On an RT kernel that calls check_preemption_disabled() very frequently, below is the perf-record output of a certain microbenchmark:

  42.75%  2.45%  testpmd  [kernel.kallsyms]  [k] check_preemption_disabled
  40.01% 39.97%  testpmd  [kernel.kallsyms]  [k] memcmp

We should avoid calling memcmp() in a performance-critical path, so the cpumask_equal() call is now replaced with an equivalent, simpler check.

Signed-off-by: Waiman Long
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Tom Zanussi
---
 lib/smp_processor_id.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/smp_processor_id.c b/lib/smp_processor_id.c
index 6f4a4ae881c8..9f3c8bb62e57 100644
--- a/lib/smp_processor_id.c
+++ b/lib/smp_processor_id.c
@@ -23,7 +23,7 @@ notrace static unsigned int check_preemption_disabled(const char *what1,
 	 * Kernel threads bound to a single CPU can safely use
 	 * smp_processor_id():
 	 */
-	if (cpumask_equal(current->cpus_ptr, cpumask_of(this_cpu)))
+	if (current->nr_cpus_allowed == 1)
 		goto out;
 
 	/*
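The replacement works because the scheduler keeps nr_cpus_allowed in sync with the affinity mask, so "bound to exactly one CPU" can be detected with a single integer compare instead of a bitmap memcmp(). A rough userspace model of the cost difference; this is illustrative only and assumes NR_CPUS = 512 (a 64-byte affinity bitmap), not the kernel's actual types:

    #include <stdbool.h>
    #include <string.h>

    #define NR_CPUS 512

    struct task {
            unsigned long cpus_mask[NR_CPUS / 64];  /* 64 bytes */
            int nr_cpus_allowed;
    };

    /* Old-style check: a 64-byte memcmp() on every call.
     * cpu_bit must point to a same-sized single-CPU bitmap. */
    static bool bound_to_cpu_slow(const struct task *t,
                                  const unsigned long *cpu_bit)
    {
            return memcmp(t->cpus_mask, cpu_bit, sizeof(t->cpus_mask)) == 0;
    }

    /* New-style check: one integer load and compare. "== 1" means
     * the task cannot migrate, which is all this test needs. */
    static bool bound_to_cpu_fast(const struct task *t)
    {
            return t->nr_cpus_allowed == 1;
    }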
From patchwork Thu Feb 27 14:33:23 2020
X-Patchwork-Submitter: Tom Zanussi
X-Patchwork-Id: 213197
From: zanussi@kernel.org
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Daniel Wagner, Tom Zanussi
Subject: [PATCH RT 12/23] lib/ubsan: Don't serialize UBSAN report
Date: Thu, 27 Feb 2020 08:33:23 -0600
Message-Id: <8ae9d606e0a8b357192e306ebf2d6f9720921473.1582814004.git.zanussi@kernel.org>

From: Julien Grall

v4.14.170-rt75-rc2 stable review patch. If anyone has any objections, please let me know.

-----------

[ Upstream commit 4702c28ac777b27acb499cbd5e8e787ce1a7d82d ]

At the moment, a UBSAN report is serialized using a spin_lock(). On RT systems, spinlocks are turned into rt_spin_lock and may sleep. This will result in the following splat if the undefined behavior is in a context that can sleep:

| BUG: sleeping function called from invalid context at /src/linux/kernel/locking/rtmutex.c:968
| in_atomic(): 1, irqs_disabled(): 128, pid: 3447, name: make
| 1 lock held by make/3447:
|  #0: 000000009a966332 (&mm->mmap_sem){++++}, at: do_page_fault+0x140/0x4f8
| Preemption disabled at:
| [] rt_mutex_futex_unlock+0x4c/0xb0
| CPU: 3 PID: 3447 Comm: make Tainted: G W 5.2.14-rt7-01890-ge6e057589653 #911
| Call trace:
|  dump_backtrace+0x0/0x148
|  show_stack+0x14/0x20
|  dump_stack+0xbc/0x104
|  ___might_sleep+0x154/0x210
|  rt_spin_lock+0x68/0xa0
|  ubsan_prologue+0x30/0x68
|  handle_overflow+0x64/0xe0
|  __ubsan_handle_add_overflow+0x10/0x18
|  __lock_acquire+0x1c28/0x2a28
|  lock_acquire+0xf0/0x370
|  _raw_spin_lock_irqsave+0x58/0x78
|  rt_mutex_futex_unlock+0x4c/0xb0
|  rt_spin_unlock+0x28/0x70
|  get_page_from_freelist+0x428/0x2b60
|  __alloc_pages_nodemask+0x174/0x1708
|  alloc_pages_vma+0x1ac/0x238
|  __handle_mm_fault+0x4ac/0x10b0
|  handle_mm_fault+0x1d8/0x3b0
|  do_page_fault+0x1c8/0x4f8
|  do_translation_fault+0xb8/0xe0
|  do_mem_abort+0x3c/0x98
|  el0_da+0x20/0x24

The spin_lock() protects against multiple CPUs emitting a report at the same time, presumably to keep the reports from being interleaved. However, they can still interleave with other messages (and even with the splat from __might_sleep), so the lock's usefulness seems pretty limited. Rather than trying to accommodate RT systems by switching to a raw_spin_lock(), the lock is now completely dropped.
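With the lock gone, the only remaining guard is the per-task current->in_ubsan counter, which suppresses recursive reports from the same task. For orientation before the long diff, this is roughly what the lock-free report bracket looks like after the patch; it is condensed from the hunks below, with the separator string shortened here:

    /* Sketch of the lock-free prologue/epilogue after this patch.
     * current->in_ubsan acts as a per-task recursion guard; there
     * is no cross-CPU serialization anymore, by design. */
    static void ubsan_prologue(struct source_location *location)
    {
            current->in_ubsan++;
            pr_err("============================================\n");
            print_source_location("UBSAN: Undefined behaviour in", location);
    }

    static void ubsan_epilogue(void)
    {
            dump_stack();
            pr_err("============================================\n");
            current->in_ubsan--;
    }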
Link: https://lkml.kernel.org/r/20190920100835.14999-1-julien.grall@arm.com Reported-by: Andre Przywara Signed-off-by: Julien Grall Acked-by: Andrey Ryabinin Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Tom Zanussi Conflicts: lib/ubsan.c --- lib/ubsan.c | 76 ++++++++++++++++++++++--------------------------------------- 1 file changed, 27 insertions(+), 49 deletions(-) diff --git a/lib/ubsan.c b/lib/ubsan.c index c652b4a820cc..f94cfb3a41ed 100644 --- a/lib/ubsan.c +++ b/lib/ubsan.c @@ -147,26 +147,21 @@ static bool location_is_valid(struct source_location *loc) { return loc->file_name != NULL; } - -static DEFINE_SPINLOCK(report_lock); - -static void ubsan_prologue(struct source_location *location, - unsigned long *flags) +static void ubsan_prologue(struct source_location *location) { current->in_ubsan++; - spin_lock_irqsave(&report_lock, *flags); pr_err("========================================" "========================================\n"); print_source_location("UBSAN: Undefined behaviour in", location); } -static void ubsan_epilogue(unsigned long *flags) +static void ubsan_epilogue(void) { dump_stack(); pr_err("========================================" "========================================\n"); - spin_unlock_irqrestore(&report_lock, *flags); + current->in_ubsan--; } @@ -175,14 +170,13 @@ static void handle_overflow(struct overflow_data *data, void *lhs, { struct type_descriptor *type = data->type; - unsigned long flags; char lhs_val_str[VALUE_LENGTH]; char rhs_val_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(lhs_val_str, sizeof(lhs_val_str), type, lhs); val_to_string(rhs_val_str, sizeof(rhs_val_str), type, rhs); @@ -194,7 +188,7 @@ static void handle_overflow(struct overflow_data *data, void *lhs, rhs_val_str, type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } void __ubsan_handle_add_overflow(struct overflow_data *data, @@ -222,20 +216,19 @@ EXPORT_SYMBOL(__ubsan_handle_mul_overflow); void __ubsan_handle_negate_overflow(struct overflow_data *data, void *old_val) { - unsigned long flags; char old_val_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(old_val_str, sizeof(old_val_str), data->type, old_val); pr_err("negation of %s cannot be represented in type %s:\n", old_val_str, data->type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_negate_overflow); @@ -243,13 +236,12 @@ EXPORT_SYMBOL(__ubsan_handle_negate_overflow); void __ubsan_handle_divrem_overflow(struct overflow_data *data, void *lhs, void *rhs) { - unsigned long flags; char rhs_val_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(rhs_val_str, sizeof(rhs_val_str), data->type, rhs); @@ -259,58 +251,52 @@ void __ubsan_handle_divrem_overflow(struct overflow_data *data, else pr_err("division by zero\n"); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_divrem_overflow); static void handle_null_ptr_deref(struct type_mismatch_data_common *data) { - unsigned long flags; - if (suppress_report(data->location)) return; - ubsan_prologue(data->location, &flags); + ubsan_prologue(data->location); pr_err("%s null pointer of type %s\n", type_check_kinds[data->type_check_kind], data->type->type_name); - ubsan_epilogue(&flags); 
+ ubsan_epilogue(); } static void handle_misaligned_access(struct type_mismatch_data_common *data, unsigned long ptr) { - unsigned long flags; - if (suppress_report(data->location)) return; - ubsan_prologue(data->location, &flags); + ubsan_prologue(data->location); pr_err("%s misaligned address %p for type %s\n", type_check_kinds[data->type_check_kind], (void *)ptr, data->type->type_name); pr_err("which requires %ld byte alignment\n", data->alignment); - ubsan_epilogue(&flags); + ubsan_epilogue(); } static void handle_object_size_mismatch(struct type_mismatch_data_common *data, unsigned long ptr) { - unsigned long flags; - if (suppress_report(data->location)) return; - ubsan_prologue(data->location, &flags); + ubsan_prologue(data->location); pr_err("%s address %p with insufficient space\n", type_check_kinds[data->type_check_kind], (void *) ptr); pr_err("for an object of type %s\n", data->type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } static void ubsan_type_mismatch_common(struct type_mismatch_data_common *data, @@ -356,12 +342,10 @@ EXPORT_SYMBOL(__ubsan_handle_type_mismatch_v1); void __ubsan_handle_nonnull_return(struct nonnull_return_data *data) { - unsigned long flags; - if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); pr_err("null pointer returned from function declared to never return null\n"); @@ -369,49 +353,46 @@ void __ubsan_handle_nonnull_return(struct nonnull_return_data *data) print_source_location("returns_nonnull attribute specified in", &data->attr_location); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_nonnull_return); void __ubsan_handle_vla_bound_not_positive(struct vla_bound_data *data, void *bound) { - unsigned long flags; char bound_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(bound_str, sizeof(bound_str), data->type, bound); pr_err("variable length array bound value %s <= 0\n", bound_str); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_vla_bound_not_positive); void __ubsan_handle_out_of_bounds(struct out_of_bounds_data *data, void *index) { - unsigned long flags; char index_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(index_str, sizeof(index_str), data->index_type, index); pr_err("index %s is out of range for type %s\n", index_str, data->array_type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_out_of_bounds); void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data, void *lhs, void *rhs) { - unsigned long flags; struct type_descriptor *rhs_type = data->rhs_type; struct type_descriptor *lhs_type = data->lhs_type; char rhs_str[VALUE_LENGTH]; @@ -420,7 +401,7 @@ void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data, if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(rhs_str, sizeof(rhs_str), rhs_type, rhs); val_to_string(lhs_str, sizeof(lhs_str), lhs_type, lhs); @@ -443,18 +424,16 @@ void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data, lhs_str, rhs_str, lhs_type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_shift_out_of_bounds); void 
__ubsan_handle_builtin_unreachable(struct unreachable_data *data) { - unsigned long flags; - - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); pr_err("calling __builtin_unreachable()\n"); - ubsan_epilogue(&flags); + ubsan_epilogue(); panic("can't return from __builtin_unreachable()"); } EXPORT_SYMBOL(__ubsan_handle_builtin_unreachable); @@ -462,19 +441,18 @@ EXPORT_SYMBOL(__ubsan_handle_builtin_unreachable); void __ubsan_handle_load_invalid_value(struct invalid_value_data *data, void *val) { - unsigned long flags; char val_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(val_str, sizeof(val_str), data->type, val); pr_err("load of value %s is not a valid value for type %s\n", val_str, data->type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_load_invalid_value);

From patchwork Thu Feb 27 14:33:25 2020
X-Patchwork-Submitter: Tom Zanussi
X-Patchwork-Id: 213199
From: zanussi@kernel.org
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Daniel Wagner, Tom Zanussi
Subject: [PATCH RT 14/23] Revert "ARM: Initialize split page table locks for vector page"
Date: Thu, 27 Feb 2020 08:33:25 -0600
From: Sebastian Andrzej Siewior

v4.14.170-rt75-rc2 stable review patch. If anyone has any objections, please let me know.

-----------

[ Upstream commit 247074c44d8c3e619dfde6404a52295d8d671d38 ]

I'm dropping this patch, with its original description:

|ARM: Initialize split page table locks for vector page
|
|Without this patch, ARM can not use SPLIT_PTLOCK_CPUS if
|PREEMPT_RT_FULL=y because vectors_user_mapping() creates a
|VM_ALWAYSDUMP mapping of the vector page (address 0xffff0000), but no
|ptl->lock has been allocated for the page. An attempt to coredump
|that page will result in a kernel NULL pointer dereference when
|follow_page() attempts to lock the page.
|
|The call tree to the NULL pointer dereference is:
|
| do_notify_resume()
|  get_signal_to_deliver()
|   do_coredump()
|    elf_core_dump()
|     get_dump_page()
|      __get_user_pages()
|       follow_page()
|        pte_offset_map_lock() <----- a #define
|         ...
|          rt_spin_lock()
|
|The underlying problem is exposed by mm-shrink-the-page-frame-to-rt-size.patch.

The patch named mm-shrink-the-page-frame-to-rt-size.patch was dropped from the RT queue once the SPLIT_PTLOCK_CPUS feature (in a slightly different shape) went upstream (somewhere between v3.12 and v3.14). I can see that the patch still allocates a lock which wasn't there before. However, I can't trigger the kernel oops described in the patch by forcing a coredump.

Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Tom Zanussi
---
 arch/arm/kernel/process.c | 24 ------------------------
 1 file changed, 24 deletions(-)

diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
index cf4e1452d4b4..d96714e1858c 100644
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -325,30 +325,6 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
 }
 
 #ifdef CONFIG_MMU
-/*
- * CONFIG_SPLIT_PTLOCK_CPUS results in a page->ptl lock. If the lock is not
- * initialized by pgtable_page_ctor() then a coredump of the vector page will
- * fail.
- */
-static int __init vectors_user_mapping_init_page(void)
-{
-	struct page *page;
-	unsigned long addr = 0xffff0000;
-	pgd_t *pgd;
-	pud_t *pud;
-	pmd_t *pmd;
-
-	pgd = pgd_offset_k(addr);
-	pud = pud_offset(pgd, addr);
-	pmd = pmd_offset(pud, addr);
-	page = pmd_page(*(pmd));
-
-	pgtable_page_ctor(page);
-
-	return 0;
-}
-late_initcall(vectors_user_mapping_init_page);
-
 #ifdef CONFIG_KUSER_HELPERS
 /*
  * The vectors page is always readable from user space for the

From patchwork Thu Feb 27 14:33:27 2020
X-Patchwork-Submitter: Tom Zanussi
X-Patchwork-Id: 213200
From: zanussi@kernel.org
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Daniel Wagner, Tom Zanussi
Subject: [PATCH RT 16/23] sched: migrate_enable: Use select_fallback_rq()
Date: Thu, 27 Feb 2020 08:33:27 -0600

From: Scott Wood

v4.14.170-rt75-rc2 stable review patch. If anyone has any objections, please let me know.

-----------

[ Upstream commit adfa969d4cfcc995a9d866020124e50f1827d2d1 ]

migrate_enable() currently open-codes a variant of select_fallback_rq().
However, it does not have the "No more Mr. Nice Guy" fallback, and thus it will pass an invalid CPU to the migration thread if cpus_mask only contains a CPU that is !active.

Signed-off-by: Scott Wood
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Tom Zanussi
---
 kernel/sched/core.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 189e6f08575e..46324d2099e3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7008,6 +7008,7 @@ void migrate_enable(void)
 	if (p->migrate_disable_update) {
 		struct rq *rq;
 		struct rq_flags rf;
+		int cpu = task_cpu(p);
 
 		rq = task_rq_lock(p, &rf);
 		update_rq_clock(rq);
@@ -7017,21 +7018,15 @@ void migrate_enable(void)
 
 		p->migrate_disable_update = 0;
 
-		WARN_ON(smp_processor_id() != task_cpu(p));
-		if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
-			const struct cpumask *cpu_valid_mask = cpu_active_mask;
-			struct migration_arg arg;
-			unsigned int dest_cpu;
-
-			if (p->flags & PF_KTHREAD) {
-				/*
-				 * Kernel threads are allowed on online && !active CPUs
-				 */
-				cpu_valid_mask = cpu_online_mask;
-			}
-			dest_cpu = cpumask_any_and(cpu_valid_mask, &p->cpus_mask);
-			arg.task = p;
-			arg.dest_cpu = dest_cpu;
+		WARN_ON(smp_processor_id() != cpu);
+		if (!cpumask_test_cpu(cpu, &p->cpus_mask)) {
+			struct migration_arg arg = { p };
+			struct rq_flags rf;
+
+			rq = task_rq_lock(p, &rf);
+			update_rq_clock(rq);
+			arg.dest_cpu = select_fallback_rq(cpu, p);
+			task_rq_unlock(rq, p, &rf);
 
 			unpin_current_cpu();
 			preempt_lazy_enable();
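For reference, select_fallback_rq() is the scheduler's standard "pick me a usable CPU" helper; its final stage (the "No more Mr. Nice Guy" step) widens the search beyond the allowed mask rather than returning an invalid CPU. A condensed sketch of the call pattern the hunk above switches to; declarations are added for clarity, and this is not a literal copy of the kernel code:

    struct migration_arg arg = { .task = p };
    struct rq_flags rf;
    struct rq *rq;

    rq = task_rq_lock(p, &rf);
    update_rq_clock(rq);
    /* Always yields a valid destination, even when nothing in
     * p->cpus_mask is active. */
    arg.dest_cpu = select_fallback_rq(task_cpu(p), p);
    task_rq_unlock(rq, p, &rf);

    stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);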
From patchwork Thu Feb 27 14:33:28 2020
X-Patchwork-Submitter: Tom Zanussi
X-Patchwork-Id: 213201
From: zanussi@kernel.org
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Daniel Wagner, Tom Zanussi
Subject: [PATCH RT 17/23] sched: Lazy migrate_disable processing
Date: Thu, 27 Feb 2020 08:33:28 -0600
Message-Id: <0516467cdf379e349f48ae0dd59ed760a037b869.1582814004.git.zanussi@kernel.org>

From: Scott Wood

v4.14.170-rt75-rc2 stable review patch. If anyone has any objections, please let me know.

-----------

[ Upstream commit 425c5b38779a860062aa62219dc920d374b13c17 ]

Avoid overhead on the majority of migrate disable/enable sequences by only manipulating scheduler data (and grabbing the relevant locks) when the task actually schedules while migrate-disabled. A kernel build showed around a 10% reduction in system time (with CONFIG_NR_CPUS=512).

Instead of cpuhp_pin_lock, CPU hotplug is handled by keeping a per-CPU count of the number of pinned tasks (including tasks which have not scheduled in the migrate-disabled section); takedown_cpu() will wait until that reaches zero (confirmed by take_cpu_down() in stop-machine context to deal with races) before migrating tasks off of the cpu.

To simplify synchronization, updating cpus_mask is no longer deferred until migrate_enable(). This lets us not have to worry about migrate_enable() missing the update if it's on the fast path (didn't schedule during the migrate-disabled section). It also makes the code a bit simpler and reduces deviation from mainline.

While the main motivation for this is the performance benefit, lazy migrate disable also eliminates the restriction on calling migrate_disable() while atomic but leaving the atomic region prior to calling migrate_enable() -- though this won't help with local_bh_disable() (and thus rcutorture) unless something similar is done with the recently added local_lock.
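Before the large diff, it may help to see the hotplug handshake the message describes in isolation: migrate_disable() bumps a per-CPU nr_pinned counter, takedown_cpu() sleeps until it reads zero, and take_cpu_down() re-checks the count inside stop_machine to close the race. A condensed sketch reconstructed from the kernel/cpu.c hunks below, with error paths and WARNs omitted:

    /* Runs in stop-machine context: if a task pinned this CPU after
     * takedown_cpu() saw nr_pinned == 0, back off and retry. */
    static int take_cpu_down_check(int cpu)
    {
            if (cpu_nr_pinned(cpu))
                    return -EAGAIN;
            return 0;
    }

    /* Runs in takedown_cpu(): sleep until the last pinned task on
     * this CPU calls migrate_enable(), which wakes us. */
    static void takedown_wait_for_unpin(int cpu)
    {
            takedown_cpu_task = current;
            for (;;) {
                    set_current_state(TASK_UNINTERRUPTIBLE);
                    if (cpu_nr_pinned(cpu) == 0)
                            break;
                    schedule();
            }
            set_current_state(TASK_RUNNING);
    }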
Signed-off-by: Scott Wood Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Tom Zanussi Conflicts: include/linux/sched.h init/init_task.c kernel/sched/core.c --- include/linux/cpu.h | 4 - include/linux/init_task.h | 9 +++ include/linux/sched.h | 11 +-- kernel/cpu.c | 103 ++++++++++---------------- kernel/sched/core.c | 182 +++++++++++++++++++--------------------------- kernel/sched/sched.h | 4 + lib/smp_processor_id.c | 3 + 7 files changed, 134 insertions(+), 182 deletions(-) diff --git a/include/linux/cpu.h b/include/linux/cpu.h index 580c1b5bee1e..0cb481727feb 100644 --- a/include/linux/cpu.h +++ b/include/linux/cpu.h @@ -127,8 +127,6 @@ extern void cpu_hotplug_disable(void); extern void cpu_hotplug_enable(void); void clear_tasks_mm_cpumask(int cpu); int cpu_down(unsigned int cpu); -extern void pin_current_cpu(void); -extern void unpin_current_cpu(void); #else /* CONFIG_HOTPLUG_CPU */ @@ -139,8 +137,6 @@ static inline void cpus_read_unlock(void) { } static inline void lockdep_assert_cpus_held(void) { } static inline void cpu_hotplug_disable(void) { } static inline void cpu_hotplug_enable(void) { } -static inline void pin_current_cpu(void) { } -static inline void unpin_current_cpu(void) { } #endif /* !CONFIG_HOTPLUG_CPU */ diff --git a/include/linux/init_task.h b/include/linux/init_task.h index ee3ff961b84c..3f7aa4dc7e1f 100644 --- a/include/linux/init_task.h +++ b/include/linux/init_task.h @@ -225,6 +225,14 @@ extern struct cred init_cred; #define INIT_TASK_SECURITY #endif +#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) && \ + defined(CONFIG_SCHED_DEBUG) +# define INIT_LAZY_MIGRATE(tsk) \ + .pinned_on_cpu = -1, +#else +# define INIT_LAZY_MIGRATE(tsk) +#endif + /* * INIT_TASK is used to set up the first task table, touch at * your own risk!. 
Base=0, limit=0x1fffff (=2MB) @@ -243,6 +251,7 @@ extern struct cred init_cred; .cpus_ptr = &tsk.cpus_mask, \ .cpus_mask = CPU_MASK_ALL, \ .nr_cpus_allowed= NR_CPUS, \ + INIT_LAZY_MIGRATE(tsk) \ .mm = NULL, \ .active_mm = &init_mm, \ .restart_block = { \ diff --git a/include/linux/sched.h b/include/linux/sched.h index 2bf136617d19..6e3eded72d8e 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -229,6 +229,8 @@ extern void io_schedule_finish(int token); extern long io_schedule_timeout(long timeout); extern void io_schedule(void); +int cpu_nr_pinned(int cpu); + /** * struct prev_cputime - snapshot of system and user cputime * @utime: time spent in user mode @@ -628,16 +630,13 @@ struct task_struct { cpumask_t cpus_mask; #if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) int migrate_disable; - int migrate_disable_update; - int pinned_on_cpu; + bool migrate_disable_scheduled; # ifdef CONFIG_SCHED_DEBUG - int migrate_disable_atomic; + int pinned_on_cpu; # endif - #elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) # ifdef CONFIG_SCHED_DEBUG int migrate_disable; - int migrate_disable_atomic; # endif #endif #ifdef CONFIG_PREEMPT_RT_FULL @@ -1883,4 +1882,6 @@ extern long sched_getaffinity(pid_t pid, struct cpumask *mask); #define TASK_SIZE_OF(tsk) TASK_SIZE #endif +extern struct task_struct *takedown_cpu_task; + #endif diff --git a/kernel/cpu.c b/kernel/cpu.c index 9be794896d87..17b1ed41bc06 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -75,11 +75,6 @@ static DEFINE_PER_CPU(struct cpuhp_cpu_state, cpuhp_state) = { .fail = CPUHP_INVALID, }; -#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PREEMPT_RT_FULL) -static DEFINE_PER_CPU(struct rt_rw_lock, cpuhp_pin_lock) = \ - __RWLOCK_RT_INITIALIZER(cpuhp_pin_lock); -#endif - #if defined(CONFIG_LOCKDEP) && defined(CONFIG_SMP) static struct lockdep_map cpuhp_state_up_map = STATIC_LOCKDEP_MAP_INIT("cpuhp_state-up", &cpuhp_state_up_map); @@ -293,57 +288,6 @@ static int cpu_hotplug_disabled; #ifdef CONFIG_HOTPLUG_CPU -/** - * pin_current_cpu - Prevent the current cpu from being unplugged - */ -void pin_current_cpu(void) -{ -#ifdef CONFIG_PREEMPT_RT_FULL - struct rt_rw_lock *cpuhp_pin; - unsigned int cpu; - int ret; - -again: - cpuhp_pin = this_cpu_ptr(&cpuhp_pin_lock); - ret = __read_rt_trylock(cpuhp_pin); - if (ret) { - current->pinned_on_cpu = smp_processor_id(); - return; - } - cpu = smp_processor_id(); - preempt_lazy_enable(); - preempt_enable(); - - sleeping_lock_inc(); - __read_rt_lock(cpuhp_pin); - sleeping_lock_dec(); - - preempt_disable(); - preempt_lazy_disable(); - if (cpu != smp_processor_id()) { - __read_rt_unlock(cpuhp_pin); - goto again; - } - current->pinned_on_cpu = cpu; -#endif -} - -/** - * unpin_current_cpu - Allow unplug of current cpu - */ -void unpin_current_cpu(void) -{ -#ifdef CONFIG_PREEMPT_RT_FULL - struct rt_rw_lock *cpuhp_pin = this_cpu_ptr(&cpuhp_pin_lock); - - if (WARN_ON(current->pinned_on_cpu != smp_processor_id())) - cpuhp_pin = per_cpu_ptr(&cpuhp_pin_lock, current->pinned_on_cpu); - - current->pinned_on_cpu = -1; - __read_rt_unlock(cpuhp_pin); -#endif -} - DEFINE_STATIC_PERCPU_RWSEM(cpu_hotplug_lock); void cpus_read_lock(void) @@ -878,6 +822,15 @@ static int take_cpu_down(void *_param) int err, cpu = smp_processor_id(); int ret; +#ifdef CONFIG_PREEMPT_RT_BASE + /* + * If any tasks disabled migration before we got here, + * go back and sleep again. + */ + if (cpu_nr_pinned(cpu)) + return -EAGAIN; +#endif + /* Ensure this CPU doesn't handle any more interrupts. 
*/ err = __cpu_disable(); if (err < 0) @@ -905,11 +858,10 @@ static int take_cpu_down(void *_param) return 0; } +struct task_struct *takedown_cpu_task; + static int takedown_cpu(unsigned int cpu) { -#ifdef CONFIG_PREEMPT_RT_FULL - struct rt_rw_lock *cpuhp_pin = per_cpu_ptr(&cpuhp_pin_lock, cpu); -#endif struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu); int err; @@ -922,17 +874,38 @@ static int takedown_cpu(unsigned int cpu) */ irq_lock_sparse(); -#ifdef CONFIG_PREEMPT_RT_FULL - __write_rt_lock(cpuhp_pin); +#ifdef CONFIG_PREEMPT_RT_BASE + WARN_ON_ONCE(takedown_cpu_task); + takedown_cpu_task = current; + +again: + /* + * If a task pins this CPU after we pass this check, take_cpu_down + * will return -EAGAIN. + */ + for (;;) { + int nr_pinned; + + set_current_state(TASK_UNINTERRUPTIBLE); + nr_pinned = cpu_nr_pinned(cpu); + if (nr_pinned == 0) + break; + schedule(); + } + set_current_state(TASK_RUNNING); #endif /* * So now all preempt/rcu users must observe !cpu_active(). */ err = stop_machine_cpuslocked(take_cpu_down, NULL, cpumask_of(cpu)); +#ifdef CONFIG_PREEMPT_RT_BASE + if (err == -EAGAIN) + goto again; +#endif if (err) { -#ifdef CONFIG_PREEMPT_RT_FULL - __write_rt_unlock(cpuhp_pin); +#ifdef CONFIG_PREEMPT_RT_BASE + takedown_cpu_task = NULL; #endif /* CPU refused to die */ irq_unlock_sparse(); @@ -952,8 +925,8 @@ static int takedown_cpu(unsigned int cpu) wait_for_ap_thread(st, false); BUG_ON(st->state != CPUHP_AP_IDLE_DEAD); -#ifdef CONFIG_PREEMPT_RT_FULL - __write_rt_unlock(cpuhp_pin); +#ifdef CONFIG_PREEMPT_RT_BASE + takedown_cpu_task = NULL; #endif /* Interrupts are moved away from the dying cpu, reenable alloc/free */ irq_unlock_sparse(); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 46324d2099e3..6066dc7bd7b2 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1100,7 +1100,8 @@ static int migration_cpu_stop(void *data) void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask) { cpumask_copy(&p->cpus_mask, new_mask); - p->nr_cpus_allowed = cpumask_weight(new_mask); + if (p->cpus_ptr == &p->cpus_mask) + p->nr_cpus_allowed = cpumask_weight(new_mask); } #if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) @@ -1111,8 +1112,7 @@ int __migrate_disabled(struct task_struct *p) EXPORT_SYMBOL_GPL(__migrate_disabled); #endif -static void __do_set_cpus_allowed_tail(struct task_struct *p, - const struct cpumask *new_mask) +void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask) { struct rq *rq = task_rq(p); bool queued, running; @@ -1141,20 +1141,6 @@ static void __do_set_cpus_allowed_tail(struct task_struct *p, set_curr_task(rq, p); } -void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask) -{ -#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) - if (__migrate_disabled(p)) { - lockdep_assert_held(&p->pi_lock); - - cpumask_copy(&p->cpus_mask, new_mask); - p->migrate_disable_update = 1; - return; - } -#endif - __do_set_cpus_allowed_tail(p, new_mask); -} - /* * Change a given task's CPU affinity. Migrate the thread to a * proper CPU and schedule it away if the CPU it's executing on @@ -1214,7 +1200,8 @@ static int __set_cpus_allowed_ptr(struct task_struct *p, } /* Can the task run on the task's current CPU? 
If so, we're done */ - if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p)) + if (cpumask_test_cpu(task_cpu(p), new_mask) || + p->cpus_ptr != &p->cpus_mask) goto out; if (task_running(rq, p) || p->state == TASK_WAKING) { @@ -3325,6 +3312,8 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf) BUG(); } +static void migrate_disabled_sched(struct task_struct *p); + /* * __schedule() is the main scheduler function. * @@ -3392,6 +3381,9 @@ static void __sched notrace __schedule(bool preempt) rq_lock(rq, &rf); smp_mb__after_spinlock(); + if (__migrate_disabled(prev)) + migrate_disabled_sched(prev); + /* Promote REQ to ACT */ rq->clock_update_flags <<= 1; update_rq_clock(rq); @@ -5618,6 +5610,8 @@ static void migrate_tasks(struct rq *dead_rq, struct rq_flags *rf) BUG_ON(!next); put_prev_task(rq, next); + WARN_ON_ONCE(__migrate_disabled(next)); + /* * Rules for changing task_struct::cpus_mask are holding * both pi_lock and rq->lock, such that holding either @@ -6920,14 +6914,9 @@ update_nr_migratory(struct task_struct *p, long delta) static inline void migrate_disable_update_cpus_allowed(struct task_struct *p) { - struct rq *rq; - struct rq_flags rf; - - rq = task_rq_lock(p, &rf); p->cpus_ptr = cpumask_of(smp_processor_id()); update_nr_migratory(p, -1); p->nr_cpus_allowed = 1; - task_rq_unlock(rq, p, &rf); } static inline void @@ -6945,54 +6934,35 @@ migrate_enable_update_cpus_allowed(struct task_struct *p) void migrate_disable(void) { - struct task_struct *p = current; + preempt_disable(); - if (in_atomic() || irqs_disabled()) { + if (++current->migrate_disable == 1) { + this_rq()->nr_pinned++; + preempt_lazy_disable(); #ifdef CONFIG_SCHED_DEBUG - p->migrate_disable_atomic++; + WARN_ON_ONCE(current->pinned_on_cpu >= 0); + current->pinned_on_cpu = smp_processor_id(); #endif - return; - } -#ifdef CONFIG_SCHED_DEBUG - if (unlikely(p->migrate_disable_atomic)) { - tracing_off(); - WARN_ON_ONCE(1); } -#endif - if (p->migrate_disable) { - p->migrate_disable++; - return; - } + preempt_enable(); +} +EXPORT_SYMBOL(migrate_disable); - preempt_disable(); - preempt_lazy_disable(); - pin_current_cpu(); +static void migrate_disabled_sched(struct task_struct *p) +{ + if (p->migrate_disable_scheduled) + return; migrate_disable_update_cpus_allowed(p); - p->migrate_disable = 1; - - preempt_enable(); + p->migrate_disable_scheduled = 1; } -EXPORT_SYMBOL(migrate_disable); void migrate_enable(void) { struct task_struct *p = current; - - if (in_atomic() || irqs_disabled()) { -#ifdef CONFIG_SCHED_DEBUG - p->migrate_disable_atomic--; -#endif - return; - } - -#ifdef CONFIG_SCHED_DEBUG - if (unlikely(p->migrate_disable_atomic)) { - tracing_off(); - WARN_ON_ONCE(1); - } -#endif + struct rq *rq = this_rq(); + int cpu = task_cpu(p); WARN_ON_ONCE(p->migrate_disable <= 0); if (p->migrate_disable > 1) { @@ -7002,67 +6972,69 @@ void migrate_enable(void) preempt_disable(); +#ifdef CONFIG_SCHED_DEBUG + WARN_ON_ONCE(current->pinned_on_cpu != cpu); + current->pinned_on_cpu = -1; +#endif + + WARN_ON_ONCE(rq->nr_pinned < 1); + p->migrate_disable = 0; + rq->nr_pinned--; + if (rq->nr_pinned == 0 && unlikely(!cpu_active(cpu)) && + takedown_cpu_task) + wake_up_process(takedown_cpu_task); + + if (!p->migrate_disable_scheduled) + goto out; + + p->migrate_disable_scheduled = 0; + migrate_enable_update_cpus_allowed(p); - if (p->migrate_disable_update) { - struct rq *rq; + WARN_ON(smp_processor_id() != cpu); + if (!is_cpu_allowed(p, cpu)) { + struct migration_arg arg = { p }; struct rq_flags rf; - int 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4ec44bcf7d6d..04c41c997a0e 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -811,6 +811,10 @@ struct rq {
 	/* Must be inspected within a rcu lock section */
 	struct cpuidle_state *idle_state;
 #endif
+
+#if defined(CONFIG_PREEMPT_RT_BASE) && defined(CONFIG_SMP)
+	int nr_pinned;
+#endif
 };
 
 static inline int cpu_of(struct rq *rq)
diff --git a/lib/smp_processor_id.c b/lib/smp_processor_id.c
index 9f3c8bb62e57..7a0c19c282cc 100644
--- a/lib/smp_processor_id.c
+++ b/lib/smp_processor_id.c
@@ -23,6 +23,9 @@ notrace static unsigned int check_preemption_disabled(const char *what1,
 	 * Kernel threads bound to a single CPU can safely use
 	 * smp_processor_id():
 	 */
+	if (current->migrate_disable)
+		goto out;
+
 	if (current->nr_cpus_allowed == 1)
 		goto out;
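The lib/smp_processor_id.c hunk widens the debug check: a task inside a migrate_disable() section cannot be migrated, so its use of the raw processor id is as safe as that of a task affine to a single CPU. A compilable sketch of the decision order follows; the function and parameters are illustrative, not the kernel's interface.

#include <stdio.h>

static int raw_cpu_id_is_safe(int migrate_disable_depth, int nr_cpus_allowed,
			      int preempt_count, int irqs_disabled)
{
	if (migrate_disable_depth)	/* the new case added by the hunk */
		return 1;
	if (nr_cpus_allowed == 1)	/* bound to one CPU anyway */
		return 1;
	if (preempt_count || irqs_disabled)
		return 1;
	return 0;			/* the kernel would warn here */
}

int main(void)
{
	printf("%d\n", raw_cpu_id_is_safe(1, 8, 0, 0));	/* 1: pinned */
	printf("%d\n", raw_cpu_id_is_safe(0, 8, 0, 0));	/* 0: unsafe */
	return 0;
}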
From patchwork Thu Feb 27 14:33:30 2020
X-Patchwork-Submitter: Tom Zanussi
X-Patchwork-Id: 213202
From: zanussi@kernel.org
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Daniel Wagner, Tom Zanussi
Subject: [PATCH RT 19/23] sched/core: migrate_enable() must access takedown_cpu_task on !HOTPLUG_CPU
Date: Thu, 27 Feb 2020 08:33:30 -0600

From: Sebastian Andrzej Siewior

v4.14.170-rt75-rc2 stable review patch. If anyone has any objections, please let me know.

-----------

[ Upstream commit a61d1977f692e46bad99a100f264981ba08cb4bd ]

The variable takedown_cpu_task is never declared/used on !HOTPLUG_CPU except for migrate_enable(). This leads to a link error.

Don't use takedown_cpu_task in !HOTPLUG_CPU.
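The fix below is the usual pattern for conditionally built symbols: every reference must be compiled out in configurations that lack the definition, or the build breaks at link time. A toy version that builds both with and without the option; the names are illustrative, and CONFIG_HOTPLUG_CPU here is only a preprocessor define (e.g. cc -DCONFIG_HOTPLUG_CPU toy.c).

#include <stdio.h>

#ifdef CONFIG_HOTPLUG_CPU
int takedown_pending;		/* defined only in this configuration */
#endif

static void migrate_enable_tail(void)
{
#ifdef CONFIG_HOTPLUG_CPU
	if (takedown_pending)	/* guarded, so every config links */
		puts("waking the takedown task");
#endif
	/* an unguarded reference would fail to link without the define */
}

int main(void)
{
	migrate_enable_tail();
	return 0;
}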
Reported-by: Dick Hollenbeck
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Tom Zanussi
---
 kernel/cpu.c        | 2 ++
 kernel/sched/core.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 17b1ed41bc06..861712ebb81d 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -858,7 +858,9 @@ static int take_cpu_down(void *_param)
 	return 0;
 }
 
+#ifdef CONFIG_PREEMPT_RT_BASE
 struct task_struct *takedown_cpu_task;
+#endif
 
 static int takedown_cpu(unsigned int cpu)
 {
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ceddb1e27caf..e10e3956bb29 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6987,9 +6987,11 @@ void migrate_enable(void)
 	p->migrate_disable = 0;
 	rq->nr_pinned--;
 
+#ifdef CONFIG_HOTPLUG_CPU
 	if (rq->nr_pinned == 0 && unlikely(!cpu_active(cpu)) &&
 	    takedown_cpu_task)
 		wake_up_process(takedown_cpu_task);
+#endif
 
 	if (!p->migrate_disable_scheduled)
 		goto out;

From patchwork Thu Feb 27 14:33:32 2020
X-Patchwork-Submitter: Tom Zanussi
X-Patchwork-Id: 213203
From: zanussi@kernel.org
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Daniel Wagner, Tom Zanussi
Subject: [PATCH RT 21/23] sched: migrate_enable: Busy loop until the migration request is completed
Date: Thu, 27 Feb 2020 08:33:32 -0600

From: Sebastian Andrzej Siewior

v4.14.170-rt75-rc2 stable review patch. If anyone has any objections, please let me know.

-----------

[ Upstream commit 140d7f54a5fff02898d2ca9802b39548bf7455f1 ]

If a user task changes the CPU affinity mask of a running task, it will dispatch a migration request if the task's current CPU is no longer allowed. This might happen shortly before the task enters a migrate_disable() section. Upon leaving the migrate_disable() section, the task will notice that the current CPU is no longer allowed and will dispatch its own migration request to move it off the current CPU. While invoking __schedule(), the first migration request will be processed and the task returns on the "new" CPU with "arg.done = 0". Its own migration request will be processed shortly after and will result in memory corruption if the stack memory reserved for the request has been reused in the meantime.

Spin until the migration request has been processed, if it was accepted.

Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Tom Zanussi
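The rule being enforced is about object lifetime: arg lives on the migrating task's stack, so the task must not leave that stack frame until the stopper has made its final access. A user-space sketch of the same rule follows; the names are illustrative, and a pthread plus C11 atomics stand in for the stopper thread and cpu_relax().

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

struct migration_request {
	int dest_cpu;
	atomic_int done;	/* mirrors arg.done in the patch */
};

static void *stopper(void *p)	/* stand-in for migration_cpu_stop() */
{
	struct migration_request *req = p;

	printf("migrating to CPU %d\n", req->dest_cpu);
	atomic_store(&req->done, 1);	/* last touch of the caller's stack */
	return NULL;
}

static void request_migration(int cpu)
{
	struct migration_request req = { .dest_cpu = cpu, .done = 0 };
	pthread_t t;

	pthread_create(&t, NULL, stopper, &req);
	pthread_detach(t);
	/*
	 * Returning now would let "req" be reused as ordinary stack
	 * memory while the worker may still write to it - exactly the
	 * corruption described above. Hence: spin until it is done.
	 */
	while (!atomic_load(&req.done))
		sched_yield();	/* user-space stand-in for cpu_relax() */
}

int main(void)
{
	request_migration(1);
	return 0;
}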
---
 kernel/sched/core.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e10e3956bb29..f30bb249123b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7002,7 +7002,7 @@ void migrate_enable(void)
 
 	WARN_ON(smp_processor_id() != cpu);
 	if (!is_cpu_allowed(p, cpu)) {
-		struct migration_arg arg = { p };
+		struct migration_arg arg = { .task = p };
 		struct cpu_stop_work work;
 		struct rq_flags rf;
 
@@ -7015,7 +7015,10 @@ void migrate_enable(void)
 			    &arg, &work);
 		tlb_migrate_finish(p->mm);
 		__schedule(true);
-		WARN_ON_ONCE(!arg.done && !work.disabled);
+		if (!work.disabled) {
+			while (!arg.done)
+				cpu_relax();
+		}
 	}
 
 out: