From patchwork Fri Jan 17 17:41:12 2020
From: Steven Rostedt
Date: Fri, 17 Jan 2020 12:41:12 -0500
Subject: [PATCH RT 01/32] i2c: exynos5: Remove IRQF_ONESHOT
Message-Id: <20200117174127.480853193@goodmis.org>

4.19.94-rt39-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior

[ Upstream commit 4b217df0ab3f7910c96e42091cc7d9f221d05f01 ]

The driver sets IRQF_ONESHOT and passes only a primary handler. The IRQ
is masked while the primary handler is invoked, independently of
IRQF_ONESHOT. With IRQF_ONESHOT the core code will not force-thread the
interrupt, and this is probably not intended. I *assume* that the
original author copied the IRQ registration from another driver which
passed a primary and a secondary handler, and removed the secondary
handler while keeping the ONESHOT flag.

Remove IRQF_ONESHOT.
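
For illustration, a minimal sketch of the two registration styles (the
"foo" names are hypothetical): IRQF_ONESHOT keeps the line masked until
a threaded handler finishes, so it only makes sense together with a
thread function. With a primary handler only, the flag merely keeps the
core from force-threading the handler:

	/* Primary handler only: no ONESHOT needed. */
	ret = devm_request_irq(&pdev->dev, irq, foo_irq_handler,
			       IRQF_NO_SUSPEND, dev_name(&pdev->dev), foo);

	/*
	 * A driver that actually needs ONESHOT pairs it with a thread
	 * function; the line stays masked until foo_thread_fn() completes.
	 */
	ret = devm_request_threaded_irq(&pdev->dev, irq, foo_quick_check,
					foo_thread_fn, IRQF_ONESHOT,
					dev_name(&pdev->dev), foo);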
Reported-by: Benjamin Rouxel
Tested-by: Benjamin Rouxel
Cc: Kukjin Kim
Cc: Krzysztof Kozlowski
Cc: linux-samsung-soc@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Steven Rostedt (VMware)
---
 drivers/i2c/busses/i2c-exynos5.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/i2c/busses/i2c-exynos5.c b/drivers/i2c/busses/i2c-exynos5.c
index c1ce2299a76e..5c57ecf4b79e 100644
--- a/drivers/i2c/busses/i2c-exynos5.c
+++ b/drivers/i2c/busses/i2c-exynos5.c
@@ -800,9 +800,7 @@ static int exynos5_i2c_probe(struct platform_device *pdev)
 	}
 
 	ret = devm_request_irq(&pdev->dev, i2c->irq, exynos5_i2c_irq,
-			       IRQF_NO_SUSPEND | IRQF_ONESHOT,
-			       dev_name(&pdev->dev), i2c);
-
+			       IRQF_NO_SUSPEND, dev_name(&pdev->dev), i2c);
 	if (ret != 0) {
 		dev_err(&pdev->dev, "cannot request HS-I2C IRQ %d\n", i2c->irq);
 		goto err_clk;

From patchwork Fri Jan 17 17:41:13 2020
From: Steven Rostedt
Date: Fri, 17 Jan 2020 12:41:13 -0500
Subject: [PATCH RT 02/32] i2c: hix5hd2: Remove IRQF_ONESHOT
Message-Id: <20200117174127.641378857@goodmis.org>

4.19.94-rt39-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior

[ Upstream commit e88b481f3f86f11e3243e0808a830e5ca5782a9d ]

The driver sets IRQF_ONESHOT and passes only a primary handler. The IRQ
is masked while the primary handler is invoked, independently of
IRQF_ONESHOT. With IRQF_ONESHOT the core code will not force-thread the
interrupt, and this is probably not intended.
I *assume* that the original author copied the IRQ registration from
another driver which passed a primary and a secondary handler, and
removed the secondary handler while keeping the ONESHOT flag.

Remove IRQF_ONESHOT.

Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Steven Rostedt (VMware)
---
 drivers/i2c/busses/i2c-hix5hd2.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/i2c/busses/i2c-hix5hd2.c b/drivers/i2c/busses/i2c-hix5hd2.c
index 061a4bfb03f4..575aff50b19a 100644
--- a/drivers/i2c/busses/i2c-hix5hd2.c
+++ b/drivers/i2c/busses/i2c-hix5hd2.c
@@ -449,8 +449,7 @@ static int hix5hd2_i2c_probe(struct platform_device *pdev)
 	hix5hd2_i2c_init(priv);
 
 	ret = devm_request_irq(&pdev->dev, irq, hix5hd2_i2c_irq,
-			       IRQF_NO_SUSPEND | IRQF_ONESHOT,
-			       dev_name(&pdev->dev), priv);
+			       IRQF_NO_SUSPEND, dev_name(&pdev->dev), priv);
 	if (ret != 0) {
 		dev_err(&pdev->dev, "cannot request HS-I2C IRQ %d\n", irq);
 		goto err_clk;

From patchwork Fri Jan 17 17:41:14 2020
From: Steven Rostedt
Date: Fri, 17 Jan 2020 12:41:14 -0500
Subject: [PATCH RT 03/32] sched/deadline: Ensure inactive_timer runs in hardirq context
Message-Id: <20200117174127.780104410@goodmis.org>

4.19.94-rt39-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Juri Lelli

[ Upstream commit ba94e7aed7405c58251b1380e6e7d73aa8284b41 ]

The SCHED_DEADLINE inactive timer needs to run in hardirq context (as
dl_task_timer already does) on PREEMPT_RT.

Change the mode to HRTIMER_MODE_REL_HARD.
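
As a sketch of the pattern (the "my_timer", "my_timer_fn" and "delay_ns"
names are illustrative): the _HARD expiry mode has to be used
consistently at both the init and the start site, which is what the
start-site fixup noted below refers to:

	hrtimer_init(&my_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
	my_timer.function = my_timer_fn;

	/*
	 * The start mode must match the init mode, otherwise the
	 * hrtimer mode debugging complains.
	 */
	hrtimer_start(&my_timer, ns_to_ktime(delay_ns),
		      HRTIMER_MODE_REL_HARD);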
[ tglx: Fixed up the start site, so mode debugging works ]

Signed-off-by: Juri Lelli
Signed-off-by: Thomas Gleixner
Link: https://lkml.kernel.org/r/20190731103715.4047-1-juri.lelli@redhat.com
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Steven Rostedt (VMware)
---
 kernel/sched/deadline.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 974a8f9b615a..929167a1d991 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -287,7 +287,7 @@ static void task_non_contending(struct task_struct *p)
 	dl_se->dl_non_contending = 1;
 	get_task_struct(p);
-	hrtimer_start(timer, ns_to_ktime(zerolag_time), HRTIMER_MODE_REL);
+	hrtimer_start(timer, ns_to_ktime(zerolag_time), HRTIMER_MODE_REL_HARD);
 }
 
 static void task_contending(struct sched_dl_entity *dl_se, int flags)
@@ -1325,7 +1325,7 @@ void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se)
 {
 	struct hrtimer *timer = &dl_se->inactive_timer;
 
-	hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
 	timer->function = inactive_task_timer;
 }
From patchwork Fri Jan 17 17:41:15 2020
From: Steven Rostedt
Date: Fri, 17 Jan 2020 12:41:15 -0500
Subject: [PATCH RT 04/32] thermal/x86_pkg_temp: make pkg_temp_lock a raw spinlock
Message-Id: <20200117174127.925738696@goodmis.org>

4.19.94-rt39-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Clark Williams

[ Upstream commit 8b03bb3ea7861b70b506199a69b1c8f81fe2d4d0 ]

The spinlock pkg_temp_lock has the potential of being taken in atomic
context on v5.2-rt PREEMPT_RT. It is static and of limited scope, so go
ahead and make it a raw spinlock.

Signed-off-by: Clark Williams
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Steven Rostedt (VMware)
---
 drivers/thermal/x86_pkg_temp_thermal.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/drivers/thermal/x86_pkg_temp_thermal.c b/drivers/thermal/x86_pkg_temp_thermal.c
index 1ef937d799e4..540becb78a0f 100644
--- a/drivers/thermal/x86_pkg_temp_thermal.c
+++ b/drivers/thermal/x86_pkg_temp_thermal.c
@@ -75,7 +75,7 @@ static int max_packages __read_mostly;
 /* Array of package pointers */
 static struct pkg_device **packages;
 /* Serializes interrupt notification, work and hotplug */
-static DEFINE_SPINLOCK(pkg_temp_lock);
+static DEFINE_RAW_SPINLOCK(pkg_temp_lock);
 /* Protects zone operation in the work function against hotplug removal */
 static DEFINE_MUTEX(thermal_zone_mutex);
 
@@ -291,12 +291,12 @@ static void pkg_temp_thermal_threshold_work_fn(struct work_struct *work)
 	u64 msr_val, wr_val;
 
 	mutex_lock(&thermal_zone_mutex);
-	spin_lock_irq(&pkg_temp_lock);
+	raw_spin_lock_irq(&pkg_temp_lock);
 	++pkg_work_cnt;
 
 	pkgdev = pkg_temp_thermal_get_dev(cpu);
 	if (!pkgdev) {
-		spin_unlock_irq(&pkg_temp_lock);
+		raw_spin_unlock_irq(&pkg_temp_lock);
 		mutex_unlock(&thermal_zone_mutex);
 		return;
 	}
@@ -310,7 +310,7 @@ static void pkg_temp_thermal_threshold_work_fn(struct work_struct *work)
 	}
 
 	enable_pkg_thres_interrupt();
-	spin_unlock_irq(&pkg_temp_lock);
+	raw_spin_unlock_irq(&pkg_temp_lock);
 
 	/*
 	 * If tzone is not NULL, then thermal_zone_mutex will prevent the
@@ -335,7 +335,7 @@ static int pkg_thermal_notify(u64 msr_val)
 	struct pkg_device *pkgdev;
 	unsigned long flags;
 
-	spin_lock_irqsave(&pkg_temp_lock, flags);
+	raw_spin_lock_irqsave(&pkg_temp_lock, flags);
 	++pkg_interrupt_cnt;
 
 	disable_pkg_thres_interrupt();
@@ -347,7 +347,7 @@ static int pkg_thermal_notify(u64 msr_val)
 		pkg_thermal_schedule_work(pkgdev->cpu, &pkgdev->work);
 	}
 
-	spin_unlock_irqrestore(&pkg_temp_lock, flags);
+	raw_spin_unlock_irqrestore(&pkg_temp_lock, flags);
 	return 0;
 }
 
@@ -393,9 +393,9 @@ static int pkg_temp_thermal_device_add(unsigned int cpu)
 			pkgdev->msr_pkg_therm_high);
 
 	cpumask_set_cpu(cpu, &pkgdev->cpumask);
-	spin_lock_irq(&pkg_temp_lock);
+	raw_spin_lock_irq(&pkg_temp_lock);
 	packages[pkgid] = pkgdev;
-	spin_unlock_irq(&pkg_temp_lock);
+	raw_spin_unlock_irq(&pkg_temp_lock);
 	return 0;
 }
 
@@ -432,7 +432,7 @@ static int pkg_thermal_cpu_offline(unsigned int cpu)
 	}
 
 	/* Protect against work and interrupts */
-	spin_lock_irq(&pkg_temp_lock);
+	raw_spin_lock_irq(&pkg_temp_lock);
 
 	/*
 	 * Check whether this cpu was the current target and store the new
@@ -464,9 +464,9 @@ static int pkg_thermal_cpu_offline(unsigned int cpu)
 	 * To cancel the work we need to drop the lock, otherwise
 	 * we might deadlock if the work needs to be flushed.
 	 */
-	spin_unlock_irq(&pkg_temp_lock);
+	raw_spin_unlock_irq(&pkg_temp_lock);
 	cancel_delayed_work_sync(&pkgdev->work);
-	spin_lock_irq(&pkg_temp_lock);
+	raw_spin_lock_irq(&pkg_temp_lock);
 	/*
 	 * If this is not the last cpu in the package and the work
 	 * did not run after we dropped the lock above, then we
@@ -477,7 +477,7 @@ static int pkg_thermal_cpu_offline(unsigned int cpu)
 		pkg_thermal_schedule_work(target, &pkgdev->work);
 	}
 
-	spin_unlock_irq(&pkg_temp_lock);
+	raw_spin_unlock_irq(&pkg_temp_lock);
 
 	/* Final cleanup if this is the last cpu */
 	if (lastcpu)
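
To illustrate the distinction (the "demo" names are hypothetical): on
PREEMPT_RT a spinlock_t is substituted by a sleeping rt_mutex-based
lock, which must not be taken from a path that cannot sleep, such as
the thermal interrupt notification above. A raw_spinlock_t remains a
true spinning lock on both kernels:

	static DEFINE_RAW_SPINLOCK(demo_lock);
	static int demo_data;

	static void demo_update_from_irq(int val)
	{
		unsigned long flags;

		/* Safe in hard interrupt context, also on RT. */
		raw_spin_lock_irqsave(&demo_lock, flags);
		demo_data = val;
		raw_spin_unlock_irqrestore(&demo_lock, flags);
	}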
From patchwork Fri Jan 17 17:41:17 2020
From: Steven Rostedt
Date: Fri, 17 Jan 2020 12:41:17 -0500
Subject: [PATCH RT 06/32] KVM: arm/arm64: Let the timer expire in hardirq context on RT
Message-Id: <20200117174128.213232389@goodmis.org>

4.19.94-rt39-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner

[ Upstream commit 719cc080c914045a6e35787bf4dc3ba91cfd3efb ]

The timers are canceled from a preempt-notifier which is invoked with
preemption disabled, which is not allowed on PREEMPT_RT. The timer
callback is short, so it could be invoked in hard-IRQ context on -RT.

Let the timer expire in hard-IRQ context even on -RT.
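
A sketch of why the hard expiry mode helps here (the function name is
hypothetical): canceling a timer whose callback runs in softirq context
may, on RT, have to wait for the softirq thread and thus sleep, which
is forbidden with preemption disabled. A hardirq-expiring timer can be
canceled from such a context:

	static void cancel_from_preempt_notifier(struct hrtimer *hrt)
	{
		/*
		 * Tolerable with preemption disabled only because the
		 * timer was initialized with an _HARD mode: its callback
		 * runs in hard interrupt context, so cancel never has to
		 * sleep waiting for a softirq thread to finish.
		 */
		hrtimer_cancel(hrt);
	}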
Signed-off-by: Thomas Gleixner
Acked-by: Marc Zyngier
Tested-by: Julien Grall
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Steven Rostedt (VMware)
---
 virt/kvm/arm/arch_timer.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 17cecc96f735..217d39f40393 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -67,7 +67,7 @@ static inline bool userspace_irqchip(struct kvm *kvm)
 static void soft_timer_start(struct hrtimer *hrt, u64 ns)
 {
 	hrtimer_start(hrt, ktime_add_ns(ktime_get(), ns),
-		      HRTIMER_MODE_ABS);
+		      HRTIMER_MODE_ABS_HARD);
 }
 
 static void soft_timer_cancel(struct hrtimer *hrt, struct work_struct *work)
@@ -638,10 +638,10 @@ void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
 	vcpu_ptimer(vcpu)->cntvoff = 0;
 
 	INIT_WORK(&timer->expired, kvm_timer_inject_irq_work);
-	hrtimer_init(&timer->bg_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+	hrtimer_init(&timer->bg_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD);
 	timer->bg_timer.function = kvm_bg_timer_expire;
 
-	hrtimer_init(&timer->phys_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+	hrtimer_init(&timer->phys_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD);
 	timer->phys_timer.function = kvm_phys_timer_expire;
 
 	vtimer->irq.irq = default_vtimer_irq.irq;

From patchwork Fri Jan 17 17:41:21 2020
From: Steven Rostedt
Date: Fri, 17 Jan 2020 12:41:21 -0500
Subject: [PATCH RT 10/32] hrtimer: Prevent using hrtimer_grab_expiry_lock() on migration_base
Message-Id: <20200117174128.792272889@goodmis.org>
4.19.94-rt39-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Julien Grall

[ Upstream commit cef1b87f98823af923a386f3f69149acb212d4a1 ]

As tglx puts it:

|If base == migration_base then there is no point to lock soft_expiry_lock
|simply because the timer is not executing the callback in soft irq context
|and the whole lock/unlock dance can be avoided.

Furthermore, all the paths leading to hrtimer_grab_expiry_lock() assume
timer->base and timer->base->cpu_base are always non-NULL. So it is
safe to remove the NULL checks here.

Signed-off-by: Julien Grall
Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1908211557420.2223@nanos.tec.linutronix.de
Signed-off-by: Steven Rostedt (VMware)
[bigeasy: rewrite changelog]
Signed-off-by: Sebastian Andrzej Siewior
---
 kernel/time/hrtimer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 49d20fe8570f..1a5167c68310 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -943,7 +943,7 @@ void hrtimer_grab_expiry_lock(const struct hrtimer *timer)
 {
 	struct hrtimer_clock_base *base = READ_ONCE(timer->base);
 
-	if (timer->is_soft && base && base->cpu_base) {
+	if (timer->is_soft && base != &migration_base) {
 		spin_lock(&base->cpu_base->softirq_expiry_lock);
 		spin_unlock(&base->cpu_base->softirq_expiry_lock);
 	}
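
The "lock/unlock dance" is a generic synchronization pattern; a sketch
with hypothetical names: a softirq callback holds the lock while it
runs, and a canceler waits for the callback to finish simply by passing
through the same lock:

	static void wait_for_callback(spinlock_t *cb_lock)
	{
		spin_lock(cb_lock);	/* blocks until the callback drops it */
		spin_unlock(cb_lock);	/* nothing to do; we only synchronized */
	}

For a timer sitting on migration_base no callback can be running, so
the dance is pure overhead and can be skipped, as the patch does.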
From patchwork Fri Jan 17 17:41:23 2020
From: Steven Rostedt
Date: Fri, 17 Jan 2020 12:41:23 -0500
Subject: [PATCH RT 12/32] posix-timers: Unlock expiry lock in the early return
Message-Id: <20200117174129.076462344@goodmis.org>

4.19.94-rt39-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior

[ Upstream commit 356a2781375ec58521a9bc3f500488745990c242 ]

Patch ("posix-timers: Add expiry lock") acquired a lock in
run_posix_cpu_timers() but didn't drop the lock in the early return.

Unlock the lock in the early return path.

Reported-by: kbuild test robot
Reported-by: Dan Carpenter
Reviewed-by: Thomas Gleixner
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Steven Rostedt (VMware)
---
 kernel/time/posix-cpu-timers.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
index 765e700962ab..c9964dc3276b 100644
--- a/kernel/time/posix-cpu-timers.c
+++ b/kernel/time/posix-cpu-timers.c
@@ -1175,8 +1175,10 @@ static void __run_posix_cpu_timers(struct task_struct *tsk)
 	expiry_lock = this_cpu_ptr(&cpu_timer_expiry_lock);
 	spin_lock(expiry_lock);
 
-	if (!lock_task_sighand(tsk, &flags))
+	if (!lock_task_sighand(tsk, &flags)) {
+		spin_unlock(expiry_lock);
 		return;
+	}
 	/*
 	 * Here we take off tsk->signal->cpu_timers[N] and
 	 * tsk->cpu_timers[N] all the timers that are firing, and
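
The bug class is worth spelling out; a minimal sketch with hypothetical
"demo" names, showing that every return path taken after the lock is
acquired must also drop it:

	static void demo_run_timers(struct demo_cpu *cpu)
	{
		spin_lock(&cpu->expiry_lock);

		if (!demo_lock_task(cpu)) {
			spin_unlock(&cpu->expiry_lock); /* was missing on this path */
			return;
		}

		/* ... expire timers ... */

		demo_unlock_task(cpu);
		spin_unlock(&cpu->expiry_lock);
	}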
From patchwork Fri Jan 17 17:41:25 2020
From: Steven Rostedt
Date: Fri, 17 Jan 2020 12:41:25 -0500
Subject: [PATCH RT 14/32] sched: __set_cpus_allowed_ptr: Check cpus_mask, not cpus_ptr
Message-Id: <20200117174129.434057274@goodmis.org>

4.19.94-rt39-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Scott Wood

[ Upstream commit e5606fb7b042db634ed62b4dd733d62e050e468f ]

This function is concerned with the long-term cpu mask, not the
transitory mask the task might have while migrate disabled. Before
this patch, if a task was migrate disabled at the time
__set_cpus_allowed_ptr() was called, and the new mask happened to be
equal to the cpu that the task was running on, then the mask update
would be lost.

Signed-off-by: Scott Wood
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Steven Rostedt (VMware)
---
 kernel/sched/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3413b9ebef1f..d6bd8129a390 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1157,7 +1157,7 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
 		goto out;
 	}
 
-	if (cpumask_equal(p->cpus_ptr, new_mask))
+	if (cpumask_equal(&p->cpus_mask, new_mask))
 		goto out;
 
 	dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);

From patchwork Fri Jan 17 17:41:27 2020
From: Steven Rostedt
Date: Fri, 17 Jan 2020 12:41:27 -0500
Subject: [PATCH RT 16/32] sched: migrate disable: Protect cpus_ptr with lock
Message-Id: <20200117174129.785425698@goodmis.org>

4.19.94-rt39-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Scott Wood

[ Upstream commit 27ee52a891ed2c7e2e2c8332ccae0de7c2674b09 ]

Various places assume that cpus_ptr is protected by rq/pi locks, so
don't change it before grabbing those locks.
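
For reference, this is the shape of migrate_disable_update_cpus_allowed()
after the change (reassembled from the diff that follows, comment
added): the pointer update now happens only once the rq/pi locks are
held, so readers that take those locks never see a half-done
transition:

	static inline void
	migrate_disable_update_cpus_allowed(struct task_struct *p)
	{
		struct rq *rq;
		struct rq_flags rf;

		rq = task_rq_lock(p, &rf);
		/* Publish the transitory mask only under the rq/pi locks. */
		p->cpus_ptr = cpumask_of(smp_processor_id());
		update_nr_migratory(p, -1);
		p->nr_cpus_allowed = 1;
		task_rq_unlock(rq, p, &rf);
	}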
Signed-off-by: Scott Wood
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Steven Rostedt (VMware)
---
 kernel/sched/core.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a29f33e776d0..d9a3f88508ee 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7250,9 +7250,8 @@ migrate_disable_update_cpus_allowed(struct task_struct *p)
 	struct rq *rq;
 	struct rq_flags rf;
 
-	p->cpus_ptr = cpumask_of(smp_processor_id());
-
 	rq = task_rq_lock(p, &rf);
+	p->cpus_ptr = cpumask_of(smp_processor_id());
 	update_nr_migratory(p, -1);
 	p->nr_cpus_allowed = 1;
 	task_rq_unlock(rq, p, &rf);
@@ -7264,9 +7263,8 @@ migrate_enable_update_cpus_allowed(struct task_struct *p)
 	struct rq *rq;
 	struct rq_flags rf;
 
-	p->cpus_ptr = &p->cpus_mask;
-
 	rq = task_rq_lock(p, &rf);
+	p->cpus_ptr = &p->cpus_mask;
 	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
 	update_nr_migratory(p, 1);
 	task_rq_unlock(rq, p, &rf);

From patchwork Fri Jan 17 17:41:29 2020
From: Steven Rostedt
Date: Fri, 17 Jan 2020 12:41:29 -0500
Subject: [PATCH RT 18/32] futex: Make the futex_hash_bucket spinlock_t again and bring back its old state
Message-Id: <20200117174130.113095376@goodmis.org>

4.19.94-rt39-rc1 stable review patch.
If anyone has any objections, please let me know.
------------------ From: Sebastian Andrzej Siewior [ Upstream commit 954ad80c23edfe71f4e8ce70b961eac884320c3a ] This is an all-in-one patch that reverts the patches: futex: Make the futex_hash_bucket lock raw futex: Delay deallocation of pi_state and adds back the old patches we had: futex: workaround migrate_disable/enable in different context rtmutex: Handle the various new futex race conditions futex: Fix bug on when a requeued RT task times out futex: Ensure lock/unlock symetry versus pi_lock and hash bucket lock Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- kernel/futex.c | 230 ++++++++++++++++++-------------- kernel/locking/rtmutex.c | 65 ++++++++- kernel/locking/rtmutex_common.h | 3 + 3 files changed, 194 insertions(+), 104 deletions(-) diff --git a/kernel/futex.c b/kernel/futex.c index 0b8cff8d9162..e815cf542b82 100644 --- a/kernel/futex.c +++ b/kernel/futex.c @@ -243,7 +243,7 @@ struct futex_q { struct plist_node list; struct task_struct *task; - raw_spinlock_t *lock_ptr; + spinlock_t *lock_ptr; union futex_key key; struct futex_pi_state *pi_state; struct rt_mutex_waiter *rt_waiter; @@ -264,7 +264,7 @@ static const struct futex_q futex_q_init = { */ struct futex_hash_bucket { atomic_t waiters; - raw_spinlock_t lock; + spinlock_t lock; struct plist_head chain; } ____cacheline_aligned_in_smp; @@ -825,13 +825,13 @@ static void get_pi_state(struct futex_pi_state *pi_state) * Drops a reference to the pi_state object and frees or caches it * when the last reference is gone. */ -static struct futex_pi_state *__put_pi_state(struct futex_pi_state *pi_state) +static void put_pi_state(struct futex_pi_state *pi_state) { if (!pi_state) - return NULL; + return; if (!atomic_dec_and_test(&pi_state->refcount)) - return NULL; + return; /* * If pi_state->owner is NULL, the owner is most probably dying @@ -851,7 +851,9 @@ static struct futex_pi_state *__put_pi_state(struct futex_pi_state *pi_state) raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); } - if (!current->pi_state_cache) { + if (current->pi_state_cache) { + kfree(pi_state); + } else { /* * pi_state->list is already empty. * clear pi_state->owner. 
@@ -860,30 +862,6 @@ static struct futex_pi_state *__put_pi_state(struct futex_pi_state *pi_state) pi_state->owner = NULL; atomic_set(&pi_state->refcount, 1); current->pi_state_cache = pi_state; - pi_state = NULL; - } - return pi_state; -} - -static void put_pi_state(struct futex_pi_state *pi_state) -{ - kfree(__put_pi_state(pi_state)); -} - -static void put_pi_state_atomic(struct futex_pi_state *pi_state, - struct list_head *to_free) -{ - if (__put_pi_state(pi_state)) - list_add(&pi_state->list, to_free); -} - -static void free_pi_state_list(struct list_head *to_free) -{ - struct futex_pi_state *p, *next; - - list_for_each_entry_safe(p, next, to_free, list) { - list_del(&p->list); - kfree(p); } } @@ -900,7 +878,6 @@ void exit_pi_state_list(struct task_struct *curr) struct futex_pi_state *pi_state; struct futex_hash_bucket *hb; union futex_key key = FUTEX_KEY_INIT; - LIST_HEAD(to_free); if (!futex_cmpxchg_enabled) return; @@ -934,7 +911,7 @@ void exit_pi_state_list(struct task_struct *curr) } raw_spin_unlock_irq(&curr->pi_lock); - raw_spin_lock(&hb->lock); + spin_lock(&hb->lock); raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); raw_spin_lock(&curr->pi_lock); /* @@ -944,8 +921,10 @@ void exit_pi_state_list(struct task_struct *curr) if (head->next != next) { /* retain curr->pi_lock for the loop invariant */ raw_spin_unlock(&pi_state->pi_mutex.wait_lock); - raw_spin_unlock(&hb->lock); - put_pi_state_atomic(pi_state, &to_free); + raw_spin_unlock_irq(&curr->pi_lock); + spin_unlock(&hb->lock); + raw_spin_lock_irq(&curr->pi_lock); + put_pi_state(pi_state); continue; } @@ -956,7 +935,7 @@ void exit_pi_state_list(struct task_struct *curr) raw_spin_unlock(&curr->pi_lock); raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); - raw_spin_unlock(&hb->lock); + spin_unlock(&hb->lock); rt_mutex_futex_unlock(&pi_state->pi_mutex); put_pi_state(pi_state); @@ -964,8 +943,6 @@ void exit_pi_state_list(struct task_struct *curr) raw_spin_lock_irq(&curr->pi_lock); } raw_spin_unlock_irq(&curr->pi_lock); - - free_pi_state_list(&to_free); } #endif @@ -1452,7 +1429,7 @@ static void __unqueue_futex(struct futex_q *q) { struct futex_hash_bucket *hb; - if (WARN_ON_SMP(!q->lock_ptr || !raw_spin_is_locked(q->lock_ptr)) + if (WARN_ON_SMP(!q->lock_ptr || !spin_is_locked(q->lock_ptr)) || WARN_ON(plist_node_empty(&q->list))) return; @@ -1580,21 +1557,21 @@ static inline void double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2) { if (hb1 <= hb2) { - raw_spin_lock(&hb1->lock); + spin_lock(&hb1->lock); if (hb1 < hb2) - raw_spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING); + spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING); } else { /* hb1 > hb2 */ - raw_spin_lock(&hb2->lock); - raw_spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING); + spin_lock(&hb2->lock); + spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING); } } static inline void double_unlock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2) { - raw_spin_unlock(&hb1->lock); + spin_unlock(&hb1->lock); if (hb1 != hb2) - raw_spin_unlock(&hb2->lock); + spin_unlock(&hb2->lock); } /* @@ -1622,7 +1599,7 @@ futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset) if (!hb_waiters_pending(hb)) goto out_put_key; - raw_spin_lock(&hb->lock); + spin_lock(&hb->lock); plist_for_each_entry_safe(this, next, &hb->chain, list) { if (match_futex (&this->key, &key)) { @@ -1641,7 +1618,7 @@ futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset) } } - raw_spin_unlock(&hb->lock); + spin_unlock(&hb->lock); 
wake_up_q(&wake_q); out_put_key: put_futex_key(&key); @@ -1948,7 +1925,6 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags, struct futex_hash_bucket *hb1, *hb2; struct futex_q *this, *next; DEFINE_WAKE_Q(wake_q); - LIST_HEAD(to_free); if (nr_wake < 0 || nr_requeue < 0) return -EINVAL; @@ -2176,6 +2152,16 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags, requeue_pi_wake_futex(this, &key2, hb2); drop_count++; continue; + } else if (ret == -EAGAIN) { + /* + * Waiter was woken by timeout or + * signal and has set pi_blocked_on to + * PI_WAKEUP_INPROGRESS before we + * tried to enqueue it on the rtmutex. + */ + this->pi_state = NULL; + put_pi_state(pi_state); + continue; } else if (ret) { /* * rt_mutex_start_proxy_lock() detected a @@ -2186,7 +2172,7 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags, * object. */ this->pi_state = NULL; - put_pi_state_atomic(pi_state, &to_free); + put_pi_state(pi_state); /* * We stop queueing more waiters and let user * space deal with the mess. @@ -2203,7 +2189,7 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags, * in futex_proxy_trylock_atomic() or in lookup_pi_state(). We * need to drop it here again. */ - put_pi_state_atomic(pi_state, &to_free); + put_pi_state(pi_state); out_unlock: double_unlock_hb(hb1, hb2); @@ -2224,7 +2210,6 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags, out_put_key1: put_futex_key(&key1); out: - free_pi_state_list(&to_free); return ret ? ret : task_count; } @@ -2248,7 +2233,7 @@ static inline struct futex_hash_bucket *queue_lock(struct futex_q *q) q->lock_ptr = &hb->lock; - raw_spin_lock(&hb->lock); /* implies smp_mb(); (A) */ + spin_lock(&hb->lock); /* implies smp_mb(); (A) */ return hb; } @@ -2256,7 +2241,7 @@ static inline void queue_unlock(struct futex_hash_bucket *hb) __releases(&hb->lock) { - raw_spin_unlock(&hb->lock); + spin_unlock(&hb->lock); hb_waiters_dec(hb); } @@ -2295,7 +2280,7 @@ static inline void queue_me(struct futex_q *q, struct futex_hash_bucket *hb) __releases(&hb->lock) { __queue_me(q, hb); - raw_spin_unlock(&hb->lock); + spin_unlock(&hb->lock); } /** @@ -2311,41 +2296,41 @@ static inline void queue_me(struct futex_q *q, struct futex_hash_bucket *hb) */ static int unqueue_me(struct futex_q *q) { - raw_spinlock_t *lock_ptr; + spinlock_t *lock_ptr; int ret = 0; /* In the common case we don't take the spinlock, which is nice. */ retry: /* - * q->lock_ptr can change between this read and the following - * raw_spin_lock. Use READ_ONCE to forbid the compiler from reloading - * q->lock_ptr and optimizing lock_ptr out of the logic below. + * q->lock_ptr can change between this read and the following spin_lock. + * Use READ_ONCE to forbid the compiler from reloading q->lock_ptr and + * optimizing lock_ptr out of the logic below. */ lock_ptr = READ_ONCE(q->lock_ptr); if (lock_ptr != NULL) { - raw_spin_lock(lock_ptr); + spin_lock(lock_ptr); /* * q->lock_ptr can change between reading it and - * raw_spin_lock(), causing us to take the wrong lock. This + * spin_lock(), causing us to take the wrong lock. This * corrects the race condition. * * Reasoning goes like this: if we have the wrong lock, * q->lock_ptr must have changed (maybe several times) - * between reading it and the raw_spin_lock(). It can - * change again after the raw_spin_lock() but only if it was - * already changed before the raw_spin_lock(). It cannot, + * between reading it and the spin_lock(). 
It can + * change again after the spin_lock() but only if it was + * already changed before the spin_lock(). It cannot, * however, change back to the original value. Therefore * we can detect whether we acquired the correct lock. */ if (unlikely(lock_ptr != q->lock_ptr)) { - raw_spin_unlock(lock_ptr); + spin_unlock(lock_ptr); goto retry; } __unqueue_futex(q); BUG_ON(q->pi_state); - raw_spin_unlock(lock_ptr); + spin_unlock(lock_ptr); ret = 1; } @@ -2361,16 +2346,13 @@ static int unqueue_me(struct futex_q *q) static void unqueue_me_pi(struct futex_q *q) __releases(q->lock_ptr) { - struct futex_pi_state *ps; - __unqueue_futex(q); BUG_ON(!q->pi_state); - ps = __put_pi_state(q->pi_state); + put_pi_state(q->pi_state); q->pi_state = NULL; - raw_spin_unlock(q->lock_ptr); - kfree(ps); + spin_unlock(q->lock_ptr); } static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, @@ -2503,7 +2485,7 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, */ handle_err: raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); - raw_spin_unlock(q->lock_ptr); + spin_unlock(q->lock_ptr); switch (err) { case -EFAULT: @@ -2521,7 +2503,7 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, break; } - raw_spin_lock(q->lock_ptr); + spin_lock(q->lock_ptr); raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); /* @@ -2617,7 +2599,7 @@ static void futex_wait_queue_me(struct futex_hash_bucket *hb, struct futex_q *q, /* * The task state is guaranteed to be set before another task can * wake it. set_current_state() is implemented using smp_store_mb() and - * queue_me() calls raw_spin_unlock() upon completion, both serializing + * queue_me() calls spin_unlock() upon completion, both serializing * access to the hash list and forcing another memory barrier. */ set_current_state(TASK_INTERRUPTIBLE); @@ -2908,7 +2890,15 @@ static int futex_lock_pi(u32 __user *uaddr, unsigned int flags, * before __rt_mutex_start_proxy_lock() is done. */ raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock); - raw_spin_unlock(q.lock_ptr); + /* + * the migrate_disable() here disables migration in the in_atomic() fast + * path which is enabled again in the following spin_unlock(). We have + * one migrate_disable() pending in the slow-path which is reversed + * after the raw_spin_unlock_irq() where we leave the atomic context. + */ + migrate_disable(); + + spin_unlock(q.lock_ptr); /* * __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter * such that futex_unlock_pi() is guaranteed to observe the waiter when @@ -2916,6 +2906,7 @@ static int futex_lock_pi(u32 __user *uaddr, unsigned int flags, */ ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current); raw_spin_unlock_irq(&q.pi_state->pi_mutex.wait_lock); + migrate_enable(); if (ret) { if (ret == 1) @@ -2929,7 +2920,7 @@ static int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ret = rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter); cleanup: - raw_spin_lock(q.lock_ptr); + spin_lock(q.lock_ptr); /* * If we failed to acquire the lock (deadlock/signal/timeout), we must * first acquire the hb->lock before removing the lock from the @@ -3030,7 +3021,7 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags) return ret; hb = hash_futex(&key); - raw_spin_lock(&hb->lock); + spin_lock(&hb->lock); /* * Check waiters first. We do not trust user space values at @@ -3064,10 +3055,19 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags) * rt_waiter. Also see the WARN in wake_futex_pi(). 
*/ raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); - raw_spin_unlock(&hb->lock); + /* + * Magic trickery for now to make the RT migrate disable + * logic happy. The following spin_unlock() happens with + * interrupts disabled so the internal migrate_enable() + * won't undo the migrate_disable() which was issued when + * locking hb->lock. + */ + migrate_disable(); + spin_unlock(&hb->lock); /* drops pi_state->pi_mutex.wait_lock */ ret = wake_futex_pi(uaddr, uval, pi_state); + migrate_enable(); put_pi_state(pi_state); @@ -3103,7 +3103,7 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags) * owner. */ if ((ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, 0))) { - raw_spin_unlock(&hb->lock); + spin_unlock(&hb->lock); switch (ret) { case -EFAULT: goto pi_faulted; @@ -3123,7 +3123,7 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags) ret = (curval == uval) ? 0 : -EAGAIN; out_unlock: - raw_spin_unlock(&hb->lock); + spin_unlock(&hb->lock); out_putkey: put_futex_key(&key); return ret; @@ -3239,7 +3239,7 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, struct hrtimer_sleeper timeout, *to = NULL; struct futex_pi_state *pi_state = NULL; struct rt_mutex_waiter rt_waiter; - struct futex_hash_bucket *hb; + struct futex_hash_bucket *hb, *hb2; union futex_key key2 = FUTEX_KEY_INIT; struct futex_q q = futex_q_init; int res, ret; @@ -3297,20 +3297,55 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, /* Queue the futex_q, drop the hb lock, wait for wakeup. */ futex_wait_queue_me(hb, &q, to); - raw_spin_lock(&hb->lock); - ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to); - raw_spin_unlock(&hb->lock); - if (ret) - goto out_put_keys; + /* + * On RT we must avoid races with requeue and trying to block + * on two mutexes (hb->lock and uaddr2's rtmutex) by + * serializing access to pi_blocked_on with pi_lock. + */ + raw_spin_lock_irq(¤t->pi_lock); + if (current->pi_blocked_on) { + /* + * We have been requeued or are in the process of + * being requeued. + */ + raw_spin_unlock_irq(¤t->pi_lock); + } else { + /* + * Setting pi_blocked_on to PI_WAKEUP_INPROGRESS + * prevents a concurrent requeue from moving us to the + * uaddr2 rtmutex. After that we can safely acquire + * (and possibly block on) hb->lock. + */ + current->pi_blocked_on = PI_WAKEUP_INPROGRESS; + raw_spin_unlock_irq(¤t->pi_lock); + + spin_lock(&hb->lock); + + /* + * Clean up pi_blocked_on. We might leak it otherwise + * when we succeeded with the hb->lock in the fast + * path. + */ + raw_spin_lock_irq(¤t->pi_lock); + current->pi_blocked_on = NULL; + raw_spin_unlock_irq(¤t->pi_lock); + + ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to); + spin_unlock(&hb->lock); + if (ret) + goto out_put_keys; + } /* - * In order for us to be here, we know our q.key == key2, and since - * we took the hb->lock above, we also know that futex_requeue() has - * completed and we no longer have to concern ourselves with a wakeup - * race with the atomic proxy lock acquisition by the requeue code. The - * futex_requeue dropped our key1 reference and incremented our key2 - * reference count. + * In order to be here, we have either been requeued, are in + * the process of being requeued, or requeue successfully + * acquired uaddr2 on our behalf. If pi_blocked_on was + * non-null above, we may be racing with a requeue. Do not + * rely on q->lock_ptr to be hb2->lock until after blocking on + * hb->lock or hb2->lock. 
The futex_requeue dropped our key1 + * reference and incremented our key2 reference count. */ + hb2 = hash_futex(&key2); /* Check if the requeue code acquired the second futex for us. */ if (!q.rt_waiter) { @@ -3319,9 +3354,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, * did a lock-steal - fix up the PI-state in that case. */ if (q.pi_state && (q.pi_state->owner != current)) { - struct futex_pi_state *ps_free; - - raw_spin_lock(q.lock_ptr); + spin_lock(&hb2->lock); + BUG_ON(&hb2->lock != q.lock_ptr); ret = fixup_pi_state_owner(uaddr2, &q, current); if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) { pi_state = q.pi_state; @@ -3331,9 +3365,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, * Drop the reference to the pi state which * the requeue_pi() code acquired for us. */ - ps_free = __put_pi_state(q.pi_state); - raw_spin_unlock(q.lock_ptr); - kfree(ps_free); + put_pi_state(q.pi_state); + spin_unlock(&hb2->lock); } } else { struct rt_mutex *pi_mutex; @@ -3347,7 +3380,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, pi_mutex = &q.pi_state->pi_mutex; ret = rt_mutex_wait_proxy_lock(pi_mutex, to, &rt_waiter); - raw_spin_lock(q.lock_ptr); + spin_lock(&hb2->lock); + BUG_ON(&hb2->lock != q.lock_ptr); if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter)) ret = 0; @@ -4014,7 +4048,7 @@ static int __init futex_init(void) for (i = 0; i < futex_hashsize; i++) { atomic_set(&futex_queues[i].waiters, 0); plist_head_init(&futex_queues[i].chain); - raw_spin_lock_init(&futex_queues[i].lock); + spin_lock_init(&futex_queues[i].lock); } return 0; diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c index 44a33057a83a..2a9bf2443acc 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c @@ -142,6 +142,12 @@ static void fixup_rt_mutex_waiters(struct rt_mutex *lock) WRITE_ONCE(*p, owner & ~RT_MUTEX_HAS_WAITERS); } +static int rt_mutex_real_waiter(struct rt_mutex_waiter *waiter) +{ + return waiter && waiter != PI_WAKEUP_INPROGRESS && + waiter != PI_REQUEUE_INPROGRESS; +} + /* * We can speed up the acquire/release, if there's no debugging state to be * set up. @@ -415,7 +421,8 @@ int max_lock_depth = 1024; static inline struct rt_mutex *task_blocked_on_lock(struct task_struct *p) { - return p->pi_blocked_on ? p->pi_blocked_on->lock : NULL; + return rt_mutex_real_waiter(p->pi_blocked_on) ? + p->pi_blocked_on->lock : NULL; } /* @@ -551,7 +558,7 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task, * reached or the state of the chain has changed while we * dropped the locks. */ - if (!waiter) + if (!rt_mutex_real_waiter(waiter)) goto out_unlock_pi; /* @@ -1321,6 +1328,22 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock, return -EDEADLK; raw_spin_lock(&task->pi_lock); + /* + * In the case of futex requeue PI, this will be a proxy + * lock. The task will wake unaware that it is enqueueed on + * this lock. Avoid blocking on two locks and corrupting + * pi_blocked_on via the PI_WAKEUP_INPROGRESS + * flag. futex_wait_requeue_pi() sets this when it wakes up + * before requeue (due to a signal or timeout). Do not enqueue + * the task if PI_WAKEUP_INPROGRESS is set. 
+ */ + if (task != current && task->pi_blocked_on == PI_WAKEUP_INPROGRESS) { + raw_spin_unlock(&task->pi_lock); + return -EAGAIN; + } + + BUG_ON(rt_mutex_real_waiter(task->pi_blocked_on)); + waiter->task = task; waiter->lock = lock; waiter->prio = task->prio; @@ -1344,7 +1367,7 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock, rt_mutex_enqueue_pi(owner, waiter); rt_mutex_adjust_prio(owner); - if (owner->pi_blocked_on) + if (rt_mutex_real_waiter(owner->pi_blocked_on)) chain_walk = 1; } else if (rt_mutex_cond_detect_deadlock(waiter, chwalk)) { chain_walk = 1; @@ -1444,7 +1467,7 @@ static void remove_waiter(struct rt_mutex *lock, { bool is_top_waiter = (waiter == rt_mutex_top_waiter(lock)); struct task_struct *owner = rt_mutex_owner(lock); - struct rt_mutex *next_lock; + struct rt_mutex *next_lock = NULL; lockdep_assert_held(&lock->wait_lock); @@ -1470,7 +1493,8 @@ static void remove_waiter(struct rt_mutex *lock, rt_mutex_adjust_prio(owner); /* Store the lock on which owner is blocked or NULL */ - next_lock = task_blocked_on_lock(owner); + if (rt_mutex_real_waiter(owner->pi_blocked_on)) + next_lock = task_blocked_on_lock(owner); raw_spin_unlock(&owner->pi_lock); @@ -1506,7 +1530,8 @@ void rt_mutex_adjust_pi(struct task_struct *task) raw_spin_lock_irqsave(&task->pi_lock, flags); waiter = task->pi_blocked_on; - if (!waiter || rt_mutex_waiter_equal(waiter, task_to_waiter(task))) { + if (!rt_mutex_real_waiter(waiter) || + rt_mutex_waiter_equal(waiter, task_to_waiter(task))) { raw_spin_unlock_irqrestore(&task->pi_lock, flags); return; } @@ -2325,6 +2350,34 @@ int __rt_mutex_start_proxy_lock(struct rt_mutex *lock, if (try_to_take_rt_mutex(lock, task, NULL)) return 1; +#ifdef CONFIG_PREEMPT_RT_FULL + /* + * In PREEMPT_RT there's an added race. + * If the task, that we are about to requeue, times out, + * it can set the PI_WAKEUP_INPROGRESS. This tells the requeue + * to skip this task. But right after the task sets + * its pi_blocked_on to PI_WAKEUP_INPROGRESS it can then + * block on the spin_lock(&hb->lock), which in RT is an rtmutex. + * This will replace the PI_WAKEUP_INPROGRESS with the actual + * lock that it blocks on. We *must not* place this task + * on this proxy lock in that case. + * + * To prevent this race, we first take the task's pi_lock + * and check if it has updated its pi_blocked_on. If it has, + * we assume that it woke up and we return -EAGAIN. + * Otherwise, we set the task's pi_blocked_on to + * PI_REQUEUE_INPROGRESS, so that if the task is waking up + * it will know that we are in the process of requeuing it. 
+ */ + raw_spin_lock(&task->pi_lock); + if (task->pi_blocked_on) { + raw_spin_unlock(&task->pi_lock); + return -EAGAIN; + } + task->pi_blocked_on = PI_REQUEUE_INPROGRESS; + raw_spin_unlock(&task->pi_lock); +#endif + /* We enforce deadlock detection for futexes */ ret = task_blocks_on_rt_mutex(lock, waiter, task, RT_MUTEX_FULL_CHAINWALK); diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h index 758dc43872e5..546aaf058b9e 100644 --- a/kernel/locking/rtmutex_common.h +++ b/kernel/locking/rtmutex_common.h @@ -132,6 +132,9 @@ enum rtmutex_chainwalk { /* * PI-futex support (proxy locking functions, etc.): */ +#define PI_WAKEUP_INPROGRESS ((struct rt_mutex_waiter *) 1) +#define PI_REQUEUE_INPROGRESS ((struct rt_mutex_waiter *) 2) + extern struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock); extern void rt_mutex_init_proxy_locked(struct rt_mutex *lock, struct task_struct *proxy_owner); From patchwork Fri Jan 17 17:41:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213277 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 26C94C33CB3 for ; Fri, 17 Jan 2020 17:42:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id F134724695 for ; Fri, 17 Jan 2020 17:42:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729640AbgAQRmi (ORCPT ); Fri, 17 Jan 2020 12:42:38 -0500 Received: from mail.kernel.org ([198.145.29.99]:40040 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729210AbgAQRlc (ORCPT ); Fri, 17 Jan 2020 12:41:32 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 96C932468B; Fri, 17 Jan 2020 17:41:31 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1isVcg-000QWP-H9; Fri, 17 Jan 2020 12:41:30 -0500 Message-Id: <20200117174130.415082386@goodmis.org> User-Agent: quilt/0.65 Date: Fri, 17 Jan 2020 12:41:31 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi , Andre Przywara , Julien Grall , Andrey Ryabinin Subject: [PATCH RT 20/32] lib/ubsan: Don't serialize UBSAN report References: <20200117174111.282847363@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc1 stable review patch. If anyone has any objections, please let me know. ------------------ From: Julien Grall [ Upstream commit 4702c28ac777b27acb499cbd5e8e787ce1a7d82d ] At the moment, UBSAN reports are serialized using a spin_lock(). On RT systems, spinlocks are turned into rt_spin_lock and may sleep.
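To make the failure mode concrete, here is a reduced before/after sketch of the report prologue (condensed from the lib/ubsan.c diff that follows; the pr_err() banner and the epilogue are elided):

/* Before: every report takes report_lock, a spinlock_t, which on
 * PREEMPT_RT is backed by a sleeping rt_spin_lock. Taking it from
 * atomic context triggers the might_sleep() splat quoted below. */
static DEFINE_SPINLOCK(report_lock);

static void ubsan_prologue(struct source_location *location,
			   unsigned long *flags)
{
	current->in_ubsan++;
	spin_lock_irqsave(&report_lock, *flags);	/* may sleep on RT */
	print_source_location("UBSAN: Undefined behaviour in", location);
}

/* After: no lock at all. Concurrent reports may interleave, but the
 * report path can no longer sleep, so it is safe in any context. */
static void ubsan_prologue(struct source_location *location)
{
	current->in_ubsan++;
	print_source_location("UBSAN: Undefined behaviour in", location);
}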
With a sleeping lock in the report path, this results in the following splat if the undefined behavior is hit in a context that cannot sleep: | BUG: sleeping function called from invalid context at /src/linux/kernel/locking/rtmutex.c:968 | in_atomic(): 1, irqs_disabled(): 128, pid: 3447, name: make | 1 lock held by make/3447: | #0: 000000009a966332 (&mm->mmap_sem){++++}, at: do_page_fault+0x140/0x4f8 | Preemption disabled at: | [] rt_mutex_futex_unlock+0x4c/0xb0 | CPU: 3 PID: 3447 Comm: make Tainted: G W 5.2.14-rt7-01890-ge6e057589653 #911 | Call trace: | dump_backtrace+0x0/0x148 | show_stack+0x14/0x20 | dump_stack+0xbc/0x104 | ___might_sleep+0x154/0x210 | rt_spin_lock+0x68/0xa0 | ubsan_prologue+0x30/0x68 | handle_overflow+0x64/0xe0 | __ubsan_handle_add_overflow+0x10/0x18 | __lock_acquire+0x1c28/0x2a28 | lock_acquire+0xf0/0x370 | _raw_spin_lock_irqsave+0x58/0x78 | rt_mutex_futex_unlock+0x4c/0xb0 | rt_spin_unlock+0x28/0x70 | get_page_from_freelist+0x428/0x2b60 | __alloc_pages_nodemask+0x174/0x1708 | alloc_pages_vma+0x1ac/0x238 | __handle_mm_fault+0x4ac/0x10b0 | handle_mm_fault+0x1d8/0x3b0 | do_page_fault+0x1c8/0x4f8 | do_translation_fault+0xb8/0xe0 | do_mem_abort+0x3c/0x98 | el0_da+0x20/0x24 The spin_lock() prevents multiple CPUs from printing a report at the same time, presumably so that the reports are not interleaved. However, they can still interleave with other messages (and even with the splat from __might_sleep), so the lock's usefulness seems pretty limited. Rather than trying to accommodate RT systems by switching to a raw_spin_lock(), the lock is now dropped completely. Link: https://lkml.kernel.org/r/20190920100835.14999-1-julien.grall@arm.com Reported-by: Andre Przywara Signed-off-by: Julien Grall Acked-by: Andrey Ryabinin Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- lib/ubsan.c | 64 +++++++++++++++++++---------------------------------- 1 file changed, 23 insertions(+), 41 deletions(-) diff --git a/lib/ubsan.c b/lib/ubsan.c index 1e9e2ab25539..5830cc9a2164 100644 --- a/lib/ubsan.c +++ b/lib/ubsan.c @@ -143,25 +143,21 @@ static void val_to_string(char *str, size_t size, struct type_descriptor *type, } } -static DEFINE_SPINLOCK(report_lock); - -static void ubsan_prologue(struct source_location *location, - unsigned long *flags) +static void ubsan_prologue(struct source_location *location) { current->in_ubsan++; - spin_lock_irqsave(&report_lock, *flags); pr_err("========================================" "========================================\n"); print_source_location("UBSAN: Undefined behaviour in", location); } -static void ubsan_epilogue(unsigned long *flags) +static void ubsan_epilogue(void) { dump_stack(); pr_err("========================================" "========================================\n"); - spin_unlock_irqrestore(&report_lock, *flags); + current->in_ubsan--; } @@ -170,14 +166,13 @@ static void handle_overflow(struct overflow_data *data, void *lhs, { struct type_descriptor *type = data->type; - unsigned long flags; char lhs_val_str[VALUE_LENGTH]; char rhs_val_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(lhs_val_str, sizeof(lhs_val_str), type, lhs); val_to_string(rhs_val_str, sizeof(rhs_val_str), type, rhs); @@ -189,7 +184,7 @@ static void handle_overflow(struct overflow_data *data, void *lhs, rhs_val_str, type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } void __ubsan_handle_add_overflow(struct overflow_data *data, @@ -217,20 +212,19 @@
EXPORT_SYMBOL(__ubsan_handle_mul_overflow); void __ubsan_handle_negate_overflow(struct overflow_data *data, void *old_val) { - unsigned long flags; char old_val_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(old_val_str, sizeof(old_val_str), data->type, old_val); pr_err("negation of %s cannot be represented in type %s:\n", old_val_str, data->type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_negate_overflow); @@ -238,13 +232,12 @@ EXPORT_SYMBOL(__ubsan_handle_negate_overflow); void __ubsan_handle_divrem_overflow(struct overflow_data *data, void *lhs, void *rhs) { - unsigned long flags; char rhs_val_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(rhs_val_str, sizeof(rhs_val_str), data->type, rhs); @@ -254,58 +247,52 @@ void __ubsan_handle_divrem_overflow(struct overflow_data *data, else pr_err("division by zero\n"); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_divrem_overflow); static void handle_null_ptr_deref(struct type_mismatch_data_common *data) { - unsigned long flags; - if (suppress_report(data->location)) return; - ubsan_prologue(data->location, &flags); + ubsan_prologue(data->location); pr_err("%s null pointer of type %s\n", type_check_kinds[data->type_check_kind], data->type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } static void handle_misaligned_access(struct type_mismatch_data_common *data, unsigned long ptr) { - unsigned long flags; - if (suppress_report(data->location)) return; - ubsan_prologue(data->location, &flags); + ubsan_prologue(data->location); pr_err("%s misaligned address %p for type %s\n", type_check_kinds[data->type_check_kind], (void *)ptr, data->type->type_name); pr_err("which requires %ld byte alignment\n", data->alignment); - ubsan_epilogue(&flags); + ubsan_epilogue(); } static void handle_object_size_mismatch(struct type_mismatch_data_common *data, unsigned long ptr) { - unsigned long flags; - if (suppress_report(data->location)) return; - ubsan_prologue(data->location, &flags); + ubsan_prologue(data->location); pr_err("%s address %p with insufficient space\n", type_check_kinds[data->type_check_kind], (void *) ptr); pr_err("for an object of type %s\n", data->type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } static void ubsan_type_mismatch_common(struct type_mismatch_data_common *data, @@ -369,25 +356,23 @@ EXPORT_SYMBOL(__ubsan_handle_vla_bound_not_positive); void __ubsan_handle_out_of_bounds(struct out_of_bounds_data *data, void *index) { - unsigned long flags; char index_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(index_str, sizeof(index_str), data->index_type, index); pr_err("index %s is out of range for type %s\n", index_str, data->array_type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_out_of_bounds); void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data, void *lhs, void *rhs) { - unsigned long flags; struct type_descriptor *rhs_type = data->rhs_type; struct type_descriptor *lhs_type = data->lhs_type; char rhs_str[VALUE_LENGTH]; @@ -396,7 +381,7 @@ void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data, if 
(suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(rhs_str, sizeof(rhs_str), rhs_type, rhs); val_to_string(lhs_str, sizeof(lhs_str), lhs_type, lhs); @@ -419,18 +404,16 @@ void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data, lhs_str, rhs_str, lhs_type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_shift_out_of_bounds); void __ubsan_handle_builtin_unreachable(struct unreachable_data *data) { - unsigned long flags; - - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); pr_err("calling __builtin_unreachable()\n"); - ubsan_epilogue(&flags); + ubsan_epilogue(); panic("can't return from __builtin_unreachable()"); } EXPORT_SYMBOL(__ubsan_handle_builtin_unreachable); @@ -438,19 +421,18 @@ EXPORT_SYMBOL(__ubsan_handle_builtin_unreachable); void __ubsan_handle_load_invalid_value(struct invalid_value_data *data, void *val) { - unsigned long flags; char val_str[VALUE_LENGTH]; if (suppress_report(&data->location)) return; - ubsan_prologue(&data->location, &flags); + ubsan_prologue(&data->location); val_to_string(val_str, sizeof(val_str), data->type, val); pr_err("load of value %s is not a valid value for type %s\n", val_str, data->type->type_name); - ubsan_epilogue(&flags); + ubsan_epilogue(); } EXPORT_SYMBOL(__ubsan_handle_load_invalid_value); From patchwork Fri Jan 17 17:41:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213279 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C7BA7C33CB7 for ; Fri, 17 Jan 2020 17:42:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 960E52467D for ; Fri, 17 Jan 2020 17:42:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729567AbgAQRmU (ORCPT ); Fri, 17 Jan 2020 12:42:20 -0500 Received: from mail.kernel.org ([198.145.29.99]:40012 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729246AbgAQRlc (ORCPT ); Fri, 17 Jan 2020 12:41:32 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 14AE62468C; Fri, 17 Jan 2020 17:41:32 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1isVcg-000QXx-Vn; Fri, 17 Jan 2020 12:41:31 -0500 Message-Id: <20200117174130.870953899@goodmis.org> User-Agent: quilt/0.65 Date: Fri, 17 Jan 2020 12:41:34 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi , Scott Wood Subject: [PATCH RT 23/32] sched: Lazy migrate_disable processing References: <20200117174111.282847363@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org 
Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc1 stable review patch. If anyone has any objections, please let me know. ------------------ From: Scott Wood [ Upstream commit 425c5b38779a860062aa62219dc920d374b13c17 ] Avoid overhead on the majority of migrate disable/enable sequences by only manipulating scheduler data (and grabbing the relevant locks) when the task actually schedules while migrate-disabled. A kernel build showed around a 10% reduction in system time (with CONFIG_NR_CPUS=512). Instead of cpuhp_pin_lock, CPU hotplug is handled by keeping a per-CPU count of the number of pinned tasks (including tasks which have not scheduled in the migrate-disabled section); takedown_cpu() will wait until that count reaches zero (confirmed by take_cpu_down() in stop-machine context to deal with races) before migrating tasks off the CPU. To simplify synchronization, updating cpus_mask is no longer deferred until migrate_enable(). This means we need not worry about migrate_enable() missing the update when it takes the fast path (i.e. the task did not schedule during the migrate-disabled section). It also makes the code a bit simpler and reduces deviation from mainline. While the main motivation for this is the performance benefit, lazy migrate disable also eliminates the restriction on calling migrate_disable() while atomic but leaving the atomic region prior to calling migrate_enable() -- though this won't help with local_bh_disable() (and thus rcutorture) unless something similar is done with the recently added local_lock. Signed-off-by: Scott Wood Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- include/linux/cpu.h | 4 - include/linux/sched.h | 11 +-- init/init_task.c | 4 + kernel/cpu.c | 103 +++++++++-------------- kernel/sched/core.c | 182 +++++++++++++++++------------------------ kernel/sched/sched.h | 4 + lib/smp_processor_id.c | 3 + 7 files changed, 129 insertions(+), 182 deletions(-) diff --git a/include/linux/cpu.h b/include/linux/cpu.h index e67645924404..87347ccbba0c 100644 --- a/include/linux/cpu.h +++ b/include/linux/cpu.h @@ -118,8 +118,6 @@ extern void cpu_hotplug_disable(void); extern void cpu_hotplug_enable(void); void clear_tasks_mm_cpumask(int cpu); int cpu_down(unsigned int cpu); -extern void pin_current_cpu(void); -extern void unpin_current_cpu(void); #else /* CONFIG_HOTPLUG_CPU */ @@ -131,8 +129,6 @@ static inline int cpus_read_trylock(void) { return true; } static inline void lockdep_assert_cpus_held(void) { } static inline void cpu_hotplug_disable(void) { } static inline void cpu_hotplug_enable(void) { } -static inline void pin_current_cpu(void) { } -static inline void unpin_current_cpu(void) { } #endif /* !CONFIG_HOTPLUG_CPU */ diff --git a/include/linux/sched.h b/include/linux/sched.h index 3c213ec3d3b5..392a1ed7efd2 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -227,6 +227,8 @@ extern void io_schedule_finish(int token); extern long io_schedule_timeout(long timeout); extern void io_schedule(void); +int cpu_nr_pinned(int cpu); + /** * struct prev_cputime - snapshot of system and user cputime * @utime: time spent in user mode @@ -670,16 +672,13 @@ struct task_struct { cpumask_t cpus_mask; #if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) int migrate_disable; - int migrate_disable_update; - int pinned_on_cpu; + bool migrate_disable_scheduled; # ifdef CONFIG_SCHED_DEBUG - int migrate_disable_atomic; + int pinned_on_cpu; # endif - #elif !defined(CONFIG_SMP) &&
defined(CONFIG_PREEMPT_RT_BASE) # ifdef CONFIG_SCHED_DEBUG int migrate_disable; - int migrate_disable_atomic; # endif #endif #ifdef CONFIG_PREEMPT_RT_FULL @@ -2058,4 +2057,6 @@ static inline void rseq_syscall(struct pt_regs *regs) #endif +extern struct task_struct *takedown_cpu_task; + #endif diff --git a/init/init_task.c b/init/init_task.c index 9e3362748214..4e5af4616dbd 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -80,6 +80,10 @@ struct task_struct init_task .cpus_ptr = &init_task.cpus_mask, .cpus_mask = CPU_MASK_ALL, .nr_cpus_allowed= NR_CPUS, +#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) && \ + defined(CONFIG_SCHED_DEBUG) + .pinned_on_cpu = -1, +#endif .mm = NULL, .active_mm = &init_mm, .restart_block = { diff --git a/kernel/cpu.c b/kernel/cpu.c index 7170fbd35a22..5366c8c69c2f 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -75,11 +75,6 @@ static DEFINE_PER_CPU(struct cpuhp_cpu_state, cpuhp_state) = { .fail = CPUHP_INVALID, }; -#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PREEMPT_RT_FULL) -static DEFINE_PER_CPU(struct rt_rw_lock, cpuhp_pin_lock) = \ - __RWLOCK_RT_INITIALIZER(cpuhp_pin_lock); -#endif - #if defined(CONFIG_LOCKDEP) && defined(CONFIG_SMP) static struct lockdep_map cpuhp_state_up_map = STATIC_LOCKDEP_MAP_INIT("cpuhp_state-up", &cpuhp_state_up_map); @@ -286,57 +281,6 @@ static int cpu_hotplug_disabled; #ifdef CONFIG_HOTPLUG_CPU -/** - * pin_current_cpu - Prevent the current cpu from being unplugged - */ -void pin_current_cpu(void) -{ -#ifdef CONFIG_PREEMPT_RT_FULL - struct rt_rw_lock *cpuhp_pin; - unsigned int cpu; - int ret; - -again: - cpuhp_pin = this_cpu_ptr(&cpuhp_pin_lock); - ret = __read_rt_trylock(cpuhp_pin); - if (ret) { - current->pinned_on_cpu = smp_processor_id(); - return; - } - cpu = smp_processor_id(); - preempt_lazy_enable(); - preempt_enable(); - - sleeping_lock_inc(); - __read_rt_lock(cpuhp_pin); - sleeping_lock_dec(); - - preempt_disable(); - preempt_lazy_disable(); - if (cpu != smp_processor_id()) { - __read_rt_unlock(cpuhp_pin); - goto again; - } - current->pinned_on_cpu = cpu; -#endif -} - -/** - * unpin_current_cpu - Allow unplug of current cpu - */ -void unpin_current_cpu(void) -{ -#ifdef CONFIG_PREEMPT_RT_FULL - struct rt_rw_lock *cpuhp_pin = this_cpu_ptr(&cpuhp_pin_lock); - - if (WARN_ON(current->pinned_on_cpu != smp_processor_id())) - cpuhp_pin = per_cpu_ptr(&cpuhp_pin_lock, current->pinned_on_cpu); - - current->pinned_on_cpu = -1; - __read_rt_unlock(cpuhp_pin); -#endif -} - DEFINE_STATIC_PERCPU_RWSEM(cpu_hotplug_lock); void cpus_read_lock(void) @@ -866,6 +810,15 @@ static int take_cpu_down(void *_param) int err, cpu = smp_processor_id(); int ret; +#ifdef CONFIG_PREEMPT_RT_BASE + /* + * If any tasks disabled migration before we got here, + * go back and sleep again. + */ + if (cpu_nr_pinned(cpu)) + return -EAGAIN; +#endif + /* Ensure this CPU doesn't handle any more interrupts. 
*/ err = __cpu_disable(); if (err < 0) @@ -893,11 +846,10 @@ static int take_cpu_down(void *_param) return 0; } +struct task_struct *takedown_cpu_task; + static int takedown_cpu(unsigned int cpu) { -#ifdef CONFIG_PREEMPT_RT_FULL - struct rt_rw_lock *cpuhp_pin = per_cpu_ptr(&cpuhp_pin_lock, cpu); -#endif struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu); int err; @@ -910,17 +862,38 @@ static int takedown_cpu(unsigned int cpu) */ irq_lock_sparse(); -#ifdef CONFIG_PREEMPT_RT_FULL - __write_rt_lock(cpuhp_pin); +#ifdef CONFIG_PREEMPT_RT_BASE + WARN_ON_ONCE(takedown_cpu_task); + takedown_cpu_task = current; + +again: + /* + * If a task pins this CPU after we pass this check, take_cpu_down + * will return -EAGAIN. + */ + for (;;) { + int nr_pinned; + + set_current_state(TASK_UNINTERRUPTIBLE); + nr_pinned = cpu_nr_pinned(cpu); + if (nr_pinned == 0) + break; + schedule(); + } + set_current_state(TASK_RUNNING); #endif /* * So now all preempt/rcu users must observe !cpu_active(). */ err = stop_machine_cpuslocked(take_cpu_down, NULL, cpumask_of(cpu)); +#ifdef CONFIG_PREEMPT_RT_BASE + if (err == -EAGAIN) + goto again; +#endif if (err) { -#ifdef CONFIG_PREEMPT_RT_FULL - __write_rt_unlock(cpuhp_pin); +#ifdef CONFIG_PREEMPT_RT_BASE + takedown_cpu_task = NULL; #endif /* CPU refused to die */ irq_unlock_sparse(); @@ -940,8 +913,8 @@ static int takedown_cpu(unsigned int cpu) wait_for_ap_thread(st, false); BUG_ON(st->state != CPUHP_AP_IDLE_DEAD); -#ifdef CONFIG_PREEMPT_RT_FULL - __write_rt_unlock(cpuhp_pin); +#ifdef CONFIG_PREEMPT_RT_BASE + takedown_cpu_task = NULL; #endif /* Interrupts are moved away from the dying cpu, reenable alloc/free */ irq_unlock_sparse(); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 6fd3f7b4d7d8..e97ac751aad2 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1065,7 +1065,8 @@ static int migration_cpu_stop(void *data) void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask) { cpumask_copy(&p->cpus_mask, new_mask); - p->nr_cpus_allowed = cpumask_weight(new_mask); + if (p->cpus_ptr == &p->cpus_mask) + p->nr_cpus_allowed = cpumask_weight(new_mask); } #if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) @@ -1076,8 +1077,7 @@ int __migrate_disabled(struct task_struct *p) EXPORT_SYMBOL_GPL(__migrate_disabled); #endif -static void __do_set_cpus_allowed_tail(struct task_struct *p, - const struct cpumask *new_mask) +void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask) { struct rq *rq = task_rq(p); bool queued, running; @@ -1106,20 +1106,6 @@ static void __do_set_cpus_allowed_tail(struct task_struct *p, set_curr_task(rq, p); } -void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask) -{ -#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) - if (__migrate_disabled(p)) { - lockdep_assert_held(&p->pi_lock); - - cpumask_copy(&p->cpus_mask, new_mask); - p->migrate_disable_update = 1; - return; - } -#endif - __do_set_cpus_allowed_tail(p, new_mask); -} - /* * Change a given task's CPU affinity. Migrate the thread to a * proper CPU and schedule it away if the CPU it's executing on @@ -1179,7 +1165,8 @@ static int __set_cpus_allowed_ptr(struct task_struct *p, } /* Can the task run on the task's current CPU? 
If so, we're done */ - if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p)) + if (cpumask_test_cpu(task_cpu(p), new_mask) || + p->cpus_ptr != &p->cpus_mask) goto out; if (task_running(rq, p) || p->state == TASK_WAKING) { @@ -3454,6 +3441,8 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf) BUG(); } +static void migrate_disabled_sched(struct task_struct *p); + /* * __schedule() is the main scheduler function. * @@ -3524,6 +3513,9 @@ static void __sched notrace __schedule(bool preempt) rq_lock(rq, &rf); smp_mb__after_spinlock(); + if (__migrate_disabled(prev)) + migrate_disabled_sched(prev); + /* Promote REQ to ACT */ rq->clock_update_flags <<= 1; update_rq_clock(rq); @@ -5779,6 +5771,8 @@ static void migrate_tasks(struct rq *dead_rq, struct rq_flags *rf) BUG_ON(!next); put_prev_task(rq, next); + WARN_ON_ONCE(__migrate_disabled(next)); + /* * Rules for changing task_struct::cpus_mask are holding * both pi_lock and rq->lock, such that holding either @@ -7247,14 +7241,9 @@ update_nr_migratory(struct task_struct *p, long delta) static inline void migrate_disable_update_cpus_allowed(struct task_struct *p) { - struct rq *rq; - struct rq_flags rf; - - rq = task_rq_lock(p, &rf); p->cpus_ptr = cpumask_of(smp_processor_id()); update_nr_migratory(p, -1); p->nr_cpus_allowed = 1; - task_rq_unlock(rq, p, &rf); } static inline void @@ -7272,54 +7261,35 @@ migrate_enable_update_cpus_allowed(struct task_struct *p) void migrate_disable(void) { - struct task_struct *p = current; + preempt_disable(); - if (in_atomic() || irqs_disabled()) { + if (++current->migrate_disable == 1) { + this_rq()->nr_pinned++; + preempt_lazy_disable(); #ifdef CONFIG_SCHED_DEBUG - p->migrate_disable_atomic++; + WARN_ON_ONCE(current->pinned_on_cpu >= 0); + current->pinned_on_cpu = smp_processor_id(); #endif - return; - } -#ifdef CONFIG_SCHED_DEBUG - if (unlikely(p->migrate_disable_atomic)) { - tracing_off(); - WARN_ON_ONCE(1); } -#endif - if (p->migrate_disable) { - p->migrate_disable++; - return; - } + preempt_enable(); +} +EXPORT_SYMBOL(migrate_disable); - preempt_disable(); - preempt_lazy_disable(); - pin_current_cpu(); +static void migrate_disabled_sched(struct task_struct *p) +{ + if (p->migrate_disable_scheduled) + return; migrate_disable_update_cpus_allowed(p); - p->migrate_disable = 1; - - preempt_enable(); + p->migrate_disable_scheduled = 1; } -EXPORT_SYMBOL(migrate_disable); void migrate_enable(void) { struct task_struct *p = current; - - if (in_atomic() || irqs_disabled()) { -#ifdef CONFIG_SCHED_DEBUG - p->migrate_disable_atomic--; -#endif - return; - } - -#ifdef CONFIG_SCHED_DEBUG - if (unlikely(p->migrate_disable_atomic)) { - tracing_off(); - WARN_ON_ONCE(1); - } -#endif + struct rq *rq = this_rq(); + int cpu = task_cpu(p); WARN_ON_ONCE(p->migrate_disable <= 0); if (p->migrate_disable > 1) { @@ -7329,67 +7299,69 @@ void migrate_enable(void) preempt_disable(); +#ifdef CONFIG_SCHED_DEBUG + WARN_ON_ONCE(current->pinned_on_cpu != cpu); + current->pinned_on_cpu = -1; +#endif + + WARN_ON_ONCE(rq->nr_pinned < 1); + p->migrate_disable = 0; + rq->nr_pinned--; + if (rq->nr_pinned == 0 && unlikely(!cpu_active(cpu)) && + takedown_cpu_task) + wake_up_process(takedown_cpu_task); + + if (!p->migrate_disable_scheduled) + goto out; + + p->migrate_disable_scheduled = 0; + migrate_enable_update_cpus_allowed(p); - if (p->migrate_disable_update) { - struct rq *rq; + WARN_ON(smp_processor_id() != cpu); + if (!is_cpu_allowed(p, cpu)) { + struct migration_arg arg = { p }; struct rq_flags rf; - int 
cpu = task_cpu(p); rq = task_rq_lock(p, &rf); update_rq_clock(rq); - - __do_set_cpus_allowed_tail(p, &p->cpus_mask); + arg.dest_cpu = select_fallback_rq(cpu, p); task_rq_unlock(rq, p, &rf); - p->migrate_disable_update = 0; - - WARN_ON(smp_processor_id() != cpu); - if (!cpumask_test_cpu(cpu, &p->cpus_mask)) { - struct migration_arg arg = { p }; - struct rq_flags rf; + preempt_lazy_enable(); + preempt_enable(); - rq = task_rq_lock(p, &rf); - update_rq_clock(rq); - arg.dest_cpu = select_fallback_rq(cpu, p); - task_rq_unlock(rq, p, &rf); - - unpin_current_cpu(); - preempt_lazy_enable(); - preempt_enable(); - - sleeping_lock_inc(); - stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg); - sleeping_lock_dec(); - tlb_migrate_finish(p->mm); + sleeping_lock_inc(); + stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg); + sleeping_lock_dec(); + tlb_migrate_finish(p->mm); - return; - } + return; } - unpin_current_cpu(); + +out: preempt_lazy_enable(); preempt_enable(); } EXPORT_SYMBOL(migrate_enable); -#elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) -void migrate_disable(void) +int cpu_nr_pinned(int cpu) { -#ifdef CONFIG_SCHED_DEBUG - struct task_struct *p = current; + struct rq *rq = cpu_rq(cpu); - if (in_atomic() || irqs_disabled()) { - p->migrate_disable_atomic++; - return; - } + return rq->nr_pinned; +} - if (unlikely(p->migrate_disable_atomic)) { - tracing_off(); - WARN_ON_ONCE(1); - } +#elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) +static void migrate_disabled_sched(struct task_struct *p) +{ +} - p->migrate_disable++; +void migrate_disable(void) +{ +#ifdef CONFIG_SCHED_DEBUG + current->migrate_disable++; #endif barrier(); } @@ -7400,20 +7372,14 @@ void migrate_enable(void) #ifdef CONFIG_SCHED_DEBUG struct task_struct *p = current; - if (in_atomic() || irqs_disabled()) { - p->migrate_disable_atomic--; - return; - } - - if (unlikely(p->migrate_disable_atomic)) { - tracing_off(); - WARN_ON_ONCE(1); - } - WARN_ON_ONCE(p->migrate_disable <= 0); p->migrate_disable--; #endif barrier(); } EXPORT_SYMBOL(migrate_enable); +#else +static void migrate_disabled_sched(struct task_struct *p) +{ +} #endif diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index c90574112bca..78fa5911dd55 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -913,6 +913,10 @@ struct rq { /* Must be inspected within a rcu lock section */ struct cpuidle_state *idle_state; #endif + +#if defined(CONFIG_PREEMPT_RT_BASE) && defined(CONFIG_SMP) + int nr_pinned; +#endif }; static inline int cpu_of(struct rq *rq) diff --git a/lib/smp_processor_id.c b/lib/smp_processor_id.c index b8a8a8db2d75..0c80992aa337 100644 --- a/lib/smp_processor_id.c +++ b/lib/smp_processor_id.c @@ -22,6 +22,9 @@ notrace static unsigned int check_preemption_disabled(const char *what1, * Kernel threads bound to a single CPU can safely use * smp_processor_id(): */ + if (current->migrate_disable) + goto out; + if (current->nr_cpus_allowed == 1) goto out; From patchwork Fri Jan 17 17:41:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213278 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org 
[198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C4B46C33C9E for ; Fri, 17 Jan 2020 17:42:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id A734F2467D for ; Fri, 17 Jan 2020 17:42:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729616AbgAQRmc (ORCPT ); Fri, 17 Jan 2020 12:42:32 -0500 Received: from mail.kernel.org ([198.145.29.99]:40040 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729267AbgAQRlc (ORCPT ); Fri, 17 Jan 2020 12:41:32 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 7A0D324690; Fri, 17 Jan 2020 17:41:32 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1isVch-000QZV-Dy; Fri, 17 Jan 2020 12:41:31 -0500 Message-Id: <20200117174131.301899171@goodmis.org> User-Agent: quilt/0.65 Date: Fri, 17 Jan 2020 12:41:37 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 26/32] Revert "ARM: Initialize split page table locks for vector page" References: <20200117174111.282847363@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc1 stable review patch. If anyone has any objections, please let me know. ------------------ From: Sebastian Andrzej Siewior [ Upstream commit 247074c44d8c3e619dfde6404a52295d8d671d38 ] I'm dropping this patch, with its original description: |ARM: Initialize split page table locks for vector page | |Without this patch, ARM can not use SPLIT_PTLOCK_CPUS if |PREEMPT_RT_FULL=y because vectors_user_mapping() creates a |VM_ALWAYSDUMP mapping of the vector page (address 0xffff0000), but no |ptl->lock has been allocated for the page. An attempt to coredump |that page will result in a kernel NULL pointer dereference when |follow_page() attempts to lock the page. | |The call tree to the NULL pointer dereference is: | | do_notify_resume() | get_signal_to_deliver() | do_coredump() | elf_core_dump() | get_dump_page() | __get_user_pages() | follow_page() | pte_offset_map_lock() <----- a #define | ... | rt_spin_lock() | |The underlying problem is exposed by mm-shrink-the-page-frame-to-rt-size.patch. The patch named mm-shrink-the-page-frame-to-rt-size.patch was dropped from the RT queue once the SPLIT_PTLOCK_CPUS feature (in a slightly different shape) went upstream (somewhere between v3.12 and v3.14). I can see that the patch still allocates a lock which wasn't there before. However, I can't trigger a kernel oops as described in the patch by forcing a coredump. Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- arch/arm/kernel/process.c | 24 ------------------------ 1 file changed, 24 deletions(-) diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c index 8d3c7ce34c24..82ab015bf42b 100644 --- a/arch/arm/kernel/process.c +++ b/arch/arm/kernel/process.c @@ -324,30 +324,6 @@ unsigned long arch_randomize_brk(struct mm_struct *mm) } #ifdef CONFIG_MMU -/* - * CONFIG_SPLIT_PTLOCK_CPUS results in a page->ptl lock.
If the lock is not - initialized by pgtable_page_ctor() then a coredump of the vector page will - fail. - */ -static int __init vectors_user_mapping_init_page(void) -{ - struct page *page; - unsigned long addr = 0xffff0000; - pgd_t *pgd; - pud_t *pud; - pmd_t *pmd; - - pgd = pgd_offset_k(addr); - pud = pud_offset(pgd, addr); - pmd = pmd_offset(pud, addr); - page = pmd_page(*(pmd)); - - pgtable_page_ctor(page); - - return 0; -} -late_initcall(vectors_user_mapping_init_page); - #ifdef CONFIG_KUSER_HELPERS /* * The vectors page is always readable from user space for the From patchwork Fri Jan 17 17:41:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213280 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 82505C33C9E for ; Fri, 17 Jan 2020 17:42:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 645842467D for ; Fri, 17 Jan 2020 17:42:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729496AbgAQRmI (ORCPT ); Fri, 17 Jan 2020 12:42:08 -0500 Received: from mail.kernel.org ([198.145.29.99]:40600 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729274AbgAQRld (ORCPT ); Fri, 17 Jan 2020 12:41:33 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id C3F4D24692; Fri, 17 Jan 2020 17:41:32 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1isVch-000QaX-NX; Fri, 17 Jan 2020 12:41:31 -0500 Message-Id: <20200117174131.609781576@goodmis.org> User-Agent: quilt/0.65 Date: Fri, 17 Jan 2020 12:41:39 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 28/32] locking: Make spinlock_t and rwlock_t an RCU section on RT References: <20200117174111.282847363@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc1 stable review patch. If anyone has any objections, please let me know. ------------------ From: Sebastian Andrzej Siewior [ Upstream commit 84440022a0e1c8c936d61f8f97593674a295d409 ] On !RT, a locked spinlock_t or rwlock_t disables preemption, which implies an RCU read section. There is code that relies on that behaviour. Add an explicit RCU read section on RT while a sleeping lock (a lock which would disable preemption on !RT) is held.
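The change itself is mechanical: every lock path of a sleeping lock gains an rcu_read_lock() and every unlock path the matching rcu_read_unlock(). A condensed sketch of the resulting spinlock_t paths (reassembled from the rtmutex.c hunks below):

void __lockfunc rt_spin_lock(spinlock_t *lock)
{
	sleeping_lock_inc();
	rcu_read_lock();	/* a spinlock section is an RCU section again */
	migrate_disable();
	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
	rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock);
}

void __lockfunc rt_spin_unlock(spinlock_t *lock)
{
	spin_release(&lock->dep_map, 1, _RET_IP_);
	rt_spin_lock_fastunlock(&lock->lock, rt_spin_lock_slowunlock);
	migrate_enable();
	rcu_read_unlock();	/* leave the RCU section on the way out */
	sleeping_lock_dec();
}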
Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- kernel/locking/rtmutex.c | 6 ++++++ kernel/locking/rwlock-rt.c | 6 ++++++ 2 files changed, 12 insertions(+) diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c index 63b3d6f306fa..c7d3ae01b4e5 100644 --- a/kernel/locking/rtmutex.c +++ b/kernel/locking/rtmutex.c @@ -1142,6 +1142,7 @@ void __sched rt_spin_lock_slowunlock(struct rt_mutex *lock) void __lockfunc rt_spin_lock(spinlock_t *lock) { sleeping_lock_inc(); + rcu_read_lock(); migrate_disable(); spin_acquire(&lock->dep_map, 0, 0, _RET_IP_); rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock); @@ -1157,6 +1158,7 @@ void __lockfunc __rt_spin_lock(struct rt_mutex *lock) void __lockfunc rt_spin_lock_nested(spinlock_t *lock, int subclass) { sleeping_lock_inc(); + rcu_read_lock(); migrate_disable(); spin_acquire(&lock->dep_map, subclass, 0, _RET_IP_); rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock); @@ -1170,6 +1172,7 @@ void __lockfunc rt_spin_unlock(spinlock_t *lock) spin_release(&lock->dep_map, 1, _RET_IP_); rt_spin_lock_fastunlock(&lock->lock, rt_spin_lock_slowunlock); migrate_enable(); + rcu_read_unlock(); sleeping_lock_dec(); } EXPORT_SYMBOL(rt_spin_unlock); @@ -1201,6 +1204,7 @@ int __lockfunc rt_spin_trylock(spinlock_t *lock) ret = __rt_mutex_trylock(&lock->lock); if (ret) { spin_acquire(&lock->dep_map, 0, 1, _RET_IP_); + rcu_read_lock(); } else { migrate_enable(); sleeping_lock_dec(); @@ -1217,6 +1221,7 @@ int __lockfunc rt_spin_trylock_bh(spinlock_t *lock) ret = __rt_mutex_trylock(&lock->lock); if (ret) { sleeping_lock_inc(); + rcu_read_lock(); migrate_disable(); spin_acquire(&lock->dep_map, 0, 1, _RET_IP_); } else @@ -1233,6 +1238,7 @@ int __lockfunc rt_spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags) ret = __rt_mutex_trylock(&lock->lock); if (ret) { sleeping_lock_inc(); + rcu_read_lock(); migrate_disable(); spin_acquire(&lock->dep_map, 0, 1, _RET_IP_); } diff --git a/kernel/locking/rwlock-rt.c b/kernel/locking/rwlock-rt.c index c3b91205161c..0ae8c62ea832 100644 --- a/kernel/locking/rwlock-rt.c +++ b/kernel/locking/rwlock-rt.c @@ -310,6 +310,7 @@ int __lockfunc rt_read_trylock(rwlock_t *rwlock) ret = do_read_rt_trylock(rwlock); if (ret) { rwlock_acquire_read(&rwlock->dep_map, 0, 1, _RET_IP_); + rcu_read_lock(); } else { migrate_enable(); sleeping_lock_dec(); @@ -327,6 +328,7 @@ int __lockfunc rt_write_trylock(rwlock_t *rwlock) ret = do_write_rt_trylock(rwlock); if (ret) { rwlock_acquire(&rwlock->dep_map, 0, 1, _RET_IP_); + rcu_read_lock(); } else { migrate_enable(); sleeping_lock_dec(); @@ -338,6 +340,7 @@ EXPORT_SYMBOL(rt_write_trylock); void __lockfunc rt_read_lock(rwlock_t *rwlock) { sleeping_lock_inc(); + rcu_read_lock(); migrate_disable(); rwlock_acquire_read(&rwlock->dep_map, 0, 0, _RET_IP_); do_read_rt_lock(rwlock); @@ -347,6 +350,7 @@ EXPORT_SYMBOL(rt_read_lock); void __lockfunc rt_write_lock(rwlock_t *rwlock) { sleeping_lock_inc(); + rcu_read_lock(); migrate_disable(); rwlock_acquire(&rwlock->dep_map, 0, 0, _RET_IP_); do_write_rt_lock(rwlock); @@ -358,6 +362,7 @@ void __lockfunc rt_read_unlock(rwlock_t *rwlock) rwlock_release(&rwlock->dep_map, 1, _RET_IP_); do_read_rt_unlock(rwlock); migrate_enable(); + rcu_read_unlock(); sleeping_lock_dec(); } EXPORT_SYMBOL(rt_read_unlock); @@ -367,6 +372,7 @@ void __lockfunc rt_write_unlock(rwlock_t *rwlock) rwlock_release(&rwlock->dep_map, 1, _RET_IP_); do_write_rt_unlock(rwlock); migrate_enable(); + rcu_read_unlock(); sleeping_lock_dec(); } 
EXPORT_SYMBOL(rt_write_unlock); From patchwork Fri Jan 17 17:41:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213281 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 99CA2C33CB6 for ; Fri, 17 Jan 2020 17:42:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 7024424691 for ; Fri, 17 Jan 2020 17:42:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729522AbgAQRmJ (ORCPT ); Fri, 17 Jan 2020 12:42:09 -0500 Received: from mail.kernel.org ([198.145.29.99]:40040 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729281AbgAQRld (ORCPT ); Fri, 17 Jan 2020 12:41:33 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 14B3524683; Fri, 17 Jan 2020 17:41:33 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1isVci-000QbZ-0K; Fri, 17 Jan 2020 12:41:32 -0500 Message-Id: <20200117174131.893547195@goodmis.org> User-Agent: quilt/0.65 Date: Fri, 17 Jan 2020 12:41:41 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi , Daniel Wagner Subject: [PATCH RT 30/32] lib/smp_processor_id: Adjust check_preemption_disabled() References: <20200117174111.282847363@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc1 stable review patch. If anyone has any objections, please let me know. ------------------ From: Daniel Wagner [ Upstream commit af3c1c5fdf177870fb5e6e16b24e374696ab28f5 ] The current->migrate_disable counter is not always defined, leading to build failures with DEBUG_PREEMPT && !PREEMPT_RT_BASE. Restrict the access to ->migrate_disable to the same set of configurations in which ->migrate_disable is modified.
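For reference, task_struct::migrate_disable exists in this tree only in the configurations listed below (see the include/linux/sched.h hunk in patch 23 of this series); the guard added by the diff that follows mirrors exactly that set:

/*
 * task_struct::migrate_disable is present only when
 *   CONFIG_PREEMPT_RT_BASE && CONFIG_SMP, or
 *   CONFIG_PREEMPT_RT_BASE && !CONFIG_SMP && CONFIG_SCHED_DEBUG,
 * so check_preemption_disabled() may touch it only under the same
 * preprocessor condition:
 */
#if defined(CONFIG_PREEMPT_RT_BASE) && \
	(defined(CONFIG_SMP) || defined(CONFIG_SCHED_DEBUG))
	if (current->migrate_disable)
		goto out;
#endif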
Signed-off-by: Daniel Wagner Signed-off-by: Steven Rostedt (VMware) [bigeasy: adjust condition + description] Signed-off-by: Sebastian Andrzej Siewior --- lib/smp_processor_id.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/lib/smp_processor_id.c b/lib/smp_processor_id.c index 0c80992aa337..2e7398534b66 100644 --- a/lib/smp_processor_id.c +++ b/lib/smp_processor_id.c @@ -22,8 +22,10 @@ notrace static unsigned int check_preemption_disabled(const char *what1, * Kernel threads bound to a single CPU can safely use * smp_processor_id(): */ +#if defined(CONFIG_PREEMPT_RT_BASE) && (defined(CONFIG_SMP) || defined(CONFIG_SCHED_DEBUG)) if (current->migrate_disable) goto out; +#endif if (current->nr_cpus_allowed == 1) goto out; From patchwork Fri Jan 17 17:41:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213283 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-3.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS autolearn=no autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6B063C33C9E for ; Fri, 17 Jan 2020 17:41:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id AC2C7246A7 for ; Fri, 17 Jan 2020 17:41:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729305AbgAQRle (ORCPT ); Fri, 17 Jan 2020 12:41:34 -0500 Received: from mail.kernel.org ([198.145.29.99]:40012 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729287AbgAQRld (ORCPT ); Fri, 17 Jan 2020 12:41:33 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 5953424696; Fri, 17 Jan 2020 17:41:33 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1isVci-000Qcb-9R; Fri, 17 Jan 2020 12:41:32 -0500 Message-Id: <20200117174132.175284420@goodmis.org> User-Agent: quilt/0.65 Date: Fri, 17 Jan 2020 12:41:43 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 32/32] Linux 4.19.94-rt39-rc1 References: <20200117174111.282847363@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc1 stable review patch. If anyone has any objections, please let me know. ------------------ From: "Steven Rostedt (VMware)" --- localversion-rt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/localversion-rt b/localversion-rt index 49bae8d6aa67..03dd0b091a29 100644 --- a/localversion-rt +++ b/localversion-rt @@ -1 +1 @@ --rt38 +-rt39-rc1