From patchwork Thu Jan 23 20:39:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213254 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 664F4C33CB6 for ; Thu, 23 Jan 2020 20:41:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 38440246B1 for ; Thu, 23 Jan 2020 20:41:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729221AbgAWUjo (ORCPT ); Thu, 23 Jan 2020 15:39:44 -0500 Received: from mail.kernel.org ([198.145.29.99]:45746 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729159AbgAWUjo (ORCPT ); Thu, 23 Jan 2020 15:39:44 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id E64952467E; Thu, 23 Jan 2020 20:39:43 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1iujGQ-000mVT-Rv; Thu, 23 Jan 2020 15:39:42 -0500 Message-Id: <20200123203942.751467858@goodmis.org> User-Agent: quilt/0.65 Date: Thu, 23 Jan 2020 15:39:33 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi , Juri Lelli Subject: [PATCH RT 03/30] sched/deadline: Ensure inactive_timer runs in hardirq context References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know. ------------------ From: Juri Lelli [ Upstream commit ba94e7aed7405c58251b1380e6e7d73aa8284b41 ] The SCHED_DEADLINE inactive timer needs to run in hardirq context (as dl_task_timer already does) on PREEMPT_RT. Change the mode to HRTIMER_MODE_REL_HARD.
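For reference, the hard-IRQ expiry pattern the patch switches to looks roughly like this (a minimal sketch, not taken from the patch; the timer and callback names are hypothetical):

static struct hrtimer my_timer;	/* hypothetical example timer */

static enum hrtimer_restart my_timer_fn(struct hrtimer *t)
{
	/* On PREEMPT_RT this now runs in hard interrupt context. */
	return HRTIMER_NORESTART;
}

static void my_timer_arm(u64 delay_ns)
{
	hrtimer_init(&my_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
	my_timer.function = my_timer_fn;
	/*
	 * The start mode must match the init mode, which is what the
	 * start-site fixup noted below is about.
	 */
	hrtimer_start(&my_timer, ns_to_ktime(delay_ns), HRTIMER_MODE_REL_HARD);
}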
[ tglx: Fixed up the start site, so mode debugging works ] Signed-off-by: Juri Lelli Signed-off-by: Thomas Gleixner Link: https://lkml.kernel.org/r/20190731103715.4047-1-juri.lelli@redhat.com Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- kernel/sched/deadline.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index 974a8f9b615a..929167a1d991 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -287,7 +287,7 @@ static void task_non_contending(struct task_struct *p) dl_se->dl_non_contending = 1; get_task_struct(p); - hrtimer_start(timer, ns_to_ktime(zerolag_time), HRTIMER_MODE_REL); + hrtimer_start(timer, ns_to_ktime(zerolag_time), HRTIMER_MODE_REL_HARD); } static void task_contending(struct sched_dl_entity *dl_se, int flags) @@ -1325,7 +1325,7 @@ void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se) { struct hrtimer *timer = &dl_se->inactive_timer; - hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD); timer->function = inactive_task_timer; } From patchwork Thu Jan 23 20:39:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213257 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B91ECC2D0DB for ; Thu, 23 Jan 2020 20:41:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 83C35246B0 for ; Thu, 23 Jan 2020 20:41:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729799AbgAWUlX (ORCPT ); Thu, 23 Jan 2020 15:41:23 -0500 Received: from mail.kernel.org ([198.145.29.99]:45834 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729224AbgAWUjp (ORCPT ); Thu, 23 Jan 2020 15:39:45 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 3990D22522; Thu, 23 Jan 2020 20:39:44 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1iujGR-000mWR-4w; Thu, 23 Jan 2020 15:39:43 -0500 Message-Id: <20200123203943.036685630@goodmis.org> User-Agent: quilt/0.65 Date: Thu, 23 Jan 2020 15:39:35 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 05/30] dma-buf: Use seqlock_t instead of disabling preemption References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know.
------------------ From: Sebastian Andrzej Siewior [ Upstream commit 240610aa31094f51f299f06eb8dae8d4cd8d4500 ] The "dma reservation" code disables preemption while acquiring write access to its "seqcount" and may then acquire a spinlock_t. Replace the seqcount with a seqlock_t, which provides seqcount-like semantics and a lock for the writer. Link: https://lkml.kernel.org/r/f410b429-db86-f81c-7c67-f563fa808b62@free.fr Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- drivers/dma-buf/dma-buf.c | 8 ++-- drivers/dma-buf/reservation.c | 43 +++++++------------ .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 6 +-- drivers/gpu/drm/i915/i915_gem.c | 10 ++--- include/linux/reservation.h | 4 +- 5 files changed, 29 insertions(+), 42 deletions(-) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 69842145c223..4c3ef46e7149 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -179,7 +179,7 @@ static __poll_t dma_buf_poll(struct file *file, poll_table *poll) return 0; retry: - seq = read_seqcount_begin(&resv->seq); + seq = read_seqbegin(&resv->seq); rcu_read_lock(); fobj = rcu_dereference(resv->fence); @@ -188,7 +188,7 @@ static __poll_t dma_buf_poll(struct file *file, poll_table *poll) else shared_count = 0; fence_excl = rcu_dereference(resv->fence_excl); - if (read_seqcount_retry(&resv->seq, seq)) { + if (read_seqretry(&resv->seq, seq)) { rcu_read_unlock(); goto retry; } @@ -1046,12 +1046,12 @@ static int dma_buf_debug_show(struct seq_file *s, void *unused) robj = buf_obj->resv; while (true) { - seq = read_seqcount_begin(&robj->seq); + seq = read_seqbegin(&robj->seq); rcu_read_lock(); fobj = rcu_dereference(robj->fence); shared_count = fobj ? fobj->shared_count : 0; fence = rcu_dereference(robj->fence_excl); - if (!read_seqcount_retry(&robj->seq, seq)) + if (!read_seqretry(&robj->seq, seq)) break; rcu_read_unlock(); } diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c index 49ab09468ba1..f11d58492216 100644 --- a/drivers/dma-buf/reservation.c +++ b/drivers/dma-buf/reservation.c @@ -109,8 +109,7 @@ reservation_object_add_shared_inplace(struct reservation_object *obj, dma_fence_get(fence); - preempt_disable(); - write_seqcount_begin(&obj->seq); + write_seqlock(&obj->seq); for (i = 0; i < fobj->shared_count; ++i) { struct dma_fence *old_fence; @@ -121,8 +120,7 @@ reservation_object_add_shared_inplace(struct reservation_object *obj, if (old_fence->context == fence->context) { /* memory barrier is added by write_seqcount_begin */ RCU_INIT_POINTER(fobj->shared[i], fence); - write_seqcount_end(&obj->seq); - preempt_enable(); + write_sequnlock(&obj->seq); dma_fence_put(old_fence); return; @@ -146,8 +144,7 @@ reservation_object_add_shared_inplace(struct reservation_object *obj, fobj->shared_count++; } - write_seqcount_end(&obj->seq); - preempt_enable(); + write_sequnlock(&obj->seq); dma_fence_put(signaled); } @@ -191,15 +188,13 @@ reservation_object_add_shared_replace(struct reservation_object *obj, fobj->shared_count++; done: - preempt_disable(); - write_seqcount_begin(&obj->seq); + write_seqlock(&obj->seq); /* * RCU_INIT_POINTER can be used here, * seqcount provides the necessary barriers */ RCU_INIT_POINTER(obj->fence, fobj); - write_seqcount_end(&obj->seq); - preempt_enable(); + write_sequnlock(&obj->seq); if (!old) return; @@ -259,14 +254,11 @@ void reservation_object_add_excl_fence(struct reservation_object *obj, if (fence) dma_fence_get(fence); - preempt_disable(); - write_seqcount_begin(&obj->seq); - /*
write_seqcount_begin provides the necessary memory barrier */ + write_seqlock(&obj->seq); RCU_INIT_POINTER(obj->fence_excl, fence); if (old) old->shared_count = 0; - write_seqcount_end(&obj->seq); - preempt_enable(); + write_sequnlock(&obj->seq); /* inplace update, no shared fences */ while (i--) @@ -349,13 +341,10 @@ int reservation_object_copy_fences(struct reservation_object *dst, src_list = reservation_object_get_list(dst); old = reservation_object_get_excl(dst); - preempt_disable(); - write_seqcount_begin(&dst->seq); - /* write_seqcount_begin provides the necessary memory barrier */ + write_seqlock(&dst->seq); RCU_INIT_POINTER(dst->fence_excl, new); RCU_INIT_POINTER(dst->fence, dst_list); - write_seqcount_end(&dst->seq); - preempt_enable(); + write_sequnlock(&dst->seq); if (src_list) kfree_rcu(src_list, rcu); @@ -396,7 +385,7 @@ int reservation_object_get_fences_rcu(struct reservation_object *obj, shared_count = i = 0; rcu_read_lock(); - seq = read_seqcount_begin(&obj->seq); + seq = read_seqbegin(&obj->seq); fence_excl = rcu_dereference(obj->fence_excl); if (fence_excl && !dma_fence_get_rcu(fence_excl)) @@ -445,7 +434,7 @@ int reservation_object_get_fences_rcu(struct reservation_object *obj, } } - if (i != shared_count || read_seqcount_retry(&obj->seq, seq)) { + if (i != shared_count || read_seqretry(&obj->seq, seq)) { while (i--) dma_fence_put(shared[i]); dma_fence_put(fence_excl); @@ -494,7 +483,7 @@ long reservation_object_wait_timeout_rcu(struct reservation_object *obj, retry: shared_count = 0; - seq = read_seqcount_begin(&obj->seq); + seq = read_seqbegin(&obj->seq); rcu_read_lock(); i = -1; @@ -541,7 +530,7 @@ long reservation_object_wait_timeout_rcu(struct reservation_object *obj, rcu_read_unlock(); if (fence) { - if (read_seqcount_retry(&obj->seq, seq)) { + if (read_seqretry(&obj->seq, seq)) { dma_fence_put(fence); goto retry; } @@ -597,7 +586,7 @@ bool reservation_object_test_signaled_rcu(struct reservation_object *obj, retry: ret = true; shared_count = 0; - seq = read_seqcount_begin(&obj->seq); + seq = read_seqbegin(&obj->seq); if (test_all) { unsigned i; @@ -618,7 +607,7 @@ bool reservation_object_test_signaled_rcu(struct reservation_object *obj, break; } - if (read_seqcount_retry(&obj->seq, seq)) + if (read_seqretry(&obj->seq, seq)) goto retry; } @@ -631,7 +620,7 @@ bool reservation_object_test_signaled_rcu(struct reservation_object *obj, if (ret < 0) goto retry; - if (read_seqcount_retry(&obj->seq, seq)) + if (read_seqretry(&obj->seq, seq)) goto retry; } } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c index f92597c292fe..10c675850aac 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c @@ -261,11 +261,9 @@ static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo, } /* Install the new fence list, seqcount provides the barriers */ - preempt_disable(); - write_seqcount_begin(&resv->seq); + write_seqlock(&resv->seq); RCU_INIT_POINTER(resv->fence, new); - write_seqcount_end(&resv->seq); - preempt_enable(); + write_sequnlock(&resv->seq); /* Drop the references to the removed fences or move them to ef_list */ for (i = j, k = 0; i < old->shared_count; ++i) { diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index c7d05ac7af3c..d484e79316bf 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -516,7 +516,7 @@ i915_gem_object_wait_reservation(struct reservation_object *resv, long 
timeout, struct intel_rps_client *rps_client) { - unsigned int seq = __read_seqcount_begin(&resv->seq); + unsigned int seq = read_seqbegin(&resv->seq); struct dma_fence *excl; bool prune_fences = false; @@ -569,9 +569,9 @@ i915_gem_object_wait_reservation(struct reservation_object *resv, * signaled and that the reservation object has not been changed (i.e. * no new fences have been added). */ - if (prune_fences && !__read_seqcount_retry(&resv->seq, seq)) { + if (prune_fences && !read_seqretry(&resv->seq, seq)) { if (reservation_object_trylock(resv)) { - if (!__read_seqcount_retry(&resv->seq, seq)) + if (!read_seqretry(&resv->seq, seq)) reservation_object_add_excl_fence(resv, NULL); reservation_object_unlock(resv); } @@ -4615,7 +4615,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data, * */ retry: - seq = raw_read_seqcount(&obj->resv->seq); + seq = read_seqbegin(&obj->resv->seq); /* Translate the exclusive fence to the READ *and* WRITE engine */ args->busy = busy_check_writer(rcu_dereference(obj->resv->fence_excl)); @@ -4633,7 +4633,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data, } } - if (args->busy && read_seqcount_retry(&obj->resv->seq, seq)) + if (args->busy && read_seqretry(&obj->resv->seq, seq)) goto retry; err = 0; diff --git a/include/linux/reservation.h b/include/linux/reservation.h index 02166e815afb..0b31df1af698 100644 --- a/include/linux/reservation.h +++ b/include/linux/reservation.h @@ -72,7 +72,7 @@ struct reservation_object_list { */ struct reservation_object { struct ww_mutex lock; - seqcount_t seq; + seqlock_t seq; struct dma_fence __rcu *fence_excl; struct reservation_object_list __rcu *fence; @@ -92,7 +92,7 @@ reservation_object_init(struct reservation_object *obj) { ww_mutex_init(&obj->lock, &reservation_ww_class); - __seqcount_init(&obj->seq, reservation_seqcount_string, &reservation_seqcount_class); + seqlock_init(&obj->seq); RCU_INIT_POINTER(obj->fence, NULL); RCU_INIT_POINTER(obj->fence_excl, NULL); obj->staged = NULL; From patchwork Thu Jan 23 20:39:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213255 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E310CC2D0DB for ; Thu, 23 Jan 2020 20:41:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id B9CEB246B1 for ; Thu, 23 Jan 2020 20:41:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729465AbgAWUlk (ORCPT ); Thu, 23 Jan 2020 15:41:40 -0500 Received: from mail.kernel.org ([198.145.29.99]:45840 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729232AbgAWUjo (ORCPT ); Thu, 23 Jan 2020 15:39:44 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 5C33E24673; Thu, 23 Jan 2020 20:39:44 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) 
id 1iujGR-000mWv-9l; Thu, 23 Jan 2020 15:39:43 -0500 Message-Id: <20200123203943.183615371@goodmis.org> User-Agent: quilt/0.65 Date: Thu, 23 Jan 2020 15:39:36 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi , Marc Zyngier , Julien Grall Subject: [PATCH RT 06/30] KVM: arm/arm64: Let the timer expire in hardirq context on RT References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know. ------------------ From: Thomas Gleixner [ Upstream commit 719cc080c914045a6e35787bf4dc3ba91cfd3efb ] The timers are canceled from a preempt-notifier which is invoked with preemption disabled, which is not allowed on PREEMPT_RT. The timer callback is short, so it can be invoked in hard-IRQ context on -RT. Let the timer expire in hard-IRQ context even on -RT. Signed-off-by: Thomas Gleixner Acked-by: Marc Zyngier Tested-by: Julien Grall Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- virt/kvm/arm/arch_timer.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c index 17cecc96f735..217d39f40393 100644 --- a/virt/kvm/arm/arch_timer.c +++ b/virt/kvm/arm/arch_timer.c @@ -67,7 +67,7 @@ static inline bool userspace_irqchip(struct kvm *kvm) static void soft_timer_start(struct hrtimer *hrt, u64 ns) { hrtimer_start(hrt, ktime_add_ns(ktime_get(), ns), - HRTIMER_MODE_ABS); + HRTIMER_MODE_ABS_HARD); } static void soft_timer_cancel(struct hrtimer *hrt, struct work_struct *work) @@ -638,10 +638,10 @@ void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu) vcpu_ptimer(vcpu)->cntvoff = 0; INIT_WORK(&timer->expired, kvm_timer_inject_irq_work); - hrtimer_init(&timer->bg_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS); + hrtimer_init(&timer->bg_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD); timer->bg_timer.function = kvm_bg_timer_expire; - hrtimer_init(&timer->phys_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS); + hrtimer_init(&timer->phys_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD); timer->phys_timer.function = kvm_phys_timer_expire; vtimer->irq.irq = default_vtimer_irq.irq; From patchwork Thu Jan 23 20:39:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213268 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8F5EAC2D0DB for ; Thu, 23 Jan 2020 20:39:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 6648024696 for ; Thu, 23 Jan 2020 20:39:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729159AbgAWUjp (ORCPT ); Thu, 23 Jan 2020 15:39:45 -0500 Received: from mail.kernel.org ([198.145.29.99]:45846 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
id S1729235AbgAWUjp (ORCPT ); Thu, 23 Jan 2020 15:39:45 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 7F91224687; Thu, 23 Jan 2020 20:39:44 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1iujGR-000mXP-EG; Thu, 23 Jan 2020 15:39:43 -0500 Message-Id: <20200123203943.326004971@goodmis.org> User-Agent: quilt/0.65 Date: Thu, 23 Jan 2020 15:39:37 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 07/30] x86: preempt: Check preemption level before looking at lazy-preempt References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know. ------------------ From: Sebastian Andrzej Siewior [ Upstream commit 19fc8557f2323c52b26561651ed4d51fc688a740 ] Before evaluating the lazy-preempt state it must be ensured that the preempt-count is zero. Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- arch/x86/include/asm/preempt.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h index f66708779274..afa0e42ccdd1 100644 --- a/arch/x86/include/asm/preempt.h +++ b/arch/x86/include/asm/preempt.h @@ -96,6 +96,8 @@ static __always_inline bool __preempt_count_dec_and_test(void) if (____preempt_count_dec_and_test()) return true; #ifdef CONFIG_PREEMPT_LAZY + if (preempt_count()) + return false; if (current_thread_info()->preempt_lazy_count) return false; return test_thread_flag(TIF_NEED_RESCHED_LAZY); From patchwork Thu Jan 23 20:39:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213256 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A71ACC33CB6 for ; Thu, 23 Jan 2020 20:41:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 7821C246B0 for ; Thu, 23 Jan 2020 20:41:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729785AbgAWUlW (ORCPT ); Thu, 23 Jan 2020 15:41:22 -0500 Received: from mail.kernel.org ([198.145.29.99]:45840 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729259AbgAWUjp (ORCPT ); Thu, 23 Jan 2020 15:39:45 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id E0A3324685; Thu, 23 Jan 2020 20:39:44 +0000 (UTC) Received: from rostedt by gandalf.local.home with local
(Exim 4.93) (envelope-from ) id 1iujGR-000mYr-S5; Thu, 23 Jan 2020 15:39:43 -0500 Message-Id: <20200123203943.749508731@goodmis.org> User-Agent: quilt/0.65 Date: Thu, 23 Jan 2020 15:39:40 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi , Julien Grall Subject: [PATCH RT 10/30] hrtimer: Prevent using hrtimer_grab_expiry_lock() on migration_base References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know. ------------------ From: Julien Grall [ Upstream commit cef1b87f98823af923a386f3f69149acb212d4a1 ] As tglx puts it: |If base == migration_base then there is no point to lock soft_expiry_lock |simply because the timer is not executing the callback in soft irq context |and the whole lock/unlock dance can be avoided. Furthermore, all the paths leading to hrtimer_grab_expiry_lock() assume timer->base and timer->base->cpu_base are always non-NULL. So it is safe to remove the NULL checks here. Signed-off-by: Julien Grall Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1908211557420.2223@nanos.tec.linutronix.de Signed-off-by: Steven Rostedt (VMware) [bigeasy: rewrite changelog] Signed-off-by: Sebastian Andrzej Siewior --- kernel/time/hrtimer.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c index 49d20fe8570f..1a5167c68310 100644 --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c @@ -943,7 +943,7 @@ void hrtimer_grab_expiry_lock(const struct hrtimer *timer) { struct hrtimer_clock_base *base = READ_ONCE(timer->base); - if (timer->is_soft && base && base->cpu_base) { + if (timer->is_soft && base != &migration_base) { spin_lock(&base->cpu_base->softirq_expiry_lock); spin_unlock(&base->cpu_base->softirq_expiry_lock); } From patchwork Thu Jan 23 20:39:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213259 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4AA9FC33CB6 for ; Thu, 23 Jan 2020 20:41:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 21AFC246B0 for ; Thu, 23 Jan 2020 20:41:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729742AbgAWUlK (ORCPT ); Thu, 23 Jan 2020 15:41:10 -0500 Received: from mail.kernel.org ([198.145.29.99]:45846 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729267AbgAWUjp (ORCPT ); Thu, 23 Jan 2020 15:39:45 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 0EEDE24688; Thu, 23 Jan 2020 20:39:45 +0000 (UTC) Received:
from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1iujGS-000mZL-0K; Thu, 23 Jan 2020 15:39:44 -0500 Message-Id: <20200123203943.890155938@goodmis.org> User-Agent: quilt/0.65 Date: Thu, 23 Jan 2020 15:39:41 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi Subject: [PATCH RT 11/30] hrtimer: Add a missing bracket and hide `migration_base' on !SMP References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know. ------------------ From: Sebastian Andrzej Siewior [ Upstream commit 47b6de0b7f22c28b40275aeede3993d807668c3b ] [ Upstream commit 5d2295f3a93b04986d069ebeaf5b07725f9096c1 ] The recent change to avoid taking the expiry lock when a timer is currently migrated missed adding a bracket at the end of the if statement, leading to compile errors. Since that commit the variable `migration_base' is always used, but it is only available on SMP configurations, thus leading to another compile error. The changelog says "The timer base and base->cpu_base cannot be NULL in the code path", so it is safe to limit this check to SMP configurations only. Add the missing bracket to the if statement and hide `migration_base' behind CONFIG_SMP bars. [ tglx: Mark the functions inline ... ] Fixes: 68b2c8c1e4210 ("hrtimer: Don't take expiry_lock when timer is currently migrated") Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Thomas Gleixner Link: https://lkml.kernel.org/r/20190904145527.eah7z56ntwobqm6j@linutronix.de Signed-off-by: Steven Rostedt (VMware) [bigeasy: port back to RT] Signed-off-by: Sebastian Andrzej Siewior --- kernel/time/hrtimer.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c index 1a5167c68310..e54a95de8b79 100644 --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c @@ -150,6 +150,11 @@ static struct hrtimer_cpu_base migration_cpu_base = { #define migration_base migration_cpu_base.clock_base[0] +static inline bool is_migration_base(struct hrtimer_clock_base *base) +{ + return base == &migration_base; +} + /* * We are using hashed locking: holding per_cpu(hrtimer_bases)[n].lock * means that all timers which are tied to this base via timer->base are @@ -274,6 +279,11 @@ switch_hrtimer_base(struct hrtimer *timer, struct hrtimer_clock_base *base, #else /* CONFIG_SMP */ +static inline bool is_migration_base(struct hrtimer_clock_base *base) +{ + return false; +} + static inline struct hrtimer_clock_base * lock_hrtimer_base(const struct hrtimer *timer, unsigned long *flags) { @@ -943,7 +953,7 @@ void hrtimer_grab_expiry_lock(const struct hrtimer *timer) { struct hrtimer_clock_base *base = READ_ONCE(timer->base); - if (timer->is_soft && base != &migration_base) { + if (timer->is_soft && !is_migration_base(base)) { spin_lock(&base->cpu_base->softirq_expiry_lock); spin_unlock(&base->cpu_base->softirq_expiry_lock); } From patchwork Thu Jan 23 20:39:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213258 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level:
X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 22D52C2D0DB for ; Thu, 23 Jan 2020 20:41:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id EF506246B0 for ; Thu, 23 Jan 2020 20:41:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729762AbgAWUlQ (ORCPT ); Thu, 23 Jan 2020 15:41:16 -0500 Received: from mail.kernel.org ([198.145.29.99]:45944 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729273AbgAWUjp (ORCPT ); Thu, 23 Jan 2020 15:39:45 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 35F752467F; Thu, 23 Jan 2020 20:39:45 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1iujGS-000mZp-4q; Thu, 23 Jan 2020 15:39:44 -0500 Message-Id: <20200123203944.031880140@goodmis.org> User-Agent: quilt/0.65 Date: Thu, 23 Jan 2020 15:39:42 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi , kbuild test robot , Dan Carpenter Subject: [PATCH RT 12/30] posix-timers: Unlock expiry lock in the early return References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know. ------------------ From: Sebastian Andrzej Siewior [ Upstream commit 356a2781375ec58521a9bc3f500488745990c242 ] Patch ("posix-timers: Add expiry lock") acquired a lock in run_posix_cpu_timers() but didn't drop the lock in the early return. Unlock the lock in the early return path. 
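Condensed, the fixed control flow looks like this (a sketch only; the surrounding timer-processing code is elided):

static void __run_posix_cpu_timers(struct task_struct *tsk)
{
	spinlock_t *expiry_lock = this_cpu_ptr(&cpu_timer_expiry_lock);
	unsigned long flags;

	spin_lock(expiry_lock);
	if (!lock_task_sighand(tsk, &flags)) {
		/* The early return now drops expiry_lock as well. */
		spin_unlock(expiry_lock);
		return;
	}
	/* ... collect and fire the expired timers, then drop both locks ... */
}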
Reported-by: kbuild test robot Reported-by: Dan Carpenter Reviewed-by: Thomas Gleixner Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- kernel/time/posix-cpu-timers.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c index 765e700962ab..c9964dc3276b 100644 --- a/kernel/time/posix-cpu-timers.c +++ b/kernel/time/posix-cpu-timers.c @@ -1175,8 +1175,10 @@ static void __run_posix_cpu_timers(struct task_struct *tsk) expiry_lock = this_cpu_ptr(&cpu_timer_expiry_lock); spin_lock(expiry_lock); - if (!lock_task_sighand(tsk, &flags)) + if (!lock_task_sighand(tsk, &flags)) { + spin_unlock(expiry_lock); return; + } /* * Here we take off tsk->signal->cpu_timers[N] and * tsk->cpu_timers[N] all the timers that are firing, and From patchwork Thu Jan 23 20:39:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213260 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 42541C2D0DB for ; Thu, 23 Jan 2020 20:41:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 141CB246B2 for ; Thu, 23 Jan 2020 20:41:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729308AbgAWUjq (ORCPT ); Thu, 23 Jan 2020 15:39:46 -0500 Received: from mail.kernel.org ([198.145.29.99]:45990 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729281AbgAWUjp (ORCPT ); Thu, 23 Jan 2020 15:39:45 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 783AA24655; Thu, 23 Jan 2020 20:39:45 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1iujGS-000man-Dp; Thu, 23 Jan 2020 15:39:44 -0500 Message-Id: <20200123203944.308854439@goodmis.org> User-Agent: quilt/0.65 Date: Thu, 23 Jan 2020 15:39:44 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi , Scott Wood Subject: [PATCH RT 14/30] sched: __set_cpus_allowed_ptr: Check cpus_mask, not cpus_ptr References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know. ------------------ From: Scott Wood [ Upstream commit e5606fb7b042db634ed62b4dd733d62e050e468f ] This function is concerned with the long-term cpu mask, not the transitory mask the task might have while migrate disabled. 
Before this patch, if a task was migrate disabled at the time __set_cpus_allowed_ptr() was called, and the new mask happened to be equal to the cpu that the task was running on, then the mask update would be lost. Signed-off-by: Scott Wood Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- kernel/sched/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 3413b9ebef1f..d6bd8129a390 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1157,7 +1157,7 @@ static int __set_cpus_allowed_ptr(struct task_struct *p, goto out; } - if (cpumask_equal(p->cpus_ptr, new_mask)) + if (cpumask_equal(&p->cpus_mask, new_mask)) goto out; dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask); From patchwork Thu Jan 23 20:39:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213264 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C8D7AC2D0DB for ; Thu, 23 Jan 2020 20:40:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 9AFB6246B3 for ; Thu, 23 Jan 2020 20:40:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729353AbgAWUjr (ORCPT ); Thu, 23 Jan 2020 15:39:47 -0500 Received: from mail.kernel.org ([198.145.29.99]:45878 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729287AbgAWUjq (ORCPT ); Thu, 23 Jan 2020 15:39:46 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 9C7F62468A; Thu, 23 Jan 2020 20:39:45 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1iujGS-000mbH-IM; Thu, 23 Jan 2020 15:39:44 -0500 Message-Id: <20200123203944.448213441@goodmis.org> User-Agent: quilt/0.65 Date: Thu, 23 Jan 2020 15:39:45 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi , Scott Wood Subject: [PATCH RT 15/30] sched: Remove dead __migrate_disabled() check References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know. ------------------ From: Scott Wood [ Upstream commit 14d9272d534ea91262e15db99443fc5995c7c016 ] This code was unreachable given the __migrate_disabled() branch to "out" immediately beforehand. 
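Condensed view of why the removed block could never run (an illustrative sketch of the control flow in __set_cpus_allowed_ptr(), not the full function):

	if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p))
		goto out;

	/*
	 * Past this point __migrate_disabled(p) is known to be false;
	 * the branch above already jumped to "out" otherwise, so the
	 * removed "if (__migrate_disabled(p))" block was dead code.
	 */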
Signed-off-by: Scott Wood Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- kernel/sched/core.c | 7 ------- 1 file changed, 7 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index d6bd8129a390..a29f33e776d0 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1182,13 +1182,6 @@ static int __set_cpus_allowed_ptr(struct task_struct *p, if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p)) goto out; -#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) - if (__migrate_disabled(p)) { - p->migrate_disable_update = 1; - goto out; - } -#endif - if (task_running(rq, p) || p->state == TASK_WAKING) { struct migration_arg arg = { p, dest_cpu }; /* Need help from migration thread: drop lock and wait. */ From patchwork Thu Jan 23 20:39:51 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213263 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4570EC33CAF for ; Thu, 23 Jan 2020 20:40:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 0828D206D4 for ; Thu, 23 Jan 2020 20:40:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729604AbgAWUke (ORCPT ); Thu, 23 Jan 2020 15:40:34 -0500 Received: from mail.kernel.org ([198.145.29.99]:45990 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729318AbgAWUjr (ORCPT ); Thu, 23 Jan 2020 15:39:47 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 8076424698; Thu, 23 Jan 2020 20:39:46 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1iujGT-000meB-Dw; Thu, 23 Jan 2020 15:39:45 -0500 Message-Id: <20200123203945.314725398@goodmis.org> User-Agent: quilt/0.65 Date: Thu, 23 Jan 2020 15:39:51 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi , stable-rt@vger.kernel.org, Liu Haitao , Yongxin Liu Subject: [PATCH RT 21/30] kmemleak: Change the lock of kmemleak_object to raw_spinlock_t References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know. ------------------ From: Liu Haitao [ Upstream commit 217847f57119b5fdd377bfa3d344613ddb98d9fc ] The commit ("kmemleak: Turn kmemleak_lock to raw spinlock on RT") changed the kmemleak_lock to raw spinlock. However the kmemleak_object->lock is held after the kmemleak_lock is held in scan_block(). Make the object->lock a raw_spinlock_t. 
Cc: stable-rt@vger.kernel.org Link: https://lkml.kernel.org/r/20190927082230.34152-1-yongxin.liu@windriver.com Signed-off-by: Liu Haitao Signed-off-by: Yongxin Liu Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- mm/kmemleak.c | 72 +++++++++++++++++++++++++-------------------------- 1 file changed, 36 insertions(+), 36 deletions(-) diff --git a/mm/kmemleak.c b/mm/kmemleak.c index 92ce99b15f2b..e5f5eeed338d 100644 --- a/mm/kmemleak.c +++ b/mm/kmemleak.c @@ -147,7 +147,7 @@ struct kmemleak_scan_area { * (use_count) and freed using the RCU mechanism. */ struct kmemleak_object { - spinlock_t lock; + raw_spinlock_t lock; unsigned int flags; /* object status flags */ struct list_head object_list; struct list_head gray_list; @@ -561,7 +561,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size, INIT_LIST_HEAD(&object->object_list); INIT_LIST_HEAD(&object->gray_list); INIT_HLIST_HEAD(&object->area_list); - spin_lock_init(&object->lock); + raw_spin_lock_init(&object->lock); atomic_set(&object->use_count, 1); object->flags = OBJECT_ALLOCATED; object->pointer = ptr; @@ -642,9 +642,9 @@ static void __delete_object(struct kmemleak_object *object) * Locking here also ensures that the corresponding memory block * cannot be freed when it is being scanned. */ - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); object->flags &= ~OBJECT_ALLOCATED; - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); put_object(object); } @@ -716,9 +716,9 @@ static void paint_it(struct kmemleak_object *object, int color) { unsigned long flags; - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); __paint_it(object, color); - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); } static void paint_ptr(unsigned long ptr, int color) @@ -778,7 +778,7 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp) goto out; } - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); if (size == SIZE_MAX) { size = object->pointer + object->size - ptr; } else if (ptr + size > object->pointer + object->size) { @@ -794,7 +794,7 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp) hlist_add_head(&area->node, &object->area_list); out_unlock: - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); out: put_object(object); } @@ -817,9 +817,9 @@ static void object_set_excess_ref(unsigned long ptr, unsigned long excess_ref) return; } - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); object->excess_ref = excess_ref; - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); put_object(object); } @@ -839,9 +839,9 @@ static void object_no_scan(unsigned long ptr) return; } - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); object->flags |= OBJECT_NO_SCAN; - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); put_object(object); } @@ -902,11 +902,11 @@ static void early_alloc(struct early_log *log) log->min_count, GFP_ATOMIC); if (!object) goto out; - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); for (i = 0; i < log->trace_len; i++) object->trace[i] = log->trace[i]; object->trace_len = log->trace_len; - 
spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); out: rcu_read_unlock(); } @@ -1096,9 +1096,9 @@ void __ref kmemleak_update_trace(const void *ptr) return; } - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); object->trace_len = __save_stack_trace(object->trace); - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); put_object(object); } @@ -1344,7 +1344,7 @@ static void scan_block(void *_start, void *_end, * previously acquired in scan_object(). These locks are * enclosed by scan_mutex. */ - spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING); + raw_spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING); /* only pass surplus references (object already gray) */ if (color_gray(object)) { excess_ref = object->excess_ref; @@ -1353,7 +1353,7 @@ static void scan_block(void *_start, void *_end, excess_ref = 0; update_refs(object); } - spin_unlock(&object->lock); + raw_spin_unlock(&object->lock); if (excess_ref) { object = lookup_object(excess_ref, 0); @@ -1362,9 +1362,9 @@ static void scan_block(void *_start, void *_end, if (object == scanned) /* circular reference, ignore */ continue; - spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING); + raw_spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING); update_refs(object); - spin_unlock(&object->lock); + raw_spin_unlock(&object->lock); } } raw_spin_unlock_irqrestore(&kmemleak_lock, flags); @@ -1400,7 +1400,7 @@ static void scan_object(struct kmemleak_object *object) * Once the object->lock is acquired, the corresponding memory block * cannot be freed (the same lock is acquired in delete_object). */ - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); if (object->flags & OBJECT_NO_SCAN) goto out; if (!(object->flags & OBJECT_ALLOCATED)) @@ -1419,9 +1419,9 @@ static void scan_object(struct kmemleak_object *object) if (start >= end) break; - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); cond_resched(); - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); } while (object->flags & OBJECT_ALLOCATED); } else hlist_for_each_entry(area, &object->area_list, node) @@ -1429,7 +1429,7 @@ static void scan_object(struct kmemleak_object *object) (void *)(area->start + area->size), object); out: - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); } /* @@ -1482,7 +1482,7 @@ static void kmemleak_scan(void) /* prepare the kmemleak_object's */ rcu_read_lock(); list_for_each_entry_rcu(object, &object_list, object_list) { - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); #ifdef DEBUG /* * With a few exceptions there should be a maximum of @@ -1499,7 +1499,7 @@ static void kmemleak_scan(void) if (color_gray(object) && get_object(object)) list_add_tail(&object->gray_list, &gray_list); - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); } rcu_read_unlock(); @@ -1564,14 +1564,14 @@ static void kmemleak_scan(void) */ rcu_read_lock(); list_for_each_entry_rcu(object, &object_list, object_list) { - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); if (color_white(object) && (object->flags & OBJECT_ALLOCATED) && update_checksum(object) && get_object(object)) { /* color it gray temporarily */ object->count = object->min_count; 
list_add_tail(&object->gray_list, &gray_list); } - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); } rcu_read_unlock(); @@ -1591,13 +1591,13 @@ static void kmemleak_scan(void) */ rcu_read_lock(); list_for_each_entry_rcu(object, &object_list, object_list) { - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); if (unreferenced_object(object) && !(object->flags & OBJECT_REPORTED)) { object->flags |= OBJECT_REPORTED; new_leaks++; } - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); } rcu_read_unlock(); @@ -1749,10 +1749,10 @@ static int kmemleak_seq_show(struct seq_file *seq, void *v) struct kmemleak_object *object = v; unsigned long flags; - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); if ((object->flags & OBJECT_REPORTED) && unreferenced_object(object)) print_unreferenced(seq, object); - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); return 0; } @@ -1782,9 +1782,9 @@ static int dump_str_object_info(const char *str) return -EINVAL; } - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); dump_object_info(object); - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); put_object(object); return 0; @@ -1803,11 +1803,11 @@ static void kmemleak_clear(void) rcu_read_lock(); list_for_each_entry_rcu(object, &object_list, object_list) { - spin_lock_irqsave(&object->lock, flags); + raw_spin_lock_irqsave(&object->lock, flags); if ((object->flags & OBJECT_REPORTED) && unreferenced_object(object)) __paint_it(object, KMEMLEAK_GREY); - spin_unlock_irqrestore(&object->lock, flags); + raw_spin_unlock_irqrestore(&object->lock, flags); } rcu_read_unlock(); From patchwork Thu Jan 23 20:39:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213265 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C43E4C33CB6 for ; Thu, 23 Jan 2020 20:40:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 90C72246B3 for ; Thu, 23 Jan 2020 20:40:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729378AbgAWUjs (ORCPT ); Thu, 23 Jan 2020 15:39:48 -0500 Received: from mail.kernel.org ([198.145.29.99]:45846 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729335AbgAWUjr (ORCPT ); Thu, 23 Jan 2020 15:39:47 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 9B3F624696; Thu, 23 Jan 2020 20:39:46 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1iujGT-000mef-Iq; Thu, 23 Jan 2020 15:39:45 -0500 Message-Id: <20200123203945.463187246@goodmis.org> User-Agent: 
quilt/0.65 Date: Thu, 23 Jan 2020 15:39:52 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi , Scott Wood Subject: [PATCH RT 22/30] sched: migrate_enable: Use select_fallback_rq() References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know. ------------------ From: Scott Wood [ Upstream commit adfa969d4cfcc995a9d866020124e50f1827d2d1 ] migrate_enable() currently open-codes a variant of select_fallback_rq(). However, it does not have the "No more Mr. Nice Guy" fallback and thus it will pass an invalid CPU to the migration thread if cpus_mask only contains a CPU that is !active. Signed-off-by: Scott Wood Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- kernel/sched/core.c | 25 ++++++++++--------------- 1 file changed, 10 insertions(+), 15 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index d9a3f88508ee..6fd3f7b4d7d8 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -7335,6 +7335,7 @@ void migrate_enable(void) if (p->migrate_disable_update) { struct rq *rq; struct rq_flags rf; + int cpu = task_cpu(p); rq = task_rq_lock(p, &rf); update_rq_clock(rq); @@ -7344,21 +7345,15 @@ void migrate_enable(void) p->migrate_disable_update = 0; - WARN_ON(smp_processor_id() != task_cpu(p)); - if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) { - const struct cpumask *cpu_valid_mask = cpu_active_mask; - struct migration_arg arg; - unsigned int dest_cpu; - - if (p->flags & PF_KTHREAD) { - /* - * Kernel threads are allowed on online && !active CPUs - */ - cpu_valid_mask = cpu_online_mask; - } - dest_cpu = cpumask_any_and(cpu_valid_mask, &p->cpus_mask); - arg.task = p; - arg.dest_cpu = dest_cpu; + WARN_ON(smp_processor_id() != cpu); + if (!cpumask_test_cpu(cpu, &p->cpus_mask)) { + struct migration_arg arg = { p }; + struct rq_flags rf; + + rq = task_rq_lock(p, &rf); + update_rq_clock(rq); + arg.dest_cpu = select_fallback_rq(cpu, p); + task_rq_unlock(rq, p, &rf); unpin_current_cpu(); preempt_lazy_enable(); From patchwork Thu Jan 23 20:39:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213267 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 090FFC2D0DB for ; Thu, 23 Jan 2020 20:39:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id BD47924676 for ; Thu, 23 Jan 2020 20:39:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729399AbgAWUjt (ORCPT ); Thu, 23 Jan 2020 15:39:49 -0500 Received: from mail.kernel.org ([198.145.29.99]:45862 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729336AbgAWUjs (ORCPT ); Thu, 23 Jan 2020 15:39:48 -0500 Received: from 
gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id C587424697; Thu, 23 Jan 2020 20:39:46 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1iujGT-000mf9-Nd; Thu, 23 Jan 2020 15:39:45 -0500 Message-Id: <20200123203945.609696284@goodmis.org> User-Agent: quilt/0.65 Date: Thu, 23 Jan 2020 15:39:53 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi , Scott Wood Subject: [PATCH RT 23/30] sched: Lazy migrate_disable processing References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know. ------------------ From: Scott Wood [ Upstream commit 425c5b38779a860062aa62219dc920d374b13c17 ] Avoid overhead on the majority of migrate disable/enable sequences by only manipulating scheduler data (and grabbing the relevant locks) when the task actually schedules while migrate-disabled. A kernel build showed around a 10% reduction in system time (with CONFIG_NR_CPUS=512). Instead of cpuhp_pin_lock, CPU hotplug is handled by keeping a per-CPU count of the number of pinned tasks (including tasks which have not scheduled in the migrate-disabled section); takedown_cpu() will wait until that reaches zero (confirmed by take_cpu_down() in stop_machine context to deal with races) before migrating tasks off the CPU. To simplify synchronization, updating cpus_mask is no longer deferred until migrate_enable(). This means migrate_enable() no longer has to worry about missing the update when it stays on the fast path (i.e. when the task did not schedule during the migrate-disabled section). It also makes the code a bit simpler and reduces deviation from mainline. While the main motivation for this is the performance benefit, lazy migrate disable also eliminates the restriction on calling migrate_disable() while atomic but leaving the atomic region prior to calling migrate_enable() -- though this won't help with local_bh_disable() (and thus rcutorture) unless something similar is done with the recently added local_lock.
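The counting scheme at the heart of the change can be sketched in plain C. The analogue below is illustrative only: the names (nr_pinned_model and friends) are invented, C11 atomics stand in for rq->lock, and a yield loop stands in for the stop_machine re-check, so it shows the idea rather than the kernel implementation:

	/* Illustrative userspace analogue of the per-CPU pinned count. */
	#include <stdatomic.h>
	#include <sched.h>

	static _Atomic int nr_pinned_model;	/* lives per-CPU in struct rq in the real patch */

	static void migrate_disable_model(void)
	{
		/* Fast path: just bump the count; no scheduler locks taken. */
		atomic_fetch_add(&nr_pinned_model, 1);
	}

	static void migrate_enable_model(void)
	{
		/* Dropping to zero allows a pending CPU takedown to proceed. */
		atomic_fetch_sub(&nr_pinned_model, 1);
	}

	static void takedown_cpu_model(void)
	{
		/* takedown_cpu() sleeps and re-checks; take_cpu_down() confirms
		 * the count is still zero under stop_machine to close the race. */
		while (atomic_load(&nr_pinned_model) != 0)
			sched_yield();
	}

The real migrate_disable() additionally disables preemption around the increment so the count it bumps belongs to the CPU the task is actually running on; the model elides that.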
Signed-off-by: Scott Wood Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- include/linux/cpu.h | 4 - include/linux/sched.h | 11 +-- init/init_task.c | 4 + kernel/cpu.c | 103 +++++++++-------------- kernel/sched/core.c | 182 +++++++++++++++++------------------------ kernel/sched/sched.h | 4 + lib/smp_processor_id.c | 3 + 7 files changed, 129 insertions(+), 182 deletions(-) diff --git a/include/linux/cpu.h b/include/linux/cpu.h index e67645924404..87347ccbba0c 100644 --- a/include/linux/cpu.h +++ b/include/linux/cpu.h @@ -118,8 +118,6 @@ extern void cpu_hotplug_disable(void); extern void cpu_hotplug_enable(void); void clear_tasks_mm_cpumask(int cpu); int cpu_down(unsigned int cpu); -extern void pin_current_cpu(void); -extern void unpin_current_cpu(void); #else /* CONFIG_HOTPLUG_CPU */ @@ -131,8 +129,6 @@ static inline int cpus_read_trylock(void) { return true; } static inline void lockdep_assert_cpus_held(void) { } static inline void cpu_hotplug_disable(void) { } static inline void cpu_hotplug_enable(void) { } -static inline void pin_current_cpu(void) { } -static inline void unpin_current_cpu(void) { } #endif /* !CONFIG_HOTPLUG_CPU */ diff --git a/include/linux/sched.h b/include/linux/sched.h index 3c213ec3d3b5..392a1ed7efd2 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -227,6 +227,8 @@ extern void io_schedule_finish(int token); extern long io_schedule_timeout(long timeout); extern void io_schedule(void); +int cpu_nr_pinned(int cpu); + /** * struct prev_cputime - snapshot of system and user cputime * @utime: time spent in user mode @@ -670,16 +672,13 @@ struct task_struct { cpumask_t cpus_mask; #if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) int migrate_disable; - int migrate_disable_update; - int pinned_on_cpu; + bool migrate_disable_scheduled; # ifdef CONFIG_SCHED_DEBUG - int migrate_disable_atomic; + int pinned_on_cpu; # endif - #elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) # ifdef CONFIG_SCHED_DEBUG int migrate_disable; - int migrate_disable_atomic; # endif #endif #ifdef CONFIG_PREEMPT_RT_FULL @@ -2058,4 +2057,6 @@ static inline void rseq_syscall(struct pt_regs *regs) #endif +extern struct task_struct *takedown_cpu_task; + #endif diff --git a/init/init_task.c b/init/init_task.c index 9e3362748214..4e5af4616dbd 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -80,6 +80,10 @@ struct task_struct init_task .cpus_ptr = &init_task.cpus_mask, .cpus_mask = CPU_MASK_ALL, .nr_cpus_allowed= NR_CPUS, +#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) && \ + defined(CONFIG_SCHED_DEBUG) + .pinned_on_cpu = -1, +#endif .mm = NULL, .active_mm = &init_mm, .restart_block = { diff --git a/kernel/cpu.c b/kernel/cpu.c index 7170fbd35a22..5366c8c69c2f 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -75,11 +75,6 @@ static DEFINE_PER_CPU(struct cpuhp_cpu_state, cpuhp_state) = { .fail = CPUHP_INVALID, }; -#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PREEMPT_RT_FULL) -static DEFINE_PER_CPU(struct rt_rw_lock, cpuhp_pin_lock) = \ - __RWLOCK_RT_INITIALIZER(cpuhp_pin_lock); -#endif - #if defined(CONFIG_LOCKDEP) && defined(CONFIG_SMP) static struct lockdep_map cpuhp_state_up_map = STATIC_LOCKDEP_MAP_INIT("cpuhp_state-up", &cpuhp_state_up_map); @@ -286,57 +281,6 @@ static int cpu_hotplug_disabled; #ifdef CONFIG_HOTPLUG_CPU -/** - * pin_current_cpu - Prevent the current cpu from being unplugged - */ -void pin_current_cpu(void) -{ -#ifdef CONFIG_PREEMPT_RT_FULL - struct rt_rw_lock *cpuhp_pin; - unsigned int cpu; - int 
ret; - -again: - cpuhp_pin = this_cpu_ptr(&cpuhp_pin_lock); - ret = __read_rt_trylock(cpuhp_pin); - if (ret) { - current->pinned_on_cpu = smp_processor_id(); - return; - } - cpu = smp_processor_id(); - preempt_lazy_enable(); - preempt_enable(); - - sleeping_lock_inc(); - __read_rt_lock(cpuhp_pin); - sleeping_lock_dec(); - - preempt_disable(); - preempt_lazy_disable(); - if (cpu != smp_processor_id()) { - __read_rt_unlock(cpuhp_pin); - goto again; - } - current->pinned_on_cpu = cpu; -#endif -} - -/** - * unpin_current_cpu - Allow unplug of current cpu - */ -void unpin_current_cpu(void) -{ -#ifdef CONFIG_PREEMPT_RT_FULL - struct rt_rw_lock *cpuhp_pin = this_cpu_ptr(&cpuhp_pin_lock); - - if (WARN_ON(current->pinned_on_cpu != smp_processor_id())) - cpuhp_pin = per_cpu_ptr(&cpuhp_pin_lock, current->pinned_on_cpu); - - current->pinned_on_cpu = -1; - __read_rt_unlock(cpuhp_pin); -#endif -} - DEFINE_STATIC_PERCPU_RWSEM(cpu_hotplug_lock); void cpus_read_lock(void) @@ -866,6 +810,15 @@ static int take_cpu_down(void *_param) int err, cpu = smp_processor_id(); int ret; +#ifdef CONFIG_PREEMPT_RT_BASE + /* + * If any tasks disabled migration before we got here, + * go back and sleep again. + */ + if (cpu_nr_pinned(cpu)) + return -EAGAIN; +#endif + /* Ensure this CPU doesn't handle any more interrupts. */ err = __cpu_disable(); if (err < 0) @@ -893,11 +846,10 @@ static int take_cpu_down(void *_param) return 0; } +struct task_struct *takedown_cpu_task; + static int takedown_cpu(unsigned int cpu) { -#ifdef CONFIG_PREEMPT_RT_FULL - struct rt_rw_lock *cpuhp_pin = per_cpu_ptr(&cpuhp_pin_lock, cpu); -#endif struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu); int err; @@ -910,17 +862,38 @@ static int takedown_cpu(unsigned int cpu) */ irq_lock_sparse(); -#ifdef CONFIG_PREEMPT_RT_FULL - __write_rt_lock(cpuhp_pin); +#ifdef CONFIG_PREEMPT_RT_BASE + WARN_ON_ONCE(takedown_cpu_task); + takedown_cpu_task = current; + +again: + /* + * If a task pins this CPU after we pass this check, take_cpu_down + * will return -EAGAIN. + */ + for (;;) { + int nr_pinned; + + set_current_state(TASK_UNINTERRUPTIBLE); + nr_pinned = cpu_nr_pinned(cpu); + if (nr_pinned == 0) + break; + schedule(); + } + set_current_state(TASK_RUNNING); #endif /* * So now all preempt/rcu users must observe !cpu_active(). 
*/ err = stop_machine_cpuslocked(take_cpu_down, NULL, cpumask_of(cpu)); +#ifdef CONFIG_PREEMPT_RT_BASE + if (err == -EAGAIN) + goto again; +#endif if (err) { -#ifdef CONFIG_PREEMPT_RT_FULL - __write_rt_unlock(cpuhp_pin); +#ifdef CONFIG_PREEMPT_RT_BASE + takedown_cpu_task = NULL; #endif /* CPU refused to die */ irq_unlock_sparse(); @@ -940,8 +913,8 @@ static int takedown_cpu(unsigned int cpu) wait_for_ap_thread(st, false); BUG_ON(st->state != CPUHP_AP_IDLE_DEAD); -#ifdef CONFIG_PREEMPT_RT_FULL - __write_rt_unlock(cpuhp_pin); +#ifdef CONFIG_PREEMPT_RT_BASE + takedown_cpu_task = NULL; #endif /* Interrupts are moved away from the dying cpu, reenable alloc/free */ irq_unlock_sparse(); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 6fd3f7b4d7d8..e97ac751aad2 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1065,7 +1065,8 @@ static int migration_cpu_stop(void *data) void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask) { cpumask_copy(&p->cpus_mask, new_mask); - p->nr_cpus_allowed = cpumask_weight(new_mask); + if (p->cpus_ptr == &p->cpus_mask) + p->nr_cpus_allowed = cpumask_weight(new_mask); } #if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) @@ -1076,8 +1077,7 @@ int __migrate_disabled(struct task_struct *p) EXPORT_SYMBOL_GPL(__migrate_disabled); #endif -static void __do_set_cpus_allowed_tail(struct task_struct *p, - const struct cpumask *new_mask) +void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask) { struct rq *rq = task_rq(p); bool queued, running; @@ -1106,20 +1106,6 @@ static void __do_set_cpus_allowed_tail(struct task_struct *p, set_curr_task(rq, p); } -void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask) -{ -#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) - if (__migrate_disabled(p)) { - lockdep_assert_held(&p->pi_lock); - - cpumask_copy(&p->cpus_mask, new_mask); - p->migrate_disable_update = 1; - return; - } -#endif - __do_set_cpus_allowed_tail(p, new_mask); -} - /* * Change a given task's CPU affinity. Migrate the thread to a * proper CPU and schedule it away if the CPU it's executing on @@ -1179,7 +1165,8 @@ static int __set_cpus_allowed_ptr(struct task_struct *p, } /* Can the task run on the task's current CPU? If so, we're done */ - if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p)) + if (cpumask_test_cpu(task_cpu(p), new_mask) || + p->cpus_ptr != &p->cpus_mask) goto out; if (task_running(rq, p) || p->state == TASK_WAKING) { @@ -3454,6 +3441,8 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf) BUG(); } +static void migrate_disabled_sched(struct task_struct *p); + /* * __schedule() is the main scheduler function. 
* @@ -3524,6 +3513,9 @@ static void __sched notrace __schedule(bool preempt) rq_lock(rq, &rf); smp_mb__after_spinlock(); + if (__migrate_disabled(prev)) + migrate_disabled_sched(prev); + /* Promote REQ to ACT */ rq->clock_update_flags <<= 1; update_rq_clock(rq); @@ -5779,6 +5771,8 @@ static void migrate_tasks(struct rq *dead_rq, struct rq_flags *rf) BUG_ON(!next); put_prev_task(rq, next); + WARN_ON_ONCE(__migrate_disabled(next)); + /* * Rules for changing task_struct::cpus_mask are holding * both pi_lock and rq->lock, such that holding either @@ -7247,14 +7241,9 @@ update_nr_migratory(struct task_struct *p, long delta) static inline void migrate_disable_update_cpus_allowed(struct task_struct *p) { - struct rq *rq; - struct rq_flags rf; - - rq = task_rq_lock(p, &rf); p->cpus_ptr = cpumask_of(smp_processor_id()); update_nr_migratory(p, -1); p->nr_cpus_allowed = 1; - task_rq_unlock(rq, p, &rf); } static inline void @@ -7272,54 +7261,35 @@ migrate_enable_update_cpus_allowed(struct task_struct *p) void migrate_disable(void) { - struct task_struct *p = current; + preempt_disable(); - if (in_atomic() || irqs_disabled()) { + if (++current->migrate_disable == 1) { + this_rq()->nr_pinned++; + preempt_lazy_disable(); #ifdef CONFIG_SCHED_DEBUG - p->migrate_disable_atomic++; + WARN_ON_ONCE(current->pinned_on_cpu >= 0); + current->pinned_on_cpu = smp_processor_id(); #endif - return; - } -#ifdef CONFIG_SCHED_DEBUG - if (unlikely(p->migrate_disable_atomic)) { - tracing_off(); - WARN_ON_ONCE(1); } -#endif - if (p->migrate_disable) { - p->migrate_disable++; - return; - } + preempt_enable(); +} +EXPORT_SYMBOL(migrate_disable); - preempt_disable(); - preempt_lazy_disable(); - pin_current_cpu(); +static void migrate_disabled_sched(struct task_struct *p) +{ + if (p->migrate_disable_scheduled) + return; migrate_disable_update_cpus_allowed(p); - p->migrate_disable = 1; - - preempt_enable(); + p->migrate_disable_scheduled = 1; } -EXPORT_SYMBOL(migrate_disable); void migrate_enable(void) { struct task_struct *p = current; - - if (in_atomic() || irqs_disabled()) { -#ifdef CONFIG_SCHED_DEBUG - p->migrate_disable_atomic--; -#endif - return; - } - -#ifdef CONFIG_SCHED_DEBUG - if (unlikely(p->migrate_disable_atomic)) { - tracing_off(); - WARN_ON_ONCE(1); - } -#endif + struct rq *rq = this_rq(); + int cpu = task_cpu(p); WARN_ON_ONCE(p->migrate_disable <= 0); if (p->migrate_disable > 1) { @@ -7329,67 +7299,69 @@ void migrate_enable(void) preempt_disable(); +#ifdef CONFIG_SCHED_DEBUG + WARN_ON_ONCE(current->pinned_on_cpu != cpu); + current->pinned_on_cpu = -1; +#endif + + WARN_ON_ONCE(rq->nr_pinned < 1); + p->migrate_disable = 0; + rq->nr_pinned--; + if (rq->nr_pinned == 0 && unlikely(!cpu_active(cpu)) && + takedown_cpu_task) + wake_up_process(takedown_cpu_task); + + if (!p->migrate_disable_scheduled) + goto out; + + p->migrate_disable_scheduled = 0; + migrate_enable_update_cpus_allowed(p); - if (p->migrate_disable_update) { - struct rq *rq; + WARN_ON(smp_processor_id() != cpu); + if (!is_cpu_allowed(p, cpu)) { + struct migration_arg arg = { p }; struct rq_flags rf; - int cpu = task_cpu(p); rq = task_rq_lock(p, &rf); update_rq_clock(rq); - - __do_set_cpus_allowed_tail(p, &p->cpus_mask); + arg.dest_cpu = select_fallback_rq(cpu, p); task_rq_unlock(rq, p, &rf); - p->migrate_disable_update = 0; - - WARN_ON(smp_processor_id() != cpu); - if (!cpumask_test_cpu(cpu, &p->cpus_mask)) { - struct migration_arg arg = { p }; - struct rq_flags rf; + preempt_lazy_enable(); + preempt_enable(); - rq = task_rq_lock(p, &rf); - 
update_rq_clock(rq); - arg.dest_cpu = select_fallback_rq(cpu, p); - task_rq_unlock(rq, p, &rf); - - unpin_current_cpu(); - preempt_lazy_enable(); - preempt_enable(); - - sleeping_lock_inc(); - stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg); - sleeping_lock_dec(); - tlb_migrate_finish(p->mm); + sleeping_lock_inc(); + stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg); + sleeping_lock_dec(); + tlb_migrate_finish(p->mm); - return; - } + return; } - unpin_current_cpu(); + +out: preempt_lazy_enable(); preempt_enable(); } EXPORT_SYMBOL(migrate_enable); -#elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) -void migrate_disable(void) +int cpu_nr_pinned(int cpu) { -#ifdef CONFIG_SCHED_DEBUG - struct task_struct *p = current; + struct rq *rq = cpu_rq(cpu); - if (in_atomic() || irqs_disabled()) { - p->migrate_disable_atomic++; - return; - } + return rq->nr_pinned; +} - if (unlikely(p->migrate_disable_atomic)) { - tracing_off(); - WARN_ON_ONCE(1); - } +#elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE) +static void migrate_disabled_sched(struct task_struct *p) +{ +} - p->migrate_disable++; +void migrate_disable(void) +{ +#ifdef CONFIG_SCHED_DEBUG + current->migrate_disable++; #endif barrier(); } @@ -7400,20 +7372,14 @@ void migrate_enable(void) #ifdef CONFIG_SCHED_DEBUG struct task_struct *p = current; - if (in_atomic() || irqs_disabled()) { - p->migrate_disable_atomic--; - return; - } - - if (unlikely(p->migrate_disable_atomic)) { - tracing_off(); - WARN_ON_ONCE(1); - } - WARN_ON_ONCE(p->migrate_disable <= 0); p->migrate_disable--; #endif barrier(); } EXPORT_SYMBOL(migrate_enable); +#else +static void migrate_disabled_sched(struct task_struct *p) +{ +} #endif diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index c90574112bca..78fa5911dd55 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -913,6 +913,10 @@ struct rq { /* Must be inspected within a rcu lock section */ struct cpuidle_state *idle_state; #endif + +#if defined(CONFIG_PREEMPT_RT_BASE) && defined(CONFIG_SMP) + int nr_pinned; +#endif }; static inline int cpu_of(struct rq *rq) diff --git a/lib/smp_processor_id.c b/lib/smp_processor_id.c index b8a8a8db2d75..0c80992aa337 100644 --- a/lib/smp_processor_id.c +++ b/lib/smp_processor_id.c @@ -22,6 +22,9 @@ notrace static unsigned int check_preemption_disabled(const char *what1, * Kernel threads bound to a single CPU can safely use * smp_processor_id(): */ + if (current->migrate_disable) + goto out; + if (current->nr_cpus_allowed == 1) goto out; From patchwork Thu Jan 23 20:39:54 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213261 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 71219C33CAF for ; Thu, 23 Jan 2020 20:40:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 4B38E24691 for ; Thu, 23 Jan 2020 20:40:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729600AbgAWUkq (ORCPT ); Thu, 23 Jan 2020 15:40:46 -0500 Received: from 
mail.kernel.org ([198.145.29.99]:45878 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729339AbgAWUjr (ORCPT ); Thu, 23 Jan 2020 15:39:47 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id EC92524693; Thu, 23 Jan 2020 20:39:46 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1iujGT-000mfd-SX; Thu, 23 Jan 2020 15:39:45 -0500 Message-Id: <20200123203945.766477535@goodmis.org> User-Agent: quilt/0.65 Date: Thu, 23 Jan 2020 15:39:54 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi , Scott Wood Subject: [PATCH RT 24/30] sched: migrate_enable: Use stop_one_cpu_nowait() References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know. ------------------ From: Scott Wood [ Upstream commit 6b39a1fa8c53cae08dc03afdae193b7d3a78a173 ] migrate_enable() can be called with current->state != TASK_RUNNING. Avoid clobbering the existing state by using stop_one_cpu_nowait(). Since we're stopping the current cpu, we know that we won't get past __schedule() until migration_cpu_stop() has run (at least up to the point of migrating us to another cpu). Signed-off-by: Scott Wood Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- include/linux/stop_machine.h | 2 ++ kernel/sched/core.c | 23 +++++++++++++---------- kernel/stop_machine.c | 7 +++++-- 3 files changed, 20 insertions(+), 12 deletions(-) diff --git a/include/linux/stop_machine.h b/include/linux/stop_machine.h index 6d3635c86dbe..82fc686ddd9e 100644 --- a/include/linux/stop_machine.h +++ b/include/linux/stop_machine.h @@ -26,6 +26,8 @@ struct cpu_stop_work { cpu_stop_fn_t fn; void *arg; struct cpu_stop_done *done; + /* Did not run due to disabled stopper; for nowait debug checks */ + bool disabled; }; int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index e97ac751aad2..e465381b464d 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -990,6 +990,7 @@ static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf, struct migration_arg { struct task_struct *task; int dest_cpu; + bool done; }; /* @@ -1025,6 +1026,11 @@ static int migration_cpu_stop(void *data) struct task_struct *p = arg->task; struct rq *rq = this_rq(); struct rq_flags rf; + int dest_cpu = arg->dest_cpu; + + /* We don't look at arg after this point. 
*/ + smp_mb(); + arg->done = true; /* * The original target CPU might have gone down and we might @@ -1047,9 +1053,9 @@ static int migration_cpu_stop(void *data) */ if (task_rq(p) == rq) { if (task_on_rq_queued(p)) - rq = __migrate_task(rq, &rf, p, arg->dest_cpu); + rq = __migrate_task(rq, &rf, p, dest_cpu); else - p->wake_cpu = arg->dest_cpu; + p->wake_cpu = dest_cpu; } rq_unlock(rq, &rf); raw_spin_unlock(&p->pi_lock); @@ -7322,6 +7328,7 @@ void migrate_enable(void) WARN_ON(smp_processor_id() != cpu); if (!is_cpu_allowed(p, cpu)) { struct migration_arg arg = { p }; + struct cpu_stop_work work; struct rq_flags rf; rq = task_rq_lock(p, &rf); @@ -7329,15 +7336,11 @@ void migrate_enable(void) arg.dest_cpu = select_fallback_rq(cpu, p); task_rq_unlock(rq, p, &rf); - preempt_lazy_enable(); - preempt_enable(); - - sleeping_lock_inc(); - stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg); - sleeping_lock_dec(); + stop_one_cpu_nowait(task_cpu(p), migration_cpu_stop, + &arg, &work); tlb_migrate_finish(p->mm); - - return; + __schedule(true); + WARN_ON_ONCE(!arg.done && !work.disabled); } out: diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c index 067cb83f37ea..2d15c0d50625 100644 --- a/kernel/stop_machine.c +++ b/kernel/stop_machine.c @@ -86,8 +86,11 @@ static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work) enabled = stopper->enabled; if (enabled) __cpu_stop_queue_work(stopper, work, &wakeq); - else if (work->done) - cpu_stop_signal_done(work->done); + else { + work->disabled = true; + if (work->done) + cpu_stop_signal_done(work->done); + } raw_spin_unlock_irqrestore(&stopper->lock, flags); wake_up_q(&wakeq); From patchwork Thu Jan 23 20:39:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213262 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1195EC2D0DB for ; Thu, 23 Jan 2020 20:40:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id DB80824691 for ; Thu, 23 Jan 2020 20:40:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729654AbgAWUkq (ORCPT ); Thu, 23 Jan 2020 15:40:46 -0500 Received: from mail.kernel.org ([198.145.29.99]:45834 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729346AbgAWUjr (ORCPT ); Thu, 23 Jan 2020 15:39:47 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 17D402469B; Thu, 23 Jan 2020 20:39:47 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1iujGU-000mg7-14; Thu, 23 Jan 2020 15:39:46 -0500 Message-Id: <20200123203945.906394629@goodmis.org> User-Agent: quilt/0.65 Date: Thu, 23 Jan 2020 15:39:55 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel 
Wagner , Tom Zanussi Subject: [PATCH RT 25/30] Revert "ARM: Initialize split page table locks for vector page" References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know. ------------------ From: Sebastian Andrzej Siewior [ Upstream commit 247074c44d8c3e619dfde6404a52295d8d671d38 ] I'm dropping this patch, with its original description: |ARM: Initialize split page table locks for vector page | |Without this patch, ARM can not use SPLIT_PTLOCK_CPUS if |PREEMPT_RT_FULL=y because vectors_user_mapping() creates a |VM_ALWAYSDUMP mapping of the vector page (address 0xffff0000), but no |ptl->lock has been allocated for the page. An attempt to coredump |that page will result in a kernel NULL pointer dereference when |follow_page() attempts to lock the page. | |The call tree to the NULL pointer dereference is: | | do_notify_resume() | get_signal_to_deliver() | do_coredump() | elf_core_dump() | get_dump_page() | __get_user_pages() | follow_page() | pte_offset_map_lock() <----- a #define | ... | rt_spin_lock() | |The underlying problem is exposed by mm-shrink-the-page-frame-to-rt-size.patch. The patch named mm-shrink-the-page-frame-to-rt-size.patch was dropped from the RT queue once the SPLIT_PTLOCK_CPUS feature (in a slightly different shape) went upstream (somewhere between v3.12 and v3.14). I can see that the patch still allocates a lock which wasn't there before. However, I can't trigger a kernel oops as described in the patch by forcing a coredump. Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- arch/arm/kernel/process.c | 24 ------------------------ 1 file changed, 24 deletions(-) diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c index 8d3c7ce34c24..82ab015bf42b 100644 --- a/arch/arm/kernel/process.c +++ b/arch/arm/kernel/process.c @@ -324,30 +324,6 @@ unsigned long arch_randomize_brk(struct mm_struct *mm) } #ifdef CONFIG_MMU -/* - * CONFIG_SPLIT_PTLOCK_CPUS results in a page->ptl lock. If the lock is not - * initialized by pgtable_page_ctor() then a coredump of the vector page will - * fail.
- */ -static int __init vectors_user_mapping_init_page(void) -{ - struct page *page; - unsigned long addr = 0xffff0000; - pgd_t *pgd; - pud_t *pud; - pmd_t *pmd; - - pgd = pgd_offset_k(addr); - pud = pud_offset(pgd, addr); - pmd = pmd_offset(pud, addr); - page = pmd_page(*(pmd)); - - pgtable_page_ctor(page); - - return 0; -} -late_initcall(vectors_user_mapping_init_page); - #ifdef CONFIG_KUSER_HELPERS /* * The vectors page is always readable from user space for the From patchwork Thu Jan 23 20:39:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Steven Rostedt X-Patchwork-Id: 213266 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-6.8 required=3.0 tests=HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 28398C2D0DB for ; Thu, 23 Jan 2020 20:40:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id E9F60246B3 for ; Thu, 23 Jan 2020 20:40:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729406AbgAWUjt (ORCPT ); Thu, 23 Jan 2020 15:39:49 -0500 Received: from mail.kernel.org ([198.145.29.99]:45974 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729355AbgAWUjs (ORCPT ); Thu, 23 Jan 2020 15:39:48 -0500 Received: from gandalf.local.home (cpe-66-24-58-225.stny.res.rr.com [66.24.58.225]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 5E74724695; Thu, 23 Jan 2020 20:39:47 +0000 (UTC) Received: from rostedt by gandalf.local.home with local (Exim 4.93) (envelope-from ) id 1iujGU-000mh5-AE; Thu, 23 Jan 2020 15:39:46 -0500 Message-Id: <20200123203946.200669575@goodmis.org> User-Agent: quilt/0.65 Date: Thu, 23 Jan 2020 15:39:57 -0500 From: Steven Rostedt To: linux-kernel@vger.kernel.org, linux-rt-users Cc: Thomas Gleixner , Carsten Emde , Sebastian Andrzej Siewior , John Kacur , Julia Cartwright , Daniel Wagner , Tom Zanussi , Dick Hollenbeck Subject: [PATCH RT 27/30] sched/core: migrate_enable() must access takedown_cpu_task on !HOTPLUG_CPU References: <20200123203930.646725253@goodmis.org> MIME-Version: 1.0 Sender: linux-rt-users-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rt-users@vger.kernel.org 4.19.94-rt39-rc2 stable review patch. If anyone has any objections, please let me know. ------------------ From: Sebastian Andrzej Siewior [ Upstream commit a61d1977f692e46bad99a100f264981ba08cb4bd ] The variable takedown_cpu_task is never declared/used on !HOTPLUG_CPU except for migrate_enable(). This leads to a link error. Don't use takedown_cpu_task in !HOTPLUG_CPU. 
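The link failure is the standard undefined-reference pattern for a global that is only defined under a config option. A minimal standalone illustration with invented names (feature.h, CONFIG_FEATURE), not the kernel sources:

	/* feature.h */
	extern int feature_state;	/* the declaration is always visible */

	/* feature.c */
	#ifdef CONFIG_FEATURE
	int feature_state;		/* the definition exists only with the feature */
	#endif

	/* user.c */
	#include "feature.h"
	int read_state(void)
	{
		return feature_state;	/* compiles, but fails to link without CONFIG_FEATURE */
	}

Guarding the use site with the same #ifdef, as the patch below does for takedown_cpu_task, keeps declaration, definition, and use in agreement.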
Reported-by: Dick Hollenbeck Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Steven Rostedt (VMware) --- kernel/cpu.c | 2 ++ kernel/sched/core.c | 2 ++ 2 files changed, 4 insertions(+) diff --git a/kernel/cpu.c b/kernel/cpu.c index 5366c8c69c2f..b9d7ac61d707 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -846,7 +846,9 @@ static int take_cpu_down(void *_param) return 0; } +#ifdef CONFIG_PREEMPT_RT_BASE struct task_struct *takedown_cpu_task; +#endif static int takedown_cpu(unsigned int cpu) { diff --git a/kernel/sched/core.c b/kernel/sched/core.c index e465381b464d..cbd76324babd 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -7314,9 +7314,11 @@ void migrate_enable(void) p->migrate_disable = 0; rq->nr_pinned--; +#ifdef CONFIG_HOTPLUG_CPU if (rq->nr_pinned == 0 && unlikely(!cpu_active(cpu)) && takedown_cpu_task) wake_up_process(takedown_cpu_task); +#endif if (!p->migrate_disable_scheduled) goto out;