From patchwork Tue Aug 22 18:01:49 2023
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 715990
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Rob Clark, MyungJoo Ham, Kyungmin Park, Chanwoo Choi,
 linux-pm@vger.kernel.org (open list:DEVICE FREQUENCY (DEVFREQ)),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 02/11] PM / devfreq: Teach lockdep about locking order
Date: Tue, 22 Aug 2023 11:01:49 -0700
Message-ID: <20230822180208.95556-3-robdclark@gmail.com>
In-Reply-To: <20230822180208.95556-1-robdclark@gmail.com>
References: <20230822180208.95556-1-robdclark@gmail.com>

From: Rob Clark

This will make it easier to catch places doing allocations that can
trigger reclaim under devfreq->lock.

Because devfreq->lock is held over various devfreq_dev_profile callbacks,
there might be some fallout if those callbacks do allocations that can
trigger reclaim, but I've looked through the various callback
implementations and don't see anything obvious. If it does trigger any
lockdep splats, those should be fixed.

Signed-off-by: Rob Clark
---
 drivers/devfreq/devfreq.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
index e5558ec68ce8..81add6064406 100644
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -817,6 +817,12 @@ struct devfreq *devfreq_add_device(struct device *dev,
 	}
 
 	mutex_init(&devfreq->lock);
+
+	/* Teach lockdep about lock ordering wrt. shrinker: */
+	fs_reclaim_acquire(GFP_KERNEL);
+	might_lock(&devfreq->lock);
+	fs_reclaim_release(GFP_KERNEL);
+
 	devfreq->dev.parent = dev;
 	devfreq->dev.class = devfreq_class;
 	devfreq->dev.release = devfreq_dev_release;
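
A note on what the annotation above buys: fs_reclaim_acquire() enters
lockdep's fake fs_reclaim map, so the might_lock() records the ordering
fs_reclaim -> devfreq->lock once at device-add time. Any later GFP_KERNEL
allocation made while devfreq->lock is held records the inverse ordering
and is reported immediately, even if reclaim never actually runs. A sketch
of the class of bug this catches follows; the callback body is hypothetical
and not taken from any in-tree driver:

	/* Hypothetical devfreq_dev_profile callback -- illustrative only. */
	static int example_get_dev_status(struct device *dev,
					  struct devfreq_dev_status *stat)
	{
		/*
		 * The devfreq core calls this with devfreq->lock held.  A
		 * GFP_KERNEL allocation here can recurse into reclaim, i.e.
		 * take fs_reclaim inside devfreq->lock -- the inverse of the
		 * order primed in devfreq_add_device() -- so lockdep flags
		 * it on the first call instead of only under memory pressure.
		 */
		u32 *scratch = kmalloc(sizeof(*scratch), GFP_KERNEL);

		if (!scratch)
			return -ENOMEM;

		/* GFP_NOWAIT, or allocating before the lock is taken, would
		 * avoid the reclaim dependency. */
		stat->busy_time = 0;
		stat->total_time = 0;
		stat->current_frequency = 0;

		kfree(scratch);
		return 0;
	}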

From patchwork Tue Aug 22 18:01:50 2023
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 715989
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Rob Clark, "Rafael J. Wysocki", Pavel Machek, Len Brown,
 Greg Kroah-Hartman,
 linux-pm@vger.kernel.org (open list:HIBERNATION (aka Software Suspend, aka swsusp)),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 03/11] PM / QoS: Fix constraints alloc vs reclaim locking
Date: Tue, 22 Aug 2023 11:01:50 -0700
Message-ID: <20230822180208.95556-4-robdclark@gmail.com>
In-Reply-To: <20230822180208.95556-1-robdclark@gmail.com>
References: <20230822180208.95556-1-robdclark@gmail.com>

From: Rob Clark

In the process of adding lockdep annotation for drm GPU scheduler's
job_run() to detect potential deadlock against shrinker/reclaim, I hit
this lockdep splat:

   ======================================================
   WARNING: possible circular locking dependency detected
   6.2.0-rc8-debug+ #558 Tainted: G        W
   ------------------------------------------------------
   ring0/125 is trying to acquire lock:
   ffffffd6d6ce0f28 (dev_pm_qos_mtx){+.+.}-{3:3}, at: dev_pm_qos_update_request+0x38/0x68

   but task is already holding lock:
   ffffff8087239208 (&gpu->active_lock){+.+.}-{3:3}, at: msm_gpu_submit+0xec/0x178

   which lock already depends on the new lock.
   the existing dependency chain (in reverse order) is:

   -> #4 (&gpu->active_lock){+.+.}-{3:3}:
          __mutex_lock+0xcc/0x3c8
          mutex_lock_nested+0x30/0x44
          msm_gpu_submit+0xec/0x178
          msm_job_run+0x78/0x150
          drm_sched_main+0x290/0x370
          kthread+0xf0/0x100
          ret_from_fork+0x10/0x20

   -> #3 (dma_fence_map){++++}-{0:0}:
          __dma_fence_might_wait+0x74/0xc0
          dma_resv_lockdep+0x1f4/0x2f4
          do_one_initcall+0x104/0x2bc
          kernel_init_freeable+0x344/0x34c
          kernel_init+0x30/0x134
          ret_from_fork+0x10/0x20

   -> #2 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
          fs_reclaim_acquire+0x80/0xa8
          slab_pre_alloc_hook.constprop.0+0x40/0x25c
          __kmem_cache_alloc_node+0x60/0x1cc
          __kmalloc+0xd8/0x100
          topology_parse_cpu_capacity+0x8c/0x178
          get_cpu_for_node+0x88/0xc4
          parse_cluster+0x1b0/0x28c
          parse_cluster+0x8c/0x28c
          init_cpu_topology+0x168/0x188
          smp_prepare_cpus+0x24/0xf8
          kernel_init_freeable+0x18c/0x34c
          kernel_init+0x30/0x134
          ret_from_fork+0x10/0x20

   -> #1 (fs_reclaim){+.+.}-{0:0}:
          __fs_reclaim_acquire+0x3c/0x48
          fs_reclaim_acquire+0x54/0xa8
          slab_pre_alloc_hook.constprop.0+0x40/0x25c
          __kmem_cache_alloc_node+0x60/0x1cc
          kmalloc_trace+0x50/0xa8
          dev_pm_qos_constraints_allocate+0x38/0x100
          __dev_pm_qos_add_request+0xb0/0x1e8
          dev_pm_qos_add_request+0x58/0x80
          dev_pm_qos_expose_latency_limit+0x60/0x13c
          register_cpu+0x12c/0x130
          topology_init+0xac/0xbc
          do_one_initcall+0x104/0x2bc
          kernel_init_freeable+0x344/0x34c
          kernel_init+0x30/0x134
          ret_from_fork+0x10/0x20

   -> #0 (dev_pm_qos_mtx){+.+.}-{3:3}:
          __lock_acquire+0xe00/0x1060
          lock_acquire+0x1e0/0x2f8
          __mutex_lock+0xcc/0x3c8
          mutex_lock_nested+0x30/0x44
          dev_pm_qos_update_request+0x38/0x68
          msm_devfreq_boost+0x40/0x70
          msm_devfreq_active+0xc0/0xf0
          msm_gpu_submit+0x10c/0x178
          msm_job_run+0x78/0x150
          drm_sched_main+0x290/0x370
          kthread+0xf0/0x100
          ret_from_fork+0x10/0x20

   other info that might help us debug this:

   Chain exists of:
     dev_pm_qos_mtx --> dma_fence_map --> &gpu->active_lock

    Possible unsafe locking scenario:

          CPU0                    CPU1
          ----                    ----
     lock(&gpu->active_lock);
                                  lock(dma_fence_map);
                                  lock(&gpu->active_lock);
     lock(dev_pm_qos_mtx);

    *** DEADLOCK ***

   3 locks held by ring0/123:
    #0: ffffff8087251170 (&gpu->lock){+.+.}-{3:3}, at: msm_job_run+0x64/0x150
    #1: ffffffd00b0e57e8 (dma_fence_map){++++}-{0:0}, at: msm_job_run+0x68/0x150
    #2: ffffff8087251208 (&gpu->active_lock){+.+.}-{3:3}, at: msm_gpu_submit+0xec/0x178

   stack backtrace:
   CPU: 6 PID: 123 Comm: ring0 Not tainted 6.2.0-rc8-debug+ #559
   Hardware name: Google Lazor (rev1 - 2) with LTE (DT)
   Call trace:
    dump_backtrace.part.0+0xb4/0xf8
    show_stack+0x20/0x38
    dump_stack_lvl+0x9c/0xd0
    dump_stack+0x18/0x34
    print_circular_bug+0x1b4/0x1f0
    check_noncircular+0x78/0xac
    __lock_acquire+0xe00/0x1060
    lock_acquire+0x1e0/0x2f8
    __mutex_lock+0xcc/0x3c8
    mutex_lock_nested+0x30/0x44
    dev_pm_qos_update_request+0x38/0x68
    msm_devfreq_boost+0x40/0x70
    msm_devfreq_active+0xc0/0xf0
    msm_gpu_submit+0x10c/0x178
    msm_job_run+0x78/0x150
    drm_sched_main+0x290/0x370
    kthread+0xf0/0x100
    ret_from_fork+0x10/0x20

The issue is that dev_pm_qos_mtx is held in the runpm suspend/resume (or
freq change) path, but it is also held across allocations that could
recurse into the shrinker.

Solve this by changing dev_pm_qos_constraints_allocate() into a function
that can be called unconditionally, before the device qos object is needed
and before acquiring dev_pm_qos_mtx. This way the allocations can be done
without holding the mutex. In the case that we raced with another thread
to allocate the qos object, detect this *after* acquiring dev_pm_qos_mtx
and simply free the redundant allocations.
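
The fix follows the common speculative-allocation shape: allocate
unconditionally with no locks held, install the object under the lock, and
free it again if another thread got there first. A minimal standalone
sketch of that pattern, with hypothetical names (the real code is in the
diff that follows):

	struct obj { int val; };

	static DEFINE_MUTEX(big_lock);
	static struct obj *shared;	/* assigned only under big_lock */

	static int ensure_shared_obj(void)
	{
		/*
		 * No locks held yet, so a GFP_KERNEL allocation -- which may
		 * recurse into reclaim -- is safe even if big_lock is ever
		 * taken in the reclaim path.
		 */
		struct obj *new = kzalloc(sizeof(*new), GFP_KERNEL);
		int ret;

		mutex_lock(&big_lock);
		if (!shared)
			shared = new;	/* install our allocation */
		else
			kfree(new);	/* raced, or already set: discard */
		ret = shared ? 0 : -ENOMEM;	/* new may have been NULL */
		mutex_unlock(&big_lock);

		return ret;
	}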
Suggested-by: Rafael J. Wysocki
Signed-off-by: Rob Clark
---
 drivers/base/power/qos.c | 76 +++++++++++++++++++++++++++++-----------
 1 file changed, 56 insertions(+), 20 deletions(-)

diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c
index 8e93167f1783..7e95760d16dc 100644
--- a/drivers/base/power/qos.c
+++ b/drivers/base/power/qos.c
@@ -185,27 +185,33 @@ static int apply_constraint(struct dev_pm_qos_request *req,
 }
 
 /*
- * dev_pm_qos_constraints_allocate
+ * dev_pm_qos_constraints_allocate: Allocate and initialize qos constraints
  * @dev: device to allocate data for
  *
- * Called at the first call to add_request, for constraint data allocation
- * Must be called with the dev_pm_qos_mtx mutex held
+ * Called to allocate constraints before the dev_pm_qos_mtx mutex is held.
+ * Should be matched with a call to dev_pm_qos_constraints_set() once
+ * dev_pm_qos_mtx is held.
  */
-static int dev_pm_qos_constraints_allocate(struct device *dev)
+static struct dev_pm_qos *dev_pm_qos_constraints_allocate(struct device *dev)
 {
 	struct dev_pm_qos *qos;
 	struct pm_qos_constraints *c;
 	struct blocking_notifier_head *n;
 
-	qos = kzalloc(sizeof(*qos), GFP_KERNEL);
+	/*
+	 * If constraints are already allocated, we can skip speculatively
+	 * allocating a new one, as we don't have to worry about qos
+	 * transitioning from non-null to null.  The constraints are only
+	 * freed on device removal.
+	 */
+	if (dev->power.qos)
+		return NULL;
+
+	qos = kzalloc(sizeof(*qos) + 3 * sizeof(*n), GFP_KERNEL);
 	if (!qos)
-		return -ENOMEM;
+		return NULL;
 
-	n = kzalloc(3 * sizeof(*n), GFP_KERNEL);
-	if (!n) {
-		kfree(qos);
-		return -ENOMEM;
-	}
+	n = (struct blocking_notifier_head *)(qos + 1);
 
 	c = &qos->resume_latency;
 	plist_head_init(&c->list);
@@ -227,11 +233,29 @@ static int dev_pm_qos_constraints_allocate(struct device *dev)
 
 	INIT_LIST_HEAD(&qos->flags.list);
 
+	return qos;
+}
+
+/*
+ * dev_pm_qos_constraints_set: Ensure dev->power.qos is set
+ *
+ * If dev->power.qos is already set, free the newly allocated qos constraints.
+ * Otherwise set dev->power.qos.  Must be called with dev_pm_qos_mtx held.
+ *
+ * This split of unsynchronized allocation and synchronized set moves the
+ * allocation out from under dev_pm_qos_mtx, so that lockdep does not get
+ * angry about drivers which use dev_pm_qos in paths related to shrinker/reclaim.
+ */
+static void dev_pm_qos_constraints_set(struct device *dev, struct dev_pm_qos *qos)
+{
+	if (dev->power.qos) {
+		kfree(qos);
+		return;
+	}
+
 	spin_lock_irq(&dev->power.lock);
 	dev->power.qos = qos;
 	spin_unlock_irq(&dev->power.lock);
-
-	return 0;
 }
 
 static void __dev_pm_qos_hide_latency_limit(struct device *dev);
@@ -309,7 +333,6 @@ void dev_pm_qos_constraints_destroy(struct device *dev)
 	dev->power.qos = ERR_PTR(-ENODEV);
 	spin_unlock_irq(&dev->power.lock);
 
-	kfree(qos->resume_latency.notifiers);
 	kfree(qos);
 
 out:
@@ -341,7 +364,7 @@ static int __dev_pm_qos_add_request(struct device *dev,
 	if (IS_ERR(dev->power.qos))
 		ret = -ENODEV;
 	else if (!dev->power.qos)
-		ret = dev_pm_qos_constraints_allocate(dev);
+		ret = -ENOMEM;
 
 	trace_dev_pm_qos_add_request(dev_name(dev), type, value);
 	if (ret)
@@ -388,9 +411,11 @@ static int __dev_pm_qos_add_request(struct device *dev,
 int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req,
 			   enum dev_pm_qos_req_type type, s32 value)
 {
+	struct dev_pm_qos *qos = dev_pm_qos_constraints_allocate(dev);
 	int ret;
 
 	mutex_lock(&dev_pm_qos_mtx);
+	dev_pm_qos_constraints_set(dev, qos);
 	ret = __dev_pm_qos_add_request(dev, req, type, value);
 	mutex_unlock(&dev_pm_qos_mtx);
 	return ret;
@@ -535,14 +560,15 @@ EXPORT_SYMBOL_GPL(dev_pm_qos_remove_request);
 int dev_pm_qos_add_notifier(struct device *dev, struct notifier_block *notifier,
 			    enum dev_pm_qos_req_type type)
 {
+	struct dev_pm_qos *qos = dev_pm_qos_constraints_allocate(dev);
 	int ret = 0;
 
 	mutex_lock(&dev_pm_qos_mtx);
 
+	dev_pm_qos_constraints_set(dev, qos);
+
 	if (IS_ERR(dev->power.qos))
 		ret = -ENODEV;
-	else if (!dev->power.qos)
-		ret = dev_pm_qos_constraints_allocate(dev);
 
 	if (ret)
 		goto unlock;
@@ -903,12 +929,22 @@ s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev)
  */
 int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
 {
-	int ret;
+	struct dev_pm_qos *qos = dev_pm_qos_constraints_allocate(dev);
+	int ret = 0;
 
 	mutex_lock(&dev_pm_qos_mtx);
 
-	if (IS_ERR_OR_NULL(dev->power.qos)
-	    || !dev->power.qos->latency_tolerance_req) {
+	dev_pm_qos_constraints_set(dev, qos);
+
+	if (IS_ERR(dev->power.qos))
+		ret = -ENODEV;
+	else if (!dev->power.qos)
+		ret = -ENOMEM;
+
+	if (ret)
+		goto out;
+
+	if (!dev->power.qos->latency_tolerance_req) {
 		struct dev_pm_qos_request *req;
 
 		if (val < 0) {

From patchwork Tue Aug 22 18:01:52 2023
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 715988
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Rob Clark, "Rafael J. Wysocki", Pavel Machek, Len Brown,
 Greg Kroah-Hartman,
 linux-pm@vger.kernel.org (open list:HIBERNATION (aka Software Suspend, aka swsusp)),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 05/11] PM / QoS: Teach lockdep about dev_pm_qos_mtx locking order
Date: Tue, 22 Aug 2023 11:01:52 -0700
Message-ID: <20230822180208.95556-6-robdclark@gmail.com>
In-Reply-To: <20230822180208.95556-1-robdclark@gmail.com>
References: <20230822180208.95556-1-robdclark@gmail.com>

From: Rob Clark

Annotate dev_pm_qos_mtx to teach lockdep to scream about allocations
that could trigger reclaim under dev_pm_qos_mtx.

Signed-off-by: Rob Clark
---
 drivers/base/power/qos.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c
index 09834f3354d7..2018c805a6f1 100644
--- a/drivers/base/power/qos.c
+++ b/drivers/base/power/qos.c
@@ -1017,3 +1017,14 @@ void dev_pm_qos_hide_latency_tolerance(struct device *dev)
 	pm_runtime_put(dev);
 }
 EXPORT_SYMBOL_GPL(dev_pm_qos_hide_latency_tolerance);
+
+static int __init dev_pm_qos_init(void)
+{
+	/* Teach lockdep about lock ordering wrt. shrinker: */
+	fs_reclaim_acquire(GFP_KERNEL);
+	might_lock(&dev_pm_qos_mtx);
+	fs_reclaim_release(GFP_KERNEL);
+
+	return 0;
+}
+early_initcall(dev_pm_qos_init);

From patchwork Tue Aug 22 18:01:54 2023
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 715987
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
 Rob Clark, Georgi Djakov,
 linux-pm@vger.kernel.org (open list:INTERCONNECT API),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 07/11] interconnect: Teach lockdep about icc_bw_lock order
Date: Tue, 22 Aug 2023 11:01:54 -0700
Message-ID: <20230822180208.95556-8-robdclark@gmail.com>
In-Reply-To: <20230822180208.95556-1-robdclark@gmail.com>
References: <20230822180208.95556-1-robdclark@gmail.com>

From: Rob Clark

Teach lockdep that icc_bw_lock is needed in code paths that could
deadlock if they trigger reclaim.

Signed-off-by: Rob Clark
---
 drivers/interconnect/core.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index e15a92a79df1..1afbc4f7c6e7 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -1041,13 +1041,21 @@ void icc_sync_state(struct device *dev)
 			}
 		}
 	}
+	mutex_unlock(&icc_bw_lock);
 	mutex_unlock(&icc_lock);
 }
 EXPORT_SYMBOL_GPL(icc_sync_state);
 
 static int __init icc_init(void)
 {
-	struct device_node *root = of_find_node_by_path("/");
+	struct device_node *root;
+
+	/* Teach lockdep about lock ordering wrt. shrinker: */
+	fs_reclaim_acquire(GFP_KERNEL);
+	might_lock(&icc_bw_lock);
+	fs_reclaim_release(GFP_KERNEL);
+
+	root = of_find_node_by_path("/");
 
 	providers_count = of_count_icc_providers(root);
 	of_node_put(root);
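
The same three-line priming recipe now appears in devfreq (patch 02),
dev_pm_qos (patch 05), and interconnect (above). If it spreads further it
could arguably be factored into a small helper along these lines -- a
hypothetical sketch, not part of this series:

	/*
	 * Record in lockdep that 'lock' may be taken inside the reclaim
	 * path, so that any GFP_KERNEL allocation made while holding it
	 * is reported as a potential inversion.
	 */
	static inline void lockdep_prime_reclaim_order(struct mutex *lock)
	{
		fs_reclaim_acquire(GFP_KERNEL);	/* enter the fake fs_reclaim map */
		might_lock(lock);		/* records fs_reclaim -> lock */
		fs_reclaim_release(GFP_KERNEL);
	}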