From patchwork Fri Jun 19 14:31:47 2020
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 223968
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Tejun Heo, Andy Newell,
 Jens Axboe, Sasha Levin
Subject: [PATCH 5.7 189/376] iocost: don't let vrate run wild while there's
 no saturation signal
Date: Fri, 19 Jun 2020 16:31:47 +0200
Message-Id: <20200619141719.296173569@linuxfoundation.org>
In-Reply-To: <20200619141710.350494719@linuxfoundation.org>
References: <20200619141710.350494719@linuxfoundation.org>

From: Tejun Heo

[ Upstream commit 81ca627a933063fa63a6d4c66425de822a2ab7f5 ]

When the QoS targets are met and nothing is being throttled, there's no
way to tell how saturated the underlying device is - it could be almost
entirely idle, at the cusp of saturation or anywhere in between. Given
that there's no information, it's best to keep vrate as-is in this
state. Before 7cd806a9a953 ("iocost: improve nr_lagging handling"), this
was the case - if the device wasn't missing QoS targets and nothing was
being throttled, busy_level was reset to zero.
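As a rough illustration of the feedback loop involved - a minimal,
self-contained sketch rather than the kernel implementation, where
adjust_vrate(), VRATE_ADJ_PCT and the 2% step are invented for the
example:

#include <stdint.h>

#define VRATE_ADJ_PCT	2	/* illustrative per-period step, in percent */

/*
 * One control period: a positive busy_level (saturation signal) slows
 * vrate down, a negative one (spare-capacity signal) speeds it up, and
 * zero leaves it alone. The result is clamped to [vrate_min, vrate_max].
 */
static uint64_t adjust_vrate(uint64_t vrate, int busy_level,
			     uint64_t vrate_min, uint64_t vrate_max)
{
	if (busy_level > 0)
		vrate = vrate * (100 - VRATE_ADJ_PCT) / 100;
	else if (busy_level < 0)
		vrate = vrate * (100 + VRATE_ADJ_PCT) / 100;

	if (vrate < vrate_min)
		vrate = vrate_min;
	if (vrate > vrate_max)
		vrate = vrate_max;
	return vrate;
}

A stale negative busy_level keeps feeding a step like this every period,
which is the runaway described below.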
While fixing nr_lagging handling, 7cd806a9a953 ("iocost: improve
nr_lagging handling") broke this. Now, while the device is hitting QoS
targets and nothing is being throttled, vrate keeps getting adjusted
according to the existing busy_level. This led to vrate climbing until
it hit max whenever there was an IO issuer with limited request
concurrency and vrate started low: vrate gets adjusted upwards until the
issuer can issue IOs without being throttled, and from then on QoS
targets keep getting met, nothing on the system needs throttling, and
vrate keeps getting increased due to the existing busy_level.

This patch makes the following changes to the busy_level logic.

* Reset busy_level if nr_shortages is zero to avoid the above scenario.

* Make non-zero nr_lagging block lowering busy_level, but still clear
  positive busy_level if there's a clear non-saturation signal - QoS
  targets are met and nr_shortages is non-zero. nr_lagging's role is
  preventing adjusting vrate upwards while there are long-running
  commands; it shouldn't keep busy_level positive while there's a clear
  non-saturation signal.

* Restructure code for clarity and add comments.

Signed-off-by: Tejun Heo
Reported-by: Andy Newell
Fixes: 7cd806a9a953 ("iocost: improve nr_lagging handling")
Signed-off-by: Jens Axboe
Signed-off-by: Sasha Levin
---
 block/blk-iocost.c | 28 ++++++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index 7c1fe605d0d6..ef193389fffe 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -1543,19 +1543,39 @@ skip_surplus_transfers:
 	if (rq_wait_pct > RQ_WAIT_BUSY_PCT ||
 	    missed_ppm[READ] > ppm_rthr ||
 	    missed_ppm[WRITE] > ppm_wthr) {
+		/* clearly missing QoS targets, slow down vrate */
 		ioc->busy_level = max(ioc->busy_level, 0);
 		ioc->busy_level++;
 	} else if (rq_wait_pct <= RQ_WAIT_BUSY_PCT * UNBUSY_THR_PCT / 100 &&
 		   missed_ppm[READ] <= ppm_rthr * UNBUSY_THR_PCT / 100 &&
 		   missed_ppm[WRITE] <= ppm_wthr * UNBUSY_THR_PCT / 100) {
-		/* take action iff there is contention */
-		if (nr_shortages && !nr_lagging) {
+		/* QoS targets are being met with >25% margin */
+		if (nr_shortages) {
+			/*
+			 * We're throttling while the device has spare
+			 * capacity. If vrate was being slowed down, stop.
+			 */
 			ioc->busy_level = min(ioc->busy_level, 0);
-			/* redistribute surpluses first */
-			if (!nr_surpluses)
+
+			/*
+			 * If there are IOs spanning multiple periods, wait
+			 * them out before pushing the device harder. If
+			 * there are surpluses, let redistribution work it
+			 * out first.
+			 */
+			if (!nr_lagging && !nr_surpluses)
 				ioc->busy_level--;
+		} else {
+			/*
+			 * Nobody is being throttled and the users aren't
+			 * issuing enough IOs to saturate the device. We
+			 * simply don't know how close the device is to
+			 * saturation. Coast.
+			 */
+			ioc->busy_level = 0;
+		}
 	} else {
+		/* inside the hysterisis margin, we're good */
 		ioc->busy_level = 0;
 	}
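To make the effect concrete, here is a toy, self-contained simulation -
not kernel code, with every name and number invented - of a run of
periods that meet QoS targets with nothing throttled (nr_shortages == 0,
no nr_lagging). The old logic leaves the stale negative busy_level in
place and vrate climbs to its cap; the new logic resets busy_level and
vrate coasts.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const uint64_t vrate_max = 1000;	/* arbitrary cap */
	uint64_t vrate_old = 100, vrate_new = 100;
	int busy_old = -3;	/* stale "speed up" signal from earlier periods */
	int busy_new = -3;

	for (int period = 0; period < 200; period++) {
		/* old: no saturation signal, but busy_level is left alone */
		if (busy_old < 0 && vrate_old < vrate_max)
			vrate_old = vrate_old * 102 / 100;

		/* new: nr_shortages == 0, so busy_level is reset - coast */
		busy_new = 0;
		if (busy_new < 0 && vrate_new < vrate_max)
			vrate_new = vrate_new * 102 / 100;
	}

	printf("old: vrate ends at %llu, new: vrate ends at %llu\n",
	       (unsigned long long)vrate_old, (unsigned long long)vrate_new);
	return 0;
}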