From patchwork Tue Jun 25 05:12:43 2019
X-Patchwork-Submitter: Paolo Valente
X-Patchwork-Id: 167672
From: Paolo Valente
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, ulf.hansson@linaro.org, linus.walleij@linaro.org, bfq-iosched@googlegroups.com, oleksandr@natalenko.name, bottura.nicola95@gmail.com, srivatsa@csail.mit.edu, Paolo Valente
Subject: [PATCH BUGFIX IMPROVEMENT V2 1/7] block, bfq: reset inject limit when think-time state changes
Date: Tue, 25 Jun 2019 07:12:43 +0200
Message-Id: <20190625051249.39265-2-paolo.valente@linaro.org>
In-Reply-To: <20190625051249.39265-1-paolo.valente@linaro.org>
References: <20190625051249.39265-1-paolo.valente@linaro.org>

Until the base value of the request service times gets finally computed for a bfq_queue, the inject limit does depend on the think-time state (short|long). The limit must be 0 or 1 if the think time is deemed, respectively, as short or long. However, such a check and possible limit update are performed only periodically, once per second. So, to make the injection mechanism much more reactive, this commit also performs the update every time the think-time state changes.

In addition, in the following special case, this commit lets the inject limit of a bfq_queue bfqq remain equal to 1 even if bfqq's think time is short: bfqq's I/O is synchronized with that of some other queue, i.e., bfqq may receive new I/O only after the I/O of the other queue is completed. Keeping the inject limit to 1 allows the blocking I/O to be served while bfqq is in service. And this is very convenient both for bfqq and for the total throughput, as explained in detail in the comments in bfq_update_has_short_ttime().

Reported-by: Srivatsa S. Bhat (VMware)
Tested-by: Srivatsa S. Bhat (VMware)
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 219 ++++++++++++++++++++++++++++++--------------
1 file changed, 151 insertions(+), 68 deletions(-)
-- 2.20.1

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 44c6bbcd7720..9bc10198ddff 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -1735,6 +1735,72 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd, false, BFQQE_PREEMPTED); } +static void bfq_reset_inject_limit(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + /* invalidate baseline total service time */ + bfqq->last_serv_time_ns = 0; + + /* + * Reset pointer in case we are waiting for + * some request completion. + */ + bfqd->waited_rq = NULL; + + /* + * If bfqq has a short think time, then start by setting the + * inject limit to 0 prudentially, because the service time of + * an injected I/O request may be higher than the think time + * of bfqq, and therefore, if one request was injected when + * bfqq remains empty, this injected request might delay the + * service of the next I/O request for bfqq significantly. In + * case bfqq can actually tolerate some injection, then the + * adaptive update will however raise the limit soon. This + * lucky circumstance holds exactly because bfqq has a short + * think time, and thus, after remaining empty, is likely to + * get new I/O enqueued---and then completed---before being + * expired.
This is the very pattern that gives the + * limit-update algorithm the chance to measure the effect of + * injection on request service times, and then to update the + * limit accordingly. + * + * However, in the following special case, the inject limit is + * left to 1 even if the think time is short: bfqq's I/O is + * synchronized with that of some other queue, i.e., bfqq may + * receive new I/O only after the I/O of the other queue is + * completed. Keeping the inject limit to 1 allows the + * blocking I/O to be served while bfqq is in service. And + * this is very convenient both for bfqq and for overall + * throughput, as explained in detail in the comments in + * bfq_update_has_short_ttime(). + * + * On the opposite end, if bfqq has a long think time, then + * start directly by 1, because: + * a) on the bright side, keeping at most one request in + * service in the drive is unlikely to cause any harm to the + * latency of bfqq's requests, as the service time of a single + * request is likely to be lower than the think time of bfqq; + * b) on the downside, after becoming empty, bfqq is likely to + * expire before getting its next request. With this request + * arrival pattern, it is very hard to sample total service + * times and update the inject limit accordingly (see comments + * on bfq_update_inject_limit()). So the limit is likely to be + * never, or at least seldom, updated. As a consequence, by + * setting the limit to 1, we avoid that no injection ever + * occurs with bfqq. On the downside, this proactive step + * further reduces chances to actually compute the baseline + * total service time. Thus it reduces chances to execute the + * limit-update algorithm and possibly raise the limit to more + * than 1. + */ + if (bfq_bfqq_has_short_ttime(bfqq)) + bfqq->inject_limit = 0; + else + bfqq->inject_limit = 1; + + bfqq->decrease_time_jif = jiffies; +} + static void bfq_add_request(struct request *rq) { struct bfq_queue *bfqq = RQ_BFQQ(rq); @@ -1755,71 +1821,8 @@ static void bfq_add_request(struct request *rq) * bfq_update_inject_limit(). */ if (time_is_before_eq_jiffies(bfqq->decrease_time_jif + - msecs_to_jiffies(1000))) { - /* invalidate baseline total service time */ - bfqq->last_serv_time_ns = 0; - - /* - * Reset pointer in case we are waiting for - * some request completion. - */ - bfqd->waited_rq = NULL; - - /* - * If bfqq has a short think time, then start - * by setting the inject limit to 0 - * prudentially, because the service time of - * an injected I/O request may be higher than - * the think time of bfqq, and therefore, if - * one request was injected when bfqq remains - * empty, this injected request might delay - * the service of the next I/O request for - * bfqq significantly. In case bfqq can - * actually tolerate some injection, then the - * adaptive update will however raise the - * limit soon. This lucky circumstance holds - * exactly because bfqq has a short think - * time, and thus, after remaining empty, is - * likely to get new I/O enqueued---and then - * completed---before being expired. This is - * the very pattern that gives the - * limit-update algorithm the chance to - * measure the effect of injection on request - * service times, and then to update the limit - * accordingly. 
- * - * On the opposite end, if bfqq has a long - * think time, then start directly by 1, - * because: - * a) on the bright side, keeping at most one - * request in service in the drive is unlikely - * to cause any harm to the latency of bfqq's - * requests, as the service time of a single - * request is likely to be lower than the - * think time of bfqq; - * b) on the downside, after becoming empty, - * bfqq is likely to expire before getting its - * next request. With this request arrival - * pattern, it is very hard to sample total - * service times and update the inject limit - * accordingly (see comments on - * bfq_update_inject_limit()). So the limit is - * likely to be never, or at least seldom, - * updated. As a consequence, by setting the - * limit to 1, we avoid that no injection ever - * occurs with bfqq. On the downside, this - * proactive step further reduces chances to - * actually compute the baseline total service - * time. Thus it reduces chances to execute the - * limit-update algorithm and possibly raise the - * limit to more than 1. - */ - if (bfq_bfqq_has_short_ttime(bfqq)) - bfqq->inject_limit = 0; - else - bfqq->inject_limit = 1; - bfqq->decrease_time_jif = jiffies; - } + msecs_to_jiffies(1000))) + bfq_reset_inject_limit(bfqd, bfqq); /* * The following conditions must hold to setup a new @@ -4855,7 +4858,7 @@ static void bfq_update_has_short_ttime(struct bfq_data *bfqd, struct bfq_queue *bfqq, struct bfq_io_cq *bic) { - bool has_short_ttime = true; + bool has_short_ttime = true, state_changed; /* * No need to update has_short_ttime if bfqq is async or in @@ -4880,13 +4883,93 @@ static void bfq_update_has_short_ttime(struct bfq_data *bfqd, bfqq->ttime.ttime_mean > bfqd->bfq_slice_idle)) has_short_ttime = false; - bfq_log_bfqq(bfqd, bfqq, "update_has_short_ttime: has_short_ttime %d", - has_short_ttime); + state_changed = has_short_ttime != bfq_bfqq_has_short_ttime(bfqq); if (has_short_ttime) bfq_mark_bfqq_has_short_ttime(bfqq); else bfq_clear_bfqq_has_short_ttime(bfqq); + + /* + * Until the base value for the total service time gets + * finally computed for bfqq, the inject limit does depend on + * the think-time state (short|long). In particular, the limit + * is 0 or 1 if the think time is deemed, respectively, as + * short or long (details in the comments in + * bfq_update_inject_limit()). Accordingly, the next + * instructions reset the inject limit if the think-time state + * has changed and the above base value is still to be + * computed. + * + * However, the reset is performed only if more than 100 ms + * have elapsed since the last update of the inject limit, or + * (inclusive) if the change is from short to long think + * time. The reason for this waiting is as follows. + * + * bfqq may have a long think time because of a + * synchronization with some other queue, i.e., because the + * I/O of some other queue may need to be completed for bfqq + * to receive new I/O. This happens, e.g., if bfqq is + * associated with a process that does some sync. A sync + * generates extra blocking I/O, which must be completed + * before the process associated with bfqq can go on with its + * I/O. + * + * If such a synchronization is actually in place, then, + * without injection on bfqq, the blocking I/O cannot happen + * to be served while bfqq is in service. As a consequence, if + * bfqq is granted I/O-dispatch-plugging, then bfqq remains + * empty, and no I/O is dispatched, until the idle timeout + * fires.
This is likely to result in lower bandwidth and + * higher latencies for bfqq, and in a severe loss of total + * throughput. + * + * On the opposite end, a non-zero inject limit may allow the + * I/O that blocks bfqq to be executed soon, and therefore + * bfqq to receive new I/O soon. But, if this actually + * happens, then the next think-time sample for bfqq may be + * very low. This in turn may cause bfqq's think time to be + * deemed short. Without the 100 ms barrier, this new state + * change would cause the body of the next if to be executed + * immediately. But this would set to 0 the inject + * limit. Without injection, the blocking I/O would cause the + * think time of bfqq to become long again, and therefore the + * inject limit to be raised again, and so on. The only effect + * of such a steady oscillation between the two think-time + * states would be to prevent effective injection on bfqq. + * + * In contrast, if the inject limit is not reset during such a + * long time interval as 100 ms, then the number of short + * think time samples can grow significantly before the reset + * is allowed. As a consequence, the think time state can + * become stable before the reset. There will be no state + * change when the 100 ms elapse, and therefore no reset of + * the inject limit. The inject limit remains steadily equal + * to 1 both during and after the 100 ms. So injection can be + * performed at all times, and throughput gets boosted. + * + * An inject limit equal to 1 is however in conflict, in + * general, with the fact that the think time of bfqq is + * short, because injection may be likely to delay bfqq's I/O + * (as explained in the comments in + * bfq_update_inject_limit()). But this does not happen in + * this special case, because bfqq's low think time is due to + * an effective handling of a synchronization, through + * injection. In this special case, bfqq's I/O does not get + * delayed by injection; on the contrary, bfqq's I/O is + * brought forward, because it is not blocked for + * milliseconds. + * + * In addition, during the 100 ms, the base value for the + * total service time is likely to get finally computed, + * freeing the inject limit from its relation with the think + * time. 
+ */ if (state_changed && bfqq->last_serv_time_ns == 0 && (time_is_before_eq_jiffies(bfqq->decrease_time_jif + + msecs_to_jiffies(100)) || + !has_short_ttime)) + bfq_reset_inject_limit(bfqd, bfqq); } /*
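To make the new reset policy in patch 1/7 concrete, here is a minimal userspace C model. It is a sketch under assumptions, not BFQ code: the names (queue_model, update_think_time_state) are invented, jiffies are replaced by a plain millisecond clock, and only two rules from the patch are mirrored: the limit restarts at 0 for a short think time and at 1 for a long one, and a reset toward the short state is skipped until 100 ms have passed since the last reset.

#include <stdbool.h>
#include <stdio.h>

struct queue_model {
    bool has_short_ttime;      /* current think-time state */
    unsigned int inject_limit; /* 0 or 1 until a base time exists */
    long last_reset_ms;        /* when the limit was last reset */
    long base_serv_time_ns;    /* 0 = base value not yet computed */
};

static void reset_inject_limit(struct queue_model *q, long now_ms)
{
    q->base_serv_time_ns = 0; /* invalidate the baseline */
    q->inject_limit = q->has_short_ttime ? 0 : 1;
    q->last_reset_ms = now_ms;
}

static void update_think_time_state(struct queue_model *q, bool short_now,
                                    long now_ms)
{
    bool changed = short_now != q->has_short_ttime;

    q->has_short_ttime = short_now;
    /*
     * Mirror of the new condition: reset only while the base service
     * time is unknown, and only if the 100 ms barrier has elapsed or
     * the state changed toward a long think time.
     */
    if (changed && q->base_serv_time_ns == 0 &&
        (now_ms - q->last_reset_ms >= 100 || !short_now))
        reset_inject_limit(q, now_ms);
}

int main(void)
{
    struct queue_model q = { .has_short_ttime = true };

    update_think_time_state(&q, false, 10);  /* short->long: limit 1 */
    update_think_time_state(&q, true, 50);   /* flip back <100 ms: kept */
    printf("limit after quick flip back: %u\n", q.inject_limit);
    update_think_time_state(&q, false, 60);  /* long again: limit 1 */
    update_think_time_state(&q, true, 200);  /* barrier elapsed: limit 0 */
    printf("limit after barrier elapsed: %u\n", q.inject_limit);
    return 0;
}

The quick long-to-short flip leaves the limit at 1, which is exactly the damping of the short/long oscillation that the long comment in bfq_update_has_short_ttime() describes.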
From patchwork Tue Jun 25 05:12:44 2019
X-Patchwork-Submitter: Paolo Valente
X-Patchwork-Id: 167671
From: Paolo Valente
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, ulf.hansson@linaro.org, linus.walleij@linaro.org, bfq-iosched@googlegroups.com, oleksandr@natalenko.name, bottura.nicola95@gmail.com, srivatsa@csail.mit.edu, Paolo Valente
Subject: [PATCH BUGFIX IMPROVEMENT V2 2/7] block, bfq: fix rq_in_driver check in bfq_update_inject_limit
Date: Tue, 25 Jun 2019 07:12:44 +0200
Message-Id: <20190625051249.39265-3-paolo.valente@linaro.org>
In-Reply-To: <20190625051249.39265-1-paolo.valente@linaro.org>
References: <20190625051249.39265-1-paolo.valente@linaro.org>

One of the cases where the parameters for injection may be updated is when there are no more in-flight I/O requests. The number of in-flight requests is stored in the field bfqd->rq_in_driver of the descriptor bfqd of the device. So, the controlled condition is bfqd->rq_in_driver == 0. Unfortunately, this is wrong, because the instruction that checks this condition is in the code path that handles the completion of a request, and, in particular, the instruction is executed before bfqd->rq_in_driver is decremented in that code path. This commit fixes the issue by simply replacing 0 with 1 in the comparison.

Reported-by: Srivatsa S. Bhat (VMware)
Tested-by: Srivatsa S. Bhat (VMware)
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
-- 2.20.1

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 9bc10198ddff..05041f84b8da 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -5481,8 +5481,14 @@ static void bfq_update_inject_limit(struct bfq_data *bfqd, * total service time, and there seem to be the right * conditions to do it, or we can lower the last base value * computed. + * + * NOTE: (bfqd->rq_in_driver == 1) means that there is no I/O + * request in flight, because this function is in the code + * path that handles the completion of a request of bfqq, and, + * in particular, this function is executed before + * bfqd->rq_in_driver is decremented in such a code path.
*/ - if ((bfqq->last_serv_time_ns == 0 && bfqd->rq_in_driver == 0) || + if ((bfqq->last_serv_time_ns == 0 && bfqd->rq_in_driver == 1) || tot_time_ns < bfqq->last_serv_time_ns) { bfqq->last_serv_time_ns = tot_time_ns; /*
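The off-by-one fixed in patch 2/7 is purely an ordering artifact, easy to reproduce in a few lines of plain C. The toy model below uses invented names; it only mirrors the call order in the completion path, where the limit-update hook runs before the in-flight counter is decremented, so an otherwise idle drive shows up as a count of 1 rather than 0.

#include <assert.h>
#include <stdio.h>

static unsigned int rq_in_driver; /* in-flight request counter */

static void update_inject_limit(void)
{
    /* the request being completed is still counted at this point */
    if (rq_in_driver == 1)
        printf("no other request in flight: baseline can be sampled\n");
}

static void complete_request(void)
{
    update_inject_limit(); /* check runs first... */
    rq_in_driver--;        /* ...the decrement only happens after */
}

int main(void)
{
    rq_in_driver = 1; /* one request was dispatched */
    complete_request();
    assert(rq_in_driver == 0);
    return 0;
}

A check against 0 at the marked point can never fire on the completion of the last request, which is why the patch compares against 1.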
From patchwork Tue Jun 25 05:12:45 2019
X-Patchwork-Submitter: Paolo Valente
X-Patchwork-Id: 167670
From: Paolo Valente
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, ulf.hansson@linaro.org, linus.walleij@linaro.org, bfq-iosched@googlegroups.com, oleksandr@natalenko.name, bottura.nicola95@gmail.com, srivatsa@csail.mit.edu, Paolo Valente
Subject: [PATCH BUGFIX IMPROVEMENT V2 3/7] block, bfq: update base request service times when possible
Date: Tue, 25 Jun 2019 07:12:45 +0200
Message-Id: <20190625051249.39265-4-paolo.valente@linaro.org>
In-Reply-To: <20190625051249.39265-1-paolo.valente@linaro.org>
References: <20190625051249.39265-1-paolo.valente@linaro.org>

I/O injection gets reduced if it increases the request service times of the victim queue beyond a certain threshold. The threshold, in turn, is computed as a function of the base service time enjoyed by the queue when it undergoes no injection. As a consequence, for injection to work properly, this base value has to be accurate. In this respect, the value may vary over time. For example, it varies if the size or the spatial locality of the I/O requests in the queue changes. It is then important to update this value whenever possible. This commit performs this update.

Reported-by: Srivatsa S. Bhat (VMware)
Tested-by: Srivatsa S. Bhat (VMware)
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
-- 2.20.1

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 05041f84b8da..62442083b147 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -5496,7 +5496,18 @@ static void bfq_update_inject_limit(struct bfq_data *bfqd, * start trying injection. */ bfqq->inject_limit = max_t(unsigned int, 1, old_limit); - } + } else if (!bfqd->rqs_injected && bfqd->rq_in_driver == 1) + /* + * No I/O injected and no request still in service in + * the drive: these are the exact conditions for + * computing the base value of the total service time + * for bfqq. So let's update this value, because it is + * rather variable. For example, it varies if the size + * or the spatial locality of the I/O requests in bfqq + * change.
+ */ + bfqq->last_serv_time_ns = tot_time_ns; + /* update complete, not waiting for any request completion any longer */ bfqd->waited_rq = NULL;
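The resulting update rule of patch 3/7 can be sketched as follows, again as an illustrative userspace model with made-up names (sampler, account_sample), not the kernel function. It reproduces just the two branches: take the first or a lower sample as the base value, and, with this patch, also overwrite the base value whenever a sample was collected with no injected I/O and exactly one request in the drive, since such a sample is by definition a clean baseline.

#include <stdbool.h>
#include <stdio.h>

struct sampler {
    unsigned long long base_ns; /* 0 = not computed yet */
};

static void account_sample(struct sampler *s, unsigned long long tot_ns,
                           bool injected, unsigned int in_driver)
{
    if ((s->base_ns == 0 && in_driver == 1) || tot_ns < s->base_ns)
        s->base_ns = tot_ns; /* first sample, or a lower one */
    else if (!injected && in_driver == 1)
        s->base_ns = tot_ns; /* clean conditions: refresh the base */
}

int main(void)
{
    struct sampler s = { 0 };

    account_sample(&s, 500, false, 1); /* first sample: base = 500 */
    account_sample(&s, 800, false, 1); /* clean: refreshed to 800 */
    account_sample(&s, 900, true, 2);  /* injected sample: ignored */
    printf("base service time: %llu ns\n", s.base_ns); /* prints 800 */
    return 0;
}

Without the else-if branch, the base value could only decrease, and would go stale as soon as the queue's request pattern grew slower.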
From patchwork Tue Jun 25 05:12:46 2019
X-Patchwork-Submitter: Paolo Valente
X-Patchwork-Id: 167673
From: Paolo Valente
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, ulf.hansson@linaro.org, linus.walleij@linaro.org, bfq-iosched@googlegroups.com, oleksandr@natalenko.name, bottura.nicola95@gmail.com, srivatsa@csail.mit.edu, Paolo Valente
Subject: [PATCH BUGFIX IMPROVEMENT V2 4/7] block, bfq: bring forward seek&think time update
Date: Tue, 25 Jun 2019 07:12:46 +0200
Message-Id: <20190625051249.39265-5-paolo.valente@linaro.org>
In-Reply-To: <20190625051249.39265-1-paolo.valente@linaro.org>
References: <20190625051249.39265-1-paolo.valente@linaro.org>

Until the base value for request service times gets finally computed for a bfq_queue, the inject limit for that queue does depend on the think-time state (short|long) of the queue. A timely update of the think time then guarantees a quicker activation or deactivation of the injection. Fortunately, the think time of a bfq_queue is updated in the same code path as the inject limit, but after the latter. This commit moves the update of the think time before the update of the inject limit. For coherence, it moves the update of the seek time too.

Reported-by: Srivatsa S. Bhat (VMware)
Tested-by: Srivatsa S. Bhat (VMware)
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
-- 2.20.1

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 62442083b147..d5bc32371ace 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -4979,19 +4979,9 @@ static void bfq_update_has_short_ttime(struct bfq_data *bfqd, static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq, struct request *rq) { - struct bfq_io_cq *bic = RQ_BIC(rq); - if (rq->cmd_flags & REQ_META) bfqq->meta_pending++; - bfq_update_io_thinktime(bfqd, bfqq); - bfq_update_has_short_ttime(bfqd, bfqq, bic); - bfq_update_io_seektime(bfqd, bfqq, rq); - - bfq_log_bfqq(bfqd, bfqq, - "rq_enqueued: has_short_ttime=%d (seeky %d)", - bfq_bfqq_has_short_ttime(bfqq), BFQQ_SEEKY(bfqq)); - bfqq->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq); if (bfqq == bfqd->in_service_queue && bfq_bfqq_wait_request(bfqq)) { @@ -5079,6 +5069,10 @@ static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq) bfqq = new_bfqq; } + bfq_update_io_thinktime(bfqd, bfqq); + bfq_update_has_short_ttime(bfqd, bfqq, RQ_BIC(rq)); + bfq_update_io_seektime(bfqd, bfqq, rq); + waiting = bfqq && bfq_bfqq_wait_request(bfqq); bfq_add_request(rq); idle_timer_disabled = waiting && !bfq_bfqq_wait_request(bfqq);
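Patch 4/7 is entirely about call ordering, which a minimal model makes visible. The sketch below collapses the think-time and inject-limit machinery into two hypothetical functions; it only demonstrates that deriving the limit from a stale think-time state gives a different answer than refreshing the state first, which is what the commit achieves by moving the updates into __bfq_insert_request() ahead of bfq_add_request().

#include <stdbool.h>
#include <stdio.h>

struct q {
    bool short_ttime;
    unsigned int inject_limit;
};

static void update_think_time(struct q *q, bool short_now)
{
    q->short_ttime = short_now;
}

static void reset_limit_from_state(struct q *q)
{
    q->inject_limit = q->short_ttime ? 0 : 1;
}

int main(void)
{
    struct q q = { .short_ttime = true };

    /* old order: limit derived from the stale (short) state */
    reset_limit_from_state(&q);
    update_think_time(&q, false);
    printf("stale order -> limit %u\n", q.inject_limit); /* 0 */

    /* new order: refresh the think time, then derive the limit */
    q = (struct q){ .short_ttime = true };
    update_think_time(&q, false);
    reset_limit_from_state(&q);
    printf("fresh order -> limit %u\n", q.inject_limit); /* 1 */
    return 0;
}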
From patchwork Tue Jun 25 05:12:47 2019
X-Patchwork-Submitter: Paolo Valente
X-Patchwork-Id: 167675
From: Paolo Valente
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, ulf.hansson@linaro.org, linus.walleij@linaro.org, bfq-iosched@googlegroups.com, oleksandr@natalenko.name, bottura.nicola95@gmail.com, srivatsa@csail.mit.edu, Paolo Valente
Subject: [PATCH BUGFIX IMPROVEMENT V2 5/7] block, bfq: detect wakers and unconditionally inject their I/O
Date: Tue, 25 Jun 2019 07:12:47 +0200
Message-Id: <20190625051249.39265-6-paolo.valente@linaro.org>
In-Reply-To: <20190625051249.39265-1-paolo.valente@linaro.org>
References: <20190625051249.39265-1-paolo.valente@linaro.org>

A bfq_queue Q may happen to be synchronized with another bfq_queue Q2, i.e., the I/O of Q2 may need to be completed for Q to receive new I/O. We call Q2 the "waker queue". If I/O plugging is being performed for Q, and Q is not receiving any more I/O because of the above synchronization, then, thanks to BFQ's injection mechanism, the waker queue is likely to get served before the I/O-plugging timeout fires. Unfortunately, this fact may not be sufficient to guarantee a high throughput during the I/O plugging, because the inject limit for Q may be too low to guarantee a lot of injected I/O. In addition, the duration of the plugging, i.e., the time before Q finally receives new I/O, may not be minimized, because the waker queue may happen to be served only after other queues.

To address these issues, this commit introduces the explicit detection of the waker queue, and the unconditional injection of a pending I/O request of the waker queue on each invocation of bfq_dispatch_request(). One may be concerned that this systematic injection of I/O from the waker queue delays the service of Q's I/O. Fortunately, it doesn't. On the contrary, Q's next I/O is brought forward dramatically, for it is not blocked for milliseconds.

Reported-by: Srivatsa S. Bhat (VMware)
Tested-by: Srivatsa S. Bhat (VMware)
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 270 ++++++++++++++++++++++++++++++++++++++------
block/bfq-iosched.h | 25 +++-
2 files changed, 261 insertions(+), 34 deletions(-)
-- 2.20.1

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index d5bc32371ace..9e2fbb7d1fb6 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -157,6 +157,7 @@ BFQ_BFQQ_FNS(in_large_burst); BFQ_BFQQ_FNS(coop); BFQ_BFQQ_FNS(split_coop); BFQ_BFQQ_FNS(softrt_update); +BFQ_BFQQ_FNS(has_waker); #undef BFQ_BFQQ_FNS \ /* Expiration time of sync (0) and async (1) requests, in ns.
*/ @@ -1814,6 +1815,111 @@ static void bfq_add_request(struct request *rq) bfqd->queued++; if (RB_EMPTY_ROOT(&bfqq->sort_list) && bfq_bfqq_sync(bfqq)) { + /* + * Detect whether bfqq's I/O seems synchronized with + * that of some other queue, i.e., whether bfqq, after + * remaining empty, happens to receive new I/O only + * right after some I/O request of the other queue has + * been completed. We call waker queue the other + * queue, and we assume, for simplicity, that bfqq may + * have at most one waker queue. + * + * A remarkable throughput boost can be reached by + * unconditionally injecting the I/O of the waker + * queue, every time a new bfq_dispatch_request + * happens to be invoked while I/O is being plugged + * for bfqq. In addition to boosting throughput, this + * unblocks bfqq's I/O, thereby improving bandwidth + * and latency for bfqq. Note that these same results + * may be achieved with the general injection + * mechanism, but less effectively. For details on + * this aspect, see the comments on the choice of the + * queue for injection in bfq_select_queue(). + * + * Turning back to the detection of a waker queue, a + * queue Q is deemed as a waker queue for bfqq if, for + * two consecutive times, bfqq happens to become non + * empty right after a request of Q has been + * completed. In particular, on the first time, Q is + * tentatively set as a candidate waker queue, while + * on the second time, the flag + * bfq_bfqq_has_waker(bfqq) is set to confirm that Q + * is a waker queue for bfqq. These detection steps + * are performed only if bfqq has a long think time, + * so as to make it more likely that bfqq's I/O is + * actually being blocked by a synchronization. This + * last filter, plus the above two-times requirement, + * make false positives less likely. + * + * NOTE + * + * The sooner a waker queue is detected, the sooner + * throughput can be boosted by injecting I/O from the + * waker queue. Fortunately, detection is likely to be + * actually fast, for the following reasons. While + * blocked by synchronization, bfqq has a long think + * time. This implies that bfqq's inject limit is at + * least equal to 1 (see the comments in + * bfq_update_inject_limit()). So, thanks to + * injection, the waker queue is likely to be served + * during the very first I/O-plugging time interval + * for bfqq. This triggers the first step of the + * detection mechanism. Thanks again to injection, the + * candidate waker queue is then likely to be + * confirmed no later than during the next + * I/O-plugging interval for bfqq. + */ + if (!bfq_bfqq_has_short_ttime(bfqq) && + ktime_get_ns() - bfqd->last_completion < + 200 * NSEC_PER_USEC) { + if (bfqd->last_completed_rq_bfqq != bfqq && + bfqd->last_completed_rq_bfqq != + bfqq->waker_bfqq) { + /* + * First synchronization detected with + * a candidate waker queue, or with a + * different candidate waker queue + * from the current one. + */ + bfqq->waker_bfqq = bfqd->last_completed_rq_bfqq; + + /* + * If the waker queue disappears, then + * bfqq->waker_bfqq must be reset. To + * this goal, we maintain in each + * waker queue a list, woken_list, of + * all the queues that reference the + * waker queue through their + * waker_bfqq pointer. When the waker + * queue exits, the waker_bfqq pointer + * of all the queues in the woken_list + * is reset. 
+ * + * In addition, if bfqq is already in + * the woken_list of a waker queue, + * then, before being inserted into + * the woken_list of a new waker + * queue, bfqq must be removed from + * the woken_list of the old waker + * queue. + */ + if (!hlist_unhashed(&bfqq->woken_list_node)) + hlist_del_init(&bfqq->woken_list_node); + hlist_add_head(&bfqq->woken_list_node, + &bfqd->last_completed_rq_bfqq->woken_list); + + bfq_clear_bfqq_has_waker(bfqq); + } else if (bfqd->last_completed_rq_bfqq == + bfqq->waker_bfqq && + !bfq_bfqq_has_waker(bfqq)) { + /* + * synchronization with waker_bfqq + * seen for the second time + */ + bfq_mark_bfqq_has_waker(bfqq); + } + } + /* * Periodically reset inject limit, to make sure that * the latter eventually drops in case workload @@ -4164,18 +4270,89 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd) bfqq->bic->bfqq[0] : NULL; /* - * If the process associated with bfqq has also async - * I/O pending, then inject it - * unconditionally. Injecting I/O from the same - * process can cause no harm to the process. On the - * contrary, it can only increase bandwidth and reduce - * latency for the process. + * The next three mutually-exclusive ifs decide + * whether to try injection, and choose the queue to + * pick an I/O request from. + * + * The first if checks whether the process associated + * with bfqq has also async I/O pending. If so, it + * injects such I/O unconditionally. Injecting async + * I/O from the same process can cause no harm to the + * process. On the contrary, it can only increase + * bandwidth and reduce latency for the process. + * + * The second if checks whether there happens to be a + * non-empty waker queue for bfqq, i.e., a queue whose + * I/O needs to be completed for bfqq to receive new + * I/O. This happens, e.g., if bfqq is associated with + * a process that does some sync. A sync generates + * extra blocking I/O, which must be completed before + * the process associated with bfqq can go on with its + * I/O. If the I/O of the waker queue is not served, + * then bfqq remains empty, and no I/O is dispatched, + * until the idle timeout fires for bfqq. This is + * likely to result in lower bandwidth and higher + * latencies for bfqq, and in a severe loss of total + * throughput. The best action to take is therefore to + * serve the waker queue as soon as possible. So do it + * (without relying on the third alternative below for + * eventually serving waker_bfqq's I/O; see the last + * paragraph for further details). This systematic + * injection of I/O from the waker queue does not + * cause any delay to bfqq's I/O. On the contrary, + * next bfqq's I/O is brought forward dramatically, + * for it is not blocked for milliseconds. + * + * The third if checks whether bfqq is a queue for + * which it is better to avoid injection. It is so if + * bfqq delivers more throughput when served without + * any further I/O from other queues in the middle, or + * if the service times of bfqq's I/O requests both + * count more than overall throughput, and may be + * easily increased by injection (this happens if bfqq + * has a short think time). If none of these + * conditions holds, then a candidate queue for + * injection is looked for through + * bfq_choose_bfqq_for_injection(). Note that the + * latter may return NULL (for example if the inject + * limit for bfqq is currently 0). 
+ * + * NOTE: motivation for the second alternative + * + * Thanks to the way the inject limit is updated in + * bfq_update_has_short_ttime(), it is rather likely + * that, if I/O is being plugged for bfqq and the + * waker queue has pending I/O requests that are + * blocking bfqq's I/O, then the third alternative + * above lets the waker queue get served before the + * I/O-plugging timeout fires. So one may deem the + * second alternative superfluous. It is not, because + * the third alternative may be way less effective in + * case of a synchronization. For two main + * reasons. First, throughput may be low because the + * inject limit may be too low to guarantee the same + * amount of injected I/O, from the waker queue or + * other queues, that the second alternative + * guarantees (the second alternative unconditionally + * injects a pending I/O request of the waker queue + * for each bfq_dispatch_request()). Second, with the + * third alternative, the duration of the plugging, + * i.e., the time before bfqq finally receives new I/O, + * may not be minimized, because the waker queue may + * happen to be served only after other queues. */ if (async_bfqq && icq_to_bic(async_bfqq->next_rq->elv.icq) == bfqq->bic && bfq_serv_to_charge(async_bfqq->next_rq, async_bfqq) <= bfq_bfqq_budget_left(async_bfqq)) bfqq = bfqq->bic->bfqq[0]; + else if (bfq_bfqq_has_waker(bfqq) && + bfq_bfqq_busy(bfqq->waker_bfqq) && + bfq_serv_to_charge(bfqq->waker_bfqq->next_rq, + bfqq->waker_bfqq) <= + bfq_bfqq_budget_left(bfqq->waker_bfqq) + ) + bfqq = bfqq->waker_bfqq; else if (!idling_boosts_thr_without_issues(bfqd, bfqq) && (bfqq->wr_coeff == 1 || bfqd->wr_busy_queues > 1 || !bfq_bfqq_has_short_ttime(bfqq))) @@ -4564,6 +4741,9 @@ static void bfq_put_cooperator(struct bfq_queue *bfqq) static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq) { + struct bfq_queue *item; + struct hlist_node *n; + if (bfqq == bfqd->in_service_queue) { __bfq_bfqq_expire(bfqd, bfqq); bfq_schedule_dispatch(bfqd); @@ -4573,6 +4753,18 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq) bfq_put_cooperator(bfqq); + /* remove bfqq from woken list */ + if (!hlist_unhashed(&bfqq->woken_list_node)) + hlist_del_init(&bfqq->woken_list_node); + + /* reset waker for all queues in woken list */ + hlist_for_each_entry_safe(item, n, &bfqq->woken_list, + woken_list_node) { + item->waker_bfqq = NULL; + bfq_clear_bfqq_has_waker(item); + hlist_del_init(&item->woken_list_node); + } + bfq_put_queue(bfqq); /* release process reference */ } @@ -4691,6 +4883,8 @@ static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq, RB_CLEAR_NODE(&bfqq->entity.rb_node); INIT_LIST_HEAD(&bfqq->fifo); INIT_HLIST_NODE(&bfqq->burst_list_node); + INIT_HLIST_NODE(&bfqq->woken_list_node); + INIT_HLIST_HEAD(&bfqq->woken_list); bfqq->ref = 0; bfqq->bfqd = bfqd; @@ -4909,28 +5103,27 @@ static void bfq_update_has_short_ttime(struct bfq_data *bfqd, * bfqq may have a long think time because of a * synchronization with some other queue, i.e., because the * I/O of some other queue may need to be completed for bfqq - * to receive new I/O. This happens, e.g., if bfqq is - * associated with a process that does some sync. A sync - * generates extra blocking I/O, which must be completed - * before the process associated with bfqq can go on with its - * I/O. + * to receive new I/O. Details in the comments on the choice + * of the queue for injection in bfq_select_queue(). 
* - * If such a synchronization is actually in place, then, - * without injection on bfqq, the blocking I/O cannot happen - * to served while bfqq is in service. As a consequence, if - * bfqq is granted I/O-dispatch-plugging, then bfqq remains - * empty, and no I/O is dispatched, until the idle timeout - * fires. This is likely to result in lower bandwidth and - * higher latencies for bfqq, and in a severe loss of total - * throughput. + * As stressed in those comments, if such a synchronization is + * actually in place, then, without injection on bfqq, the + * blocking I/O cannot happen to be served while bfqq is in + * service. As a consequence, if bfqq is granted + * I/O-dispatch-plugging, then bfqq remains empty, and no I/O + * is dispatched, until the idle timeout fires. This is likely + * to result in lower bandwidth and higher latencies for bfqq, + * and in a severe loss of total throughput. * * On the opposite end, a non-zero inject limit may allow the * I/O that blocks bfqq to be executed soon, and therefore - * bfqq to receive new I/O soon. But, if this actually - * happens, then the next think-time sample for bfqq may be - * very low. This in turn may cause bfqq's think time to be - * deemed short. Without the 100 ms barrier, this new state - * change would cause the body of the next if to be executed + * bfqq to receive new I/O soon. + * + * But, if the blocking gets actually eliminated, then the + * next think-time sample for bfqq may be very low. This in + * turn may cause bfqq's think time to be deemed + * short. Without the 100 ms barrier, this new state change + * would cause the body of the next if to be executed * immediately. But this would set to 0 the inject * limit. Without injection, the blocking I/O would cause the * think time of bfqq to become long again, and therefore the @@ -4941,11 +5134,11 @@ static void bfq_update_has_short_ttime(struct bfq_data *bfqd, * In contrast, if the inject limit is not reset during such a * long time interval as 100 ms, then the number of short * think time samples can grow significantly before the reset - * is allowed. As a consequence, the think time state can - * become stable before the reset. There will be no state - * change when the 100 ms elapse, and therefore no reset of - * the inject limit. The inject limit remains steadily equal - * to 1 both during and after the 100 ms. So injection can be + * is performed. As a consequence, the think time state can + * become stable before the reset. Therefore there will be no + * state change when the 100 ms elapse, and no reset of the + * inject limit. The inject limit remains steadily equal to 1 + * both during and after the 100 ms. So injection can be * performed at all times, and throughput gets boosted. * * An inject limit equal to 1 is however in conflict, in @@ -4960,10 +5153,20 @@ static void bfq_update_has_short_ttime(struct bfq_data *bfqd, * brought forward, because it is not blocked for * milliseconds. * - * In addition, during the 100 ms, the base value for the - * total service time is likely to get finally computed, - * freeing the inject limit from its relation with the think - * time. + * In addition, serving the blocking I/O much sooner, and much + * more frequently than once per I/O-plugging timeout, makes + * it much quicker to detect a waker queue (the concept of + * waker queue is defined in the comments in + * bfq_add_request()).
This makes it possible to start sooner + * to boost throughput more effectively, by injecting the I/O + * of the waker queue unconditionally on every + * bfq_dispatch_request(). + * + * One last, important benefit of not resetting the inject + * limit before 100 ms is that, during this time interval, the + * base value for the total service time is likely to get + * finally computed for bfqq, freeing the inject limit from + * its relation with the think time. */ if (state_changed && bfqq->last_serv_time_ns == 0 && (time_is_before_eq_jiffies(bfqq->decrease_time_jif + @@ -5278,6 +5481,7 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd) 1UL<<(BFQ_RATE_SHIFT - 10)) bfq_update_rate_reset(bfqd, NULL); bfqd->last_completion = now_ns; + bfqd->last_completed_rq_bfqq = bfqq; /* * If we are waiting to discover whether the request pattern diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h index 584d3c9ed8ba..e80adf822bbe 100644 --- a/block/bfq-iosched.h +++ b/block/bfq-iosched.h @@ -357,6 +357,24 @@ struct bfq_queue { /* max service rate measured so far */ u32 max_service_rate; + + /* + * Pointer to the waker queue for this queue, i.e., to the + * queue Q such that this queue happens to get new I/O right + * after some I/O request of Q is completed. For details, see + * the comments on the choice of the queue for injection in + * bfq_select_queue(). + */ + struct bfq_queue *waker_bfqq; + /* node for woken_list, see below */ + struct hlist_node woken_list_node; + /* + * Head of the list of the woken queues for this queue, i.e., + * of the list of the queues for which this queue is a waker + * queue. This list is used to reset the waker_bfqq pointer in + * the woken queues when this queue exits. + */ + struct hlist_head woken_list; }; /** @@ -533,6 +551,9 @@ struct bfq_data { /* time of last request completion (ns) */ u64 last_completion; + /* bfqq owning the last completed rq */ + struct bfq_queue *last_completed_rq_bfqq; + /* time of last transition from empty to non-empty (ns) */ u64 last_empty_occupied_ns; @@ -743,7 +764,8 @@ enum bfqq_state_flags { * update */ BFQQF_coop, /* bfqq is shared */ - BFQQF_split_coop /* shared bfqq will be split */ + BFQQF_split_coop, /* shared bfqq will be split */ + BFQQF_has_waker /* bfqq has a waker queue */ }; #define BFQ_BFQQ_FNS(name) \ @@ -763,6 +785,7 @@ BFQ_BFQQ_FNS(in_large_burst); BFQ_BFQQ_FNS(coop); BFQ_BFQQ_FNS(split_coop); BFQ_BFQQ_FNS(softrt_update); +BFQ_BFQQ_FNS(has_waker); #undef BFQ_BFQQ_FNS /* Expiration reasons. 
*/
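To summarize the three-way injection choice described in the comments of the patch above, here is a condensed, standalone C sketch. All types and helpers below are simplified stand-ins invented for illustration; they are not the actual bfq data structures, and the real checks in bfq_select_queue() also involve bfq_bfqq_busy() and bfq_serv_to_charge().

/* Sketch: order of the injection decision in bfq_select_queue(). */
struct sketch_queue {
	struct sketch_queue *async_bfqq;  /* async queue of the same process */
	struct sketch_queue *waker_bfqq;  /* detected waker queue, if any */
	int has_waker;                    /* waker confirmed (seen twice) */
	int next_rq_cost;                 /* service to charge for next_rq */
	int budget_left;                  /* budget left for that queue */
};

static struct sketch_queue *pick_queue_for_injection(struct sketch_queue *bfqq)
{
	struct sketch_queue *a = bfqq->async_bfqq, *w = bfqq->waker_bfqq;

	/* 1) async I/O of the same process: inject unconditionally,
	 *    as it can only help the process itself */
	if (a && a->next_rq_cost <= a->budget_left)
		return a;

	/* 2) confirmed waker queue with pending I/O: serve it at once,
	 *    so the I/O blocking bfqq completes as soon as possible */
	if (bfqq->has_waker && w && w->next_rq_cost <= w->budget_left)
		return w;

	/* 3) otherwise fall back to the generic candidate search
	 *    (bfq_choose_bfqq_for_injection() in the real code), which
	 *    may well find no candidate at all, e.g., if the inject
	 *    limit for bfqq is currently 0 */
	return NULL;
}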
From patchwork Tue Jun 25 05:12:48 2019
X-Patchwork-Submitter: Paolo Valente
X-Patchwork-Id: 167676
From: Paolo Valente
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, ulf.hansson@linaro.org, linus.walleij@linaro.org, bfq-iosched@googlegroups.com, oleksandr@natalenko.name, bottura.nicola95@gmail.com, srivatsa@csail.mit.edu, Paolo Valente
Subject: [PATCH BUGFIX IMPROVEMENT V2 6/7] block, bfq: preempt lower-weight or lower-priority queues
Date: Tue, 25 Jun 2019 07:12:48 +0200
Message-Id: <20190625051249.39265-7-paolo.valente@linaro.org>
In-Reply-To: <20190625051249.39265-1-paolo.valente@linaro.org>
References: <20190625051249.39265-1-paolo.valente@linaro.org>

BFQ enqueues the I/O coming from each process into a separate bfq_queue, and serves bfq_queues one at a time. Each bfq_queue may be served for at most timeout_sync milliseconds (default: 125 ms). This service scheme is prone to the following inaccuracy.

While a bfq_queue Q1 is in service, some empty bfq_queue Q2 may receive I/O, and, according to BFQ's scheduling policy, may become the right bfq_queue to serve, in place of the currently in-service bfq_queue. In this respect, postponing the service of Q2 until after the service of Q1 finishes may delay the completion of Q2's I/O, compared with an ideal service in which all non-empty bfq_queues are served in parallel, and every non-empty bfq_queue is served at a rate proportional to the bfq_queue's weight. This additional delay is at most equal to the time Q1 may unjustly remain in service before switching to Q2.

If Q1 and Q2 have the same weight, then this time is most likely negligible compared with the completion time to be guaranteed to Q2's I/O. In addition, one of the reasons why BFQ may want to serve Q1 for a while is that this boosts throughput; moreover, serving Q1 longer reduces BFQ's overhead. In conclusion, it is usually better not to preempt Q1 if both Q1 and Q2 have the same weight.

In contrast, as Q2's weight or priority becomes higher and higher compared with that of Q1, the above delay becomes larger and larger, compared with the I/O completion times that have to be guaranteed to Q2 according to Q2's weight. So reducing this delay may be more important than avoiding the costs of preempting Q1.

Accordingly, this commit preempts Q1 if Q2 has a higher weight or a higher priority than Q1. Preemption causes Q1 to be re-scheduled, and triggers a new choice of the next bfq_queue to serve. If Q2 really is the next bfq_queue to serve, then Q2 will be set in service immediately.

This change reduces the component of the I/O latency caused by the above delay by about 80%. For example, on an (old) PLEXTOR PX-256M5 SSD, the maximum latency reported by fio drops from 15.1 to 3.2 ms for a process doing sporadic random reads while another process is doing continuous sequential reads.
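As a side note before the patch itself: the new rule can be condensed into the following minimal C sketch, with simplified stand-in types. The patch implements it in bfq_bfqq_higher_class_or_weight() (see the diff below), which additionally compares parent-entity weights when the two queues belong to different groups.

/* Sketch: does newq deserve to preempt the in-service queue? */
struct sketch_queue {
	int ioprio_class;	/* lower value = higher priority class */
	int weight;
};

static int higher_class_or_weight(const struct sketch_queue *newq,
				  const struct sketch_queue *in_serv)
{
	if (newq->ioprio_class < in_serv->ioprio_class)
		return 1;	/* higher class wins outright */
	return newq->weight > in_serv->weight;
}

Note that two queues with equal class and equal weight never trigger a preemption here, which matches the commit's argument that preempting an equal-weight queue usually costs more than it saves.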
Signed-off-by: Nicola Bottura Signed-off-by: Paolo Valente --- block/bfq-iosched.c | 95 +++++++++++++++++++++++++++++++++++---------- 1 file changed, 75 insertions(+), 20 deletions(-) -- 2.20.1 diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 9e2fbb7d1fb6..6a3d05023300 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -1428,17 +1428,19 @@ static int bfq_min_budget(struct bfq_data *bfqd) * mechanism may be re-designed in such a way to make it possible to * know whether preemption is needed without needing to update service * trees). In addition, queue preemptions almost always cause random - * I/O, and thus loss of throughput. Because of these facts, the next - * function adopts the following simple scheme to avoid both costly - * operations and too frequent preemptions: it requests the expiration - * of the in-service queue (unconditionally) only for queues that need - * to recover a hole, or that either are weight-raised or deserve to - * be weight-raised. + * I/O, which may in turn cause loss of throughput. Finally, there may + * even be no in-service queue when the next function is invoked (so, + * no queue to compare timestamps with). Because of these facts, the + * next function adopts the following simple scheme to avoid costly + * operations, too frequent preemptions and too many dependencies on + * the state of the scheduler: it requests the expiration of the + * in-service queue (unconditionally) only for queues that need to + * recover a hole. Then it delegates to other parts of the code the + * responsibility of handling the above case 2. */ static bool bfq_bfqq_update_budg_for_activation(struct bfq_data *bfqd, struct bfq_queue *bfqq, - bool arrived_in_time, - bool wr_or_deserves_wr) + bool arrived_in_time) { struct bfq_entity *entity = &bfqq->entity; @@ -1493,7 +1495,7 @@ static bool bfq_bfqq_update_budg_for_activation(struct bfq_data *bfqd, entity->budget = max_t(unsigned long, bfqq->max_budget, bfq_serv_to_charge(bfqq->next_rq, bfqq)); bfq_clear_bfqq_non_blocking_wait_rq(bfqq); - return wr_or_deserves_wr; + return false; } /* @@ -1611,6 +1613,36 @@ static bool bfq_bfqq_idle_for_long_time(struct bfq_data *bfqd, bfqd->bfq_wr_min_idle_time); } + +/* + * Return true if bfqq is in a higher priority class, or has a higher + * weight than the in-service queue. 
+ */ +static bool bfq_bfqq_higher_class_or_weight(struct bfq_queue *bfqq, + struct bfq_queue *in_serv_bfqq) +{ + int bfqq_weight, in_serv_weight; + + if (bfqq->ioprio_class < in_serv_bfqq->ioprio_class) + return true; + + if (in_serv_bfqq->entity.parent == bfqq->entity.parent) { + bfqq_weight = bfqq->entity.weight; + in_serv_weight = in_serv_bfqq->entity.weight; + } else { + if (bfqq->entity.parent) + bfqq_weight = bfqq->entity.parent->weight; + else + bfqq_weight = bfqq->entity.weight; + if (in_serv_bfqq->entity.parent) + in_serv_weight = in_serv_bfqq->entity.parent->weight; + else + in_serv_weight = in_serv_bfqq->entity.weight; + } + + return bfqq_weight > in_serv_weight; +} + static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd, struct bfq_queue *bfqq, int old_wr_coeff, @@ -1655,8 +1687,7 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd, */ bfqq_wants_to_preempt = bfq_bfqq_update_budg_for_activation(bfqd, bfqq, - arrived_in_time, - wr_or_deserves_wr); + arrived_in_time); /* * If bfqq happened to be activated in a burst, but has been @@ -1721,16 +1752,40 @@ static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd, /* * Expire in-service queue only if preemption may be needed - * for guarantees. In this respect, the function - * next_queue_may_preempt just checks a simple, necessary - * condition, and not a sufficient condition based on - * timestamps. In fact, for the latter condition to be - * evaluated, timestamps would need first to be updated, and - * this operation is quite costly (see the comments on the - * function bfq_bfqq_update_budg_for_activation). + * for guarantees. In particular, we care only about two + * cases. The first is that bfqq has to recover a service + * hole, as explained in the comments on + * bfq_bfqq_update_budg_for_activation(), i.e., that + * bfqq_wants_to_preempt is true. However, if bfqq does not + * carry time-critical I/O, then bfqq's bandwidth is less + * important than that of queues that carry time-critical I/O. + * So, as a further constraint, we consider this case only if + * bfqq is at least as weight-raised, i.e., at least as time + * critical, as the in-service queue. + * + * The second case is that bfqq is in a higher priority class, + * or has a higher weight than the in-service queue. If this + * condition does not hold, we don't care because, even if + * bfqq does not start to be served immediately, the resulting + * delay for bfqq's I/O is however lower or much lower than + * the ideal completion time to be guaranteed to bfqq's I/O. + * + * In both cases, preemption is needed only if, according to + * the timestamps of both bfqq and of the in-service queue, + * bfqq actually is the next queue to serve. So, to reduce + * useless preemptions, the return value of + * next_queue_may_preempt() is considered in the next compound + * condition too. Yet next_queue_may_preempt() just checks a + * simple, necessary condition for bfqq to be the next queue + * to serve. In fact, to evaluate a sufficient condition, the + * timestamps of the in-service queue would need to be + * updated, and this operation is quite costly (see the + * comments on bfq_bfqq_update_budg_for_activation()). 
*/ - if (bfqd->in_service_queue && bfqq_wants_to_preempt && - bfqd->in_service_queue->wr_coeff < bfqq->wr_coeff && + if (bfqd->in_service_queue && + ((bfqq_wants_to_preempt && + bfqq->wr_coeff >= bfqd->in_service_queue->wr_coeff) || + bfq_bfqq_higher_class_or_weight(bfqq, bfqd->in_service_queue)) && next_queue_may_preempt(bfqd)) bfq_bfqq_expire(bfqd, bfqd->in_service_queue, false, BFQQE_PREEMPTED);
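Condensing the hunk above into standalone form, the new expiration trigger can be read as the sketch below. The names are illustrative stand-ins (it repeats the higher_class_or_weight() sketch shown earlier so as to be self-contained); case 1 is the service-hole recovery path, filtered so that it does not penalize more time-critical (more weight-raised) in-service queues, and case 2 is the new higher-class-or-weight path.

#include <stdbool.h>

struct sketch_queue {
	int wr_coeff;		/* weight-raising coefficient */
	int ioprio_class;	/* lower value = higher priority class */
	int weight;
};

static bool higher_class_or_weight(const struct sketch_queue *q,
				   const struct sketch_queue *in_serv)
{
	if (q->ioprio_class < in_serv->ioprio_class)
		return true;
	return q->weight > in_serv->weight;
}

/* Sketch of the compound condition at the end of
 * bfq_bfqq_handle_idle_busy_switch(); next_queue_may_preempt()
 * remains a cheap, merely necessary condition in the real code. */
static bool should_expire_in_service(const struct sketch_queue *bfqq,
				     const struct sketch_queue *in_serv,
				     bool wants_to_preempt,
				     bool next_queue_may_preempt)
{
	bool case1 = wants_to_preempt && bfqq->wr_coeff >= in_serv->wr_coeff;
	bool case2 = higher_class_or_weight(bfqq, in_serv);

	return (case1 || case2) && next_queue_may_preempt;
}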
From patchwork Tue Jun 25 05:12:49 2019
X-Patchwork-Submitter: Paolo Valente
X-Patchwork-Id: 167674
From: Paolo Valente
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, ulf.hansson@linaro.org, linus.walleij@linaro.org, bfq-iosched@googlegroups.com, oleksandr@natalenko.name, bottura.nicola95@gmail.com, srivatsa@csail.mit.edu, Paolo Valente
Subject: [PATCH BUGFIX IMPROVEMENT V2 7/7] block, bfq: re-schedule empty queues if they deserve I/O plugging
Date: Tue, 25 Jun 2019 07:12:49 +0200
Message-Id: <20190625051249.39265-8-paolo.valente@linaro.org>
In-Reply-To: <20190625051249.39265-1-paolo.valente@linaro.org>
References: <20190625051249.39265-1-paolo.valente@linaro.org>

Consider, on one side, a bfq_queue Q that remains empty while in service, and, on the other side, the pending I/O of bfq_queues that, according to their timestamps, have to be served after Q. If an uncontrolled amount of I/O from the latter bfq_queues were dispatched while Q is waiting for its new I/O to arrive, then Q's bandwidth guarantees would be violated. To prevent this, I/O dispatch is plugged until Q receives new I/O (except for a properly controlled amount of injected I/O).

Unfortunately, preemption breaks I/O-dispatch plugging, for the following reason. Preemption is performed in two steps. First, Q is expired and re-scheduled. Second, the new bfq_queue to serve is chosen. The first step is needed by the second, as the second can be performed only after Q's timestamps have been properly updated (done in the expiration step), and Q has been re-queued for service. This dependency is a consequence of the way BFQ's scheduling algorithm is currently implemented.

But Q is not re-scheduled at all in the first step, because Q is empty. As a consequence, an uncontrolled amount of I/O may be dispatched until Q becomes non-empty again. This breaks Q's service guarantees.

This commit addresses this issue by re-scheduling Q even if it is empty. This in turn breaks the assumption that all scheduled queues are non-empty. A few extra checks are therefore needed.

Signed-off-by: Paolo Valente --- block/bfq-iosched.c | 387 +++++++++++++++++++++++--------------------- 1 file changed, 203 insertions(+), 184 deletions(-) -- 2.20.1 diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 6a3d05023300..72840ebf953e 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -3210,7 +3210,186 @@ static void bfq_dispatch_remove(struct request_queue *q, struct request *rq) bfq_remove_request(q, rq); } -static bool __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq) +/* + * There is a case where idling does not have to be performed for + * throughput concerns, but to preserve the throughput share of + * the process associated with bfqq. + * + * To introduce this case, we can note that allowing the drive + * to enqueue more than one request at a time, and hence + * delegating de facto final scheduling decisions to the + * drive's internal scheduler, entails loss of control on the + * actual request service order. In particular, the critical + * situation is when requests from different processes happen + * to be present, at the same time, in the internal queue(s) + * of the drive.
In such a situation, the drive, by deciding + * the service order of the internally-queued requests, does + * determine also the actual throughput distribution among + * these processes. But the drive typically has no notion or + * concern about per-process throughput distribution, and + * makes its decisions only on a per-request basis. Therefore, + * the service distribution enforced by the drive's internal + * scheduler is likely to coincide with the desired throughput + * distribution only in a completely symmetric, or favorably + * skewed scenario where: + * (i-a) each of these processes must get the same throughput as + * the others, + * (i-b) in case (i-a) does not hold, it holds that the process + * associated with bfqq must receive a throughput lower than + * or equal to that of any of the other processes; + * (ii) the I/O of each process has the same properties, in + * terms of locality (sequential or random), direction + * (reads or writes), request sizes, greediness + * (from I/O-bound to sporadic), and so on; + * + * In fact, in such a scenario, the drive tends to treat the requests + * of each process in about the same way as the requests of the + * others, and thus to provide each of these processes with about the + * same throughput. This is exactly the desired throughput + * distribution if (i-a) holds, or, if (i-b) holds instead, this is an + * even more convenient distribution for (the process associated with) + * bfqq. + * + * In contrast, in any asymmetric or unfavorable scenario, device + * idling (I/O-dispatch plugging) is certainly needed to guarantee + * that bfqq receives its assigned fraction of the device throughput + * (see [1] for details). + * + * The problem is that idling may significantly reduce throughput with + * certain combinations of types of I/O and devices. An important + * example is sync random I/O on flash storage with command + * queueing. So, unless bfqq falls in cases where idling also boosts + * throughput, it is important to check conditions (i-a), (i-b) and + * (ii) accurately, so as to avoid idling when not strictly needed for + * service guarantees. + * + * Unfortunately, it is extremely difficult to thoroughly check + * condition (ii). And, in case there are active groups, it becomes + * very difficult to check conditions (i-a) and (i-b) too. In fact, + * if there are active groups, then, for conditions (i-a) or (i-b) to + * become false 'indirectly', it is enough that an active group + * contains more active processes or sub-groups than some other active + * group. More precisely, for conditions (i-a) or (i-b) to become + * false because of such a group, it is not even necessary that the + * group is (still) active: it is sufficient that, even if the group + * has become inactive, some of its descendant processes still have + * some request already dispatched but still waiting for + * completion. In fact, requests have still to be guaranteed their + * share of the throughput even after being dispatched. In this + * respect, it is easy to show that, if a group frequently becomes + * inactive while still having in-flight requests, and if, when this + * happens, the group is not considered in the calculation of whether + * the scenario is asymmetric, then the group may fail to be + * guaranteed its fair share of the throughput (basically because + * idling may not be performed for the descendant processes of the + * group, but it had to be). We address this issue with the following + * bi-modal behavior, implemented in the function + * bfq_asymmetric_scenario(). + * + * If there are groups with requests waiting for completion + * (as commented above, some of these groups may even be + * already inactive), then the scenario is tagged as + * asymmetric, conservatively, without checking any of the + * conditions (i-a), (i-b) or (ii). So the device is idled for bfqq. + * This behavior matches also the fact that groups are created + * exactly if controlling I/O is a primary concern (to + * preserve bandwidth and latency guarantees). + * + * On the opposite end, if there are no groups with requests waiting + * for completion, then only conditions (i-a) and (i-b) are actually + * controlled, i.e., provided that condition (i-a) or (i-b) holds, + * idling is not performed, regardless of whether condition (ii) + * holds. In other words, only if conditions (i-a) and (i-b) do not + * hold, then idling is allowed, and the device tends to be prevented + * from queueing many requests, possibly of several processes. Since + * there are no groups with requests waiting for completion, then, to + * control conditions (i-a) and (i-b) it is enough to check just + * whether all the queues with requests waiting for completion also + * have the same weight. + * + * Not checking condition (ii) evidently exposes bfqq to the + * risk of getting less throughput than its fair share. + * However, for queues with the same weight, a further + * mechanism, preemption, mitigates or even eliminates this + * problem. And it does so without consequences on overall + * throughput. This mechanism and its benefits are explained + * in the next three paragraphs. + * + * Even if a queue, say Q, is expired when it remains idle, Q + * can still preempt the new in-service queue if the next + * request of Q arrives soon (see the comments on + * bfq_bfqq_update_budg_for_activation). If all queues and + * groups have the same weight, this form of preemption, + * combined with the hole-recovery heuristic described in the + * comments on function bfq_bfqq_update_budg_for_activation, + * is enough to preserve a correct bandwidth distribution in + * the mid term, even without idling. In fact, even if not + * idling allows the internal queues of the device to contain + * many requests, and thus to reorder requests, we can rather + * safely assume that the internal scheduler still preserves a + * minimum of mid-term fairness. + * + * More precisely, this preemption-based, idleless approach + * provides fairness in terms of IOPS, and not sectors per + * second. This can be seen with a simple example. Suppose + * that there are two queues with the same weight, but that + * the first queue receives requests of 8 sectors, while the + * second queue receives requests of 1024 sectors. In + * addition, suppose that each of the two queues contains at + * most one request at a time, which implies that each queue + * always remains idle after it is served. Finally, after + * remaining idle, each queue receives very quickly a new + * request. It follows that the two queues are served + * alternately, preempting each other if needed. This + * implies that, although both queues have the same weight, + * the queue with large requests receives a service that is + * 1024/8 times as high as the service received by the other + * queue. + * + * The motivation for using preemption instead of idling (for + * queues with the same weight) is that, by not idling, + * service guarantees are preserved (completely or at least in + * part) without sacrificing any throughput. And, if + * there is no active group, then the primary expectation for + * this device is probably a high throughput. + * + * We are now left only with explaining the additional + * compound condition that is checked below for deciding + * whether the scenario is asymmetric. To explain this + * compound condition, we need to add that the function + * bfq_asymmetric_scenario checks the weights of only + * non-weight-raised queues, for efficiency reasons (see + * comments on bfq_weights_tree_add()). Then the fact that + * bfqq is weight-raised is checked explicitly here. More + * precisely, the compound condition below takes into account + * also the fact that, even if bfqq is being weight-raised, + * the scenario is still symmetric if all queues with requests + * waiting for completion happen to be + * weight-raised. Actually, we should be even more precise + * here, and differentiate between interactive weight raising + * and soft real-time weight raising. + * + * As a side note, it is worth considering that the above + * device-idling countermeasures may however fail in the + * following unlucky scenario: if idling is (correctly) + * disabled in a time period during which all symmetry + * sub-conditions hold, and hence the device is allowed to + * enqueue many requests, but at some later point in time some + * sub-condition ceases to hold, then it may become impossible + * to let requests be served in the desired order until all + * the requests already queued in the device have been served. + */ +static bool idling_needed_for_service_guarantees(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + return (bfqq->wr_coeff > 1 && + bfqd->wr_busy_queues < + bfq_tot_busy_queues(bfqd)) || + bfq_asymmetric_scenario(bfqd, bfqq); +} + +static bool __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq, + enum bfqq_expiration reason) { /* * If this bfqq is shared between multiple processes, check @@ -3221,7 +3400,22 @@ static bool __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq) if (bfq_bfqq_coop(bfqq) && BFQQ_SEEKY(bfqq)) bfq_mark_bfqq_split_coop(bfqq); - if (RB_EMPTY_ROOT(&bfqq->sort_list)) { + /* + * Consider queues with a higher finish virtual time than + * bfqq. If idling_needed_for_service_guarantees(bfqq) returns + * true, then bfqq's bandwidth would be violated if an + * uncontrolled amount of I/O from these queues were + * dispatched while bfqq is waiting for its new I/O to + * arrive. This is exactly what may happen if this is a forced + * expiration caused by a preemption attempt, and if bfqq is + * not re-scheduled. To prevent this from happening, re-queue + * bfqq if it needs I/O-dispatch plugging, even if it is + * empty. By doing so, bfqq is guaranteed to be served before the + * above queues (provided that bfqq is of course eligible). + */ + if (RB_EMPTY_ROOT(&bfqq->sort_list) && + !(reason == BFQQE_PREEMPTED && + idling_needed_for_service_guarantees(bfqd, bfqq))) { if (bfqq->dispatched == 0) /* * Overloading budget_timeout field to store @@ -3238,7 +3432,8 @@ static bool __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq) * Resort priority tree of potential close cooperators. * See comments on bfq_pos_tree_add_move() for the unlikely().
*/ - if (unlikely(!bfqd->nonrot_with_queueing)) + if (unlikely(!bfqd->nonrot_with_queueing && + !RB_EMPTY_ROOT(&bfqq->sort_list))) bfq_pos_tree_add_move(bfqd, bfqq); } @@ -3739,7 +3934,7 @@ void bfq_bfqq_expire(struct bfq_data *bfqd, * reason. */ __bfq_bfqq_recalc_budget(bfqd, bfqq, reason); - if (__bfq_bfqq_expire(bfqd, bfqq)) + if (__bfq_bfqq_expire(bfqd, bfqq, reason)) /* bfqq is gone, no more actions on it */ return; @@ -3885,184 +4080,6 @@ static bool idling_boosts_thr_without_issues(struct bfq_data *bfqd, bfqd->wr_busy_queues == 0; } -/* - * There is a case where idling does not have to be performed for - * throughput concerns, but to preserve the throughput share of - * the process associated with bfqq. - * - * To introduce this case, we can note that allowing the drive - * to enqueue more than one request at a time, and hence - * delegating de facto final scheduling decisions to the - * drive's internal scheduler, entails loss of control on the - * actual request service order. In particular, the critical - * situation is when requests from different processes happen - * to be present, at the same time, in the internal queue(s) - * of the drive. In such a situation, the drive, by deciding - * the service order of the internally-queued requests, does - * determine also the actual throughput distribution among - * these processes. But the drive typically has no notion or - * concern about per-process throughput distribution, and - * makes its decisions only on a per-request basis. Therefore, - * the service distribution enforced by the drive's internal - * scheduler is likely to coincide with the desired throughput - * distribution only in a completely symmetric, or favorably - * skewed scenario where: - * (i-a) each of these processes must get the same throughput as - * the others, - * (i-b) in case (i-a) does not hold, it holds that the process - * associated with bfqq must receive a lower or equal - * throughput than any of the other processes; - * (ii) the I/O of each process has the same properties, in - * terms of locality (sequential or random), direction - * (reads or writes), request sizes, greediness - * (from I/O-bound to sporadic), and so on; - - * In fact, in such a scenario, the drive tends to treat the requests - * of each process in about the same way as the requests of the - * others, and thus to provide each of these processes with about the - * same throughput. This is exactly the desired throughput - * distribution if (i-a) holds, or, if (i-b) holds instead, this is an - * even more convenient distribution for (the process associated with) - * bfqq. - * - * In contrast, in any asymmetric or unfavorable scenario, device - * idling (I/O-dispatch plugging) is certainly needed to guarantee - * that bfqq receives its assigned fraction of the device throughput - * (see [1] for details). - * - * The problem is that idling may significantly reduce throughput with - * certain combinations of types of I/O and devices. An important - * example is sync random I/O on flash storage with command - * queueing. So, unless bfqq falls in cases where idling also boosts - * throughput, it is important to check conditions (i-a), i(-b) and - * (ii) accurately, so as to avoid idling when not strictly needed for - * service guarantees. - * - * Unfortunately, it is extremely difficult to thoroughly check - * condition (ii). And, in case there are active groups, it becomes - * very difficult to check conditions (i-a) and (i-b) too. 
In fact, - * if there are active groups, then, for conditions (i-a) or (i-b) to - * become false 'indirectly', it is enough that an active group - * contains more active processes or sub-groups than some other active - * group. More precisely, for conditions (i-a) or (i-b) to become - * false because of such a group, it is not even necessary that the - * group is (still) active: it is sufficient that, even if the group - * has become inactive, some of its descendant processes still have - * some request already dispatched but still waiting for - * completion. In fact, requests have still to be guaranteed their - * share of the throughput even after being dispatched. In this - * respect, it is easy to show that, if a group frequently becomes - * inactive while still having in-flight requests, and if, when this - * happens, the group is not considered in the calculation of whether - * the scenario is asymmetric, then the group may fail to be - * guaranteed its fair share of the throughput (basically because - * idling may not be performed for the descendant processes of the - * group, but it had to be). We address this issue with the following - * bi-modal behavior, implemented in the function - * bfq_asymmetric_scenario(). - * - * If there are groups with requests waiting for completion - * (as commented above, some of these groups may even be - * already inactive), then the scenario is tagged as - * asymmetric, conservatively, without checking any of the - * conditions (i-a), (i-b) or (ii). So the device is idled for bfqq. - * This behavior matches also the fact that groups are created - * exactly if controlling I/O is a primary concern (to - * preserve bandwidth and latency guarantees). - * - * On the opposite end, if there are no groups with requests waiting - * for completion, then only conditions (i-a) and (i-b) are actually - * controlled, i.e., provided that conditions (i-a) or (i-b) holds, - * idling is not performed, regardless of whether condition (ii) - * holds. In other words, only if conditions (i-a) and (i-b) do not - * hold, then idling is allowed, and the device tends to be prevented - * from queueing many requests, possibly of several processes. Since - * there are no groups with requests waiting for completion, then, to - * control conditions (i-a) and (i-b) it is enough to check just - * whether all the queues with requests waiting for completion also - * have the same weight. - * - * Not checking condition (ii) evidently exposes bfqq to the - * risk of getting less throughput than its fair share. - * However, for queues with the same weight, a further - * mechanism, preemption, mitigates or even eliminates this - * problem. And it does so without consequences on overall - * throughput. This mechanism and its benefits are explained - * in the next three paragraphs. - * - * Even if a queue, say Q, is expired when it remains idle, Q - * can still preempt the new in-service queue if the next - * request of Q arrives soon (see the comments on - * bfq_bfqq_update_budg_for_activation). If all queues and - * groups have the same weight, this form of preemption, - * combined with the hole-recovery heuristic described in the - * comments on function bfq_bfqq_update_budg_for_activation, - * are enough to preserve a correct bandwidth distribution in - * the mid term, even without idling. 
In fact, even if not - * idling allows the internal queues of the device to contain - * many requests, and thus to reorder requests, we can rather - * safely assume that the internal scheduler still preserves a - * minimum of mid-term fairness. - * - * More precisely, this preemption-based, idleless approach - * provides fairness in terms of IOPS, and not sectors per - * second. This can be seen with a simple example. Suppose - * that there are two queues with the same weight, but that - * the first queue receives requests of 8 sectors, while the - * second queue receives requests of 1024 sectors. In - * addition, suppose that each of the two queues contains at - * most one request at a time, which implies that each queue - * always remains idle after it is served. Finally, after - * remaining idle, each queue receives very quickly a new - * request. It follows that the two queues are served - * alternatively, preempting each other if needed. This - * implies that, although both queues have the same weight, - * the queue with large requests receives a service that is - * 1024/8 times as high as the service received by the other - * queue. - * - * The motivation for using preemption instead of idling (for - * queues with the same weight) is that, by not idling, - * service guarantees are preserved (completely or at least in - * part) without minimally sacrificing throughput. And, if - * there is no active group, then the primary expectation for - * this device is probably a high throughput. - * - * We are now left only with explaining the additional - * compound condition that is checked below for deciding - * whether the scenario is asymmetric. To explain this - * compound condition, we need to add that the function - * bfq_asymmetric_scenario checks the weights of only - * non-weight-raised queues, for efficiency reasons (see - * comments on bfq_weights_tree_add()). Then the fact that - * bfqq is weight-raised is checked explicitly here. More - * precisely, the compound condition below takes into account - * also the fact that, even if bfqq is being weight-raised, - * the scenario is still symmetric if all queues with requests - * waiting for completion happen to be - * weight-raised. Actually, we should be even more precise - * here, and differentiate between interactive weight raising - * and soft real-time weight raising. - * - * As a side note, it is worth considering that the above - * device-idling countermeasures may however fail in the - * following unlucky scenario: if idling is (correctly) - * disabled in a time period during which all symmetry - * sub-conditions hold, and hence the device is allowed to - * enqueue many requests, but at some later point in time some - * sub-condition stops to hold, then it may become impossible - * to let requests be served in the desired order until all - * the requests already queued in the device have been served. - */ -static bool idling_needed_for_service_guarantees(struct bfq_data *bfqd, - struct bfq_queue *bfqq) -{ - return (bfqq->wr_coeff > 1 && - bfqd->wr_busy_queues < - bfq_tot_busy_queues(bfqd)) || - bfq_asymmetric_scenario(bfqd, bfqq); -} - /* * For a queue that becomes empty, device idling is allowed only if * this function returns true for that queue. As a consequence, since @@ -4321,7 +4338,8 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd) (bfqq->dispatched != 0 && bfq_better_to_idle(bfqq))) { struct bfq_queue *async_bfqq = bfqq->bic && bfqq->bic->bfqq[0] && - bfq_bfqq_busy(bfqq->bic->bfqq[0]) ? 
+ bfq_bfqq_busy(bfqq->bic->bfqq[0]) && + bfqq->bic->bfqq[0]->next_rq ? bfqq->bic->bfqq[0] : NULL; /* @@ -4403,6 +4421,7 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd) bfqq = bfqq->bic->bfqq[0]; else if (bfq_bfqq_has_waker(bfqq) && bfq_bfqq_busy(bfqq->waker_bfqq) && + bfqq->next_rq && bfq_serv_to_charge(bfqq->waker_bfqq->next_rq, bfqq->waker_bfqq) <= bfq_bfqq_budget_left(bfqq->waker_bfqq) @@ -4800,7 +4819,7 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq) struct hlist_node *n; if (bfqq == bfqd->in_service_queue) { - __bfq_bfqq_expire(bfqd, bfqq); + __bfq_bfqq_expire(bfqd, bfqq, BFQQE_BUDGET_TIMEOUT); bfq_schedule_dispatch(bfqd); }
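Tying the pieces of this last patch together: the behavioral core is the reason-aware requeue decision in __bfq_bfqq_expire(). The following standalone C sketch captures that decision with simplified, invented names; the real function also handles budget-timeout bookkeeping and the pos-tree update.

#include <stdbool.h>

enum expiration_reason { REASON_PREEMPTED, REASON_OTHER };

/* Sketch of the requeue decision now made in __bfq_bfqq_expire():
 * an empty queue is no longer unconditionally dequeued; if it was
 * forcibly expired by a preemption attempt while it still deserves
 * I/O-dispatch plugging, it stays scheduled, so that queues with a
 * higher finish virtual time cannot overtake it. */
static bool keep_queue_scheduled(bool queue_is_empty,
				 enum expiration_reason reason,
				 bool needs_plugging_for_guarantees)
{
	if (!queue_is_empty)
		return true;	/* still has I/O: stays scheduled anyway */

	return reason == REASON_PREEMPTED && needs_plugging_for_guarantees;
}

This is also why the rest of the patch adds NULL checks on next_rq in bfq_select_queue(): once empty queues may remain scheduled, code that used to assume every busy queue has a next request can no longer do so.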