
[BUGFIX,IMPROVEMENT,07/14] block, bfq: do not plug I/O of in-service queue when harmful

Message ID: 20190129110638.12652-8-paolo.valente@linaro.org
State: Accepted
Commit: ac8b0cb415f3aa9162009d39624501d37031533b
Series: batch of patches for next linux release

Commit Message

Paolo Valente Jan. 29, 2019, 11:06 a.m. UTC
If the in-service bfq_queue is sync and remains temporarily idle, then
I/O dispatching (from other queues) may be plugged. This may be done for
two reasons: either to boost throughput, or to preserve the bandwidth
share of the in-service queue. In the first case, if the I/O of the
in-service queue, when it finally arrives, consists only of one small
I/O request, then it makes sense to plug the I/O of the in-service
queue as well. In fact, serving such a small request immediately is
likely to lower throughput instead of boosting it, whereas waiting a
little bit is likely to let that request grow, thanks to request
merging, and become more profitable in terms of throughput (this is
likely to happen exactly because the I/O of the queue has been
detected to boost throughput).

At the opposite end, if I/O dispatching is being plugged only to
preserve the bandwidth of the in-service queue, then it is better not
to plug the I/O of the in-service queue as well, because such plugging
is likely to cause only a loss of bandwidth for the queue.

Unfortunately, no distinction is made between the two cases, and the
I/O of the in-service queue is always plugged whenever just a small I/O
request arrives. This commit introduces the missing distinction and
avoids plugging when it would be harmful.
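
For illustration, the check that results from this change can be
modeled as follows. This is a minimal, standalone sketch, not the
kernel code itself: keep_io_plugged() and the main() harness are
hypothetical, while the three flags mirror the quantities tested in
bfq_rq_enqueued() (small_req, idling_boosts_thr_without_issues(),
budget_timeout).

#include <stdbool.h>
#include <stdio.h>

/*
 * Keep I/O dispatch plugged only if all of the following hold:
 * - the newly arrived request is small (it may still grow by merging),
 * - idling was chosen to boost throughput, not only to preserve the
 *   bandwidth of the in-service queue,
 * - the budget timeout of the queue has not expired.
 */
static bool keep_io_plugged(bool small_req, bool idling_boosts_thr,
			    bool budget_timeout)
{
	return small_req && idling_boosts_thr && !budget_timeout;
}

int main(void)
{
	/* Idling boosts throughput: plugging the small request pays off. */
	printf("boost, small req      -> plug: %d\n",
	       keep_io_plugged(true, true, false));
	/* Idling only preserves guarantees: plugging would waste bandwidth. */
	printf("guarantees, small req -> plug: %d\n",
	       keep_io_plugged(true, false, false));
	return 0;
}

In the second case the sketch returns false, which corresponds to the
new behavior in the patch below: the wait_request flag is cleared and
the idle timer is cancelled, so the small request is not held back.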

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>

---
 block/bfq-iosched.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

-- 
2.20.1

Patch

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 2756f4b1432b..a6fe60114ade 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4599,28 +4599,31 @@  static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 		bool budget_timeout = bfq_bfqq_budget_timeout(bfqq);
 
 		/*
-		 * There is just this request queued: if the request
-		 * is small and the queue is not to be expired, then
-		 * just exit.
+		 * There is just this request queued: if
+		 * - the request is small, and
+		 * - we are idling to boost throughput, and
+		 * - the queue is not to be expired,
+		 * then just exit.
 		 *
 		 * In this way, if the device is being idled to wait
 		 * for a new request from the in-service queue, we
 		 * avoid unplugging the device and committing the
-		 * device to serve just a small request. On the
-		 * contrary, we wait for the block layer to decide
-		 * when to unplug the device: hopefully, new requests
-		 * will be merged to this one quickly, then the device
-		 * will be unplugged and larger requests will be
-		 * dispatched.
+		 * device to serve just a small request. In contrast
+		 * we wait for the block layer to decide when to
+		 * unplug the device: hopefully, new requests will be
+		 * merged to this one quickly, then the device will be
+		 * unplugged and larger requests will be dispatched.
 		 */
-		if (small_req && !budget_timeout)
+		if (small_req && idling_boosts_thr_without_issues(bfqd, bfqq) &&
+		    !budget_timeout)
 			return;
 
 		/*
-		 * A large enough request arrived, or the queue is to
-		 * be expired: in both cases disk idling is to be
-		 * stopped, so clear wait_request flag and reset
-		 * timer.
+		 * A large enough request arrived, or idling is being
+		 * performed to preserve service guarantees, or
+		 * finally the queue is to be expired: in all these
+		 * cases disk idling is to be stopped, so clear
+		 * wait_request flag and reset timer.
 		 */
 		bfq_clear_bfqq_wait_request(bfqq);
 		hrtimer_try_to_cancel(&bfqd->idle_slice_timer);