
[v13,03/18] block: Preserve the order of requeued zoned writes

Message ID 20231018175602.2148415-4-bvanassche@acm.org
State New
Series Improve write performance for zoned UFS devices

Commit Message

Bart Van Assche Oct. 18, 2023, 5:54 p.m. UTC
blk_mq_requeue_work() inserts requeued requests in front of other
requests. This is fine for all request types except sequential zoned
writes, which must be dispatched in LBA order. Hence this patch, which
makes blk_mq_requeue_work() insert requeued sequential zoned writes at
the tail instead.

Note: moving this functionality into the mq-deadline I/O scheduler is
not an option because we want to be able to use zoned storage without
an I/O scheduler.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Damien Le Moal <dlemoal@kernel.org>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Hannes Reinecke <hare@suse.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-mq.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Comments

Damien Le Moal Oct. 19, 2023, 12:15 a.m. UTC | #1
On 10/19/23 02:54, Bart Van Assche wrote:
> blk_mq_requeue_work() inserts requeued requests in front of other
> requests. This is fine for all request types except for sequential zoned
> writes. Hence this patch.
> 
> Note: moving this functionality into the mq-deadline I/O scheduler is
> not an option because we want to be able to use zoned storage without
> I/O scheduler.
> 
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Damien Le Moal <dlemoal@kernel.org>
> Cc: Ming Lei <ming.lei@redhat.com>
> Cc: Hannes Reinecke <hare@suse.de>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  block/blk-mq.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 502dafa76716..ce6ddb249959 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1485,7 +1485,9 @@ static void blk_mq_requeue_work(struct work_struct *work)
>  			blk_mq_request_bypass_insert(rq, 0);
>  		} else {
>  			list_del_init(&rq->queuelist);
> -			blk_mq_insert_request(rq, BLK_MQ_INSERT_AT_HEAD);
> +			blk_mq_insert_request(rq,
> +					      !blk_rq_is_seq_zoned_write(rq) ?
> +					      BLK_MQ_INSERT_AT_HEAD : 0);

Something like:

		} else {
			blk_insert_t flags = BLK_MQ_INSERT_AT_HEAD;

			if (blk_rq_is_seq_zoned_write(rq))
				flags = 0;
			blk_mq_insert_request(rq, flags);
		}

would be a lot easier to read in my opinion.
Bart Van Assche Oct. 20, 2023, 7:17 p.m. UTC | #2
On 10/18/23 17:15, Damien Le Moal wrote:
> On 10/19/23 02:54, Bart Van Assche wrote:
>> blk_mq_requeue_work() inserts requeued requests in front of other
>> requests. This is fine for all request types except for sequential zoned
>> writes. Hence this patch.
>>
>> Note: moving this functionality into the mq-deadline I/O scheduler is
>> not an option because we want to be able to use zoned storage without
>> I/O scheduler.
>>
>> Cc: Christoph Hellwig <hch@lst.de>
>> Cc: Damien Le Moal <dlemoal@kernel.org>
>> Cc: Ming Lei <ming.lei@redhat.com>
>> Cc: Hannes Reinecke <hare@suse.de>
>> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
>> ---
>>   block/blk-mq.c | 4 +++-
>>   1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>> index 502dafa76716..ce6ddb249959 100644
>> --- a/block/blk-mq.c
>> +++ b/block/blk-mq.c
>> @@ -1485,7 +1485,9 @@ static void blk_mq_requeue_work(struct work_struct *work)
>>   			blk_mq_request_bypass_insert(rq, 0);
>>   		} else {
>>   			list_del_init(&rq->queuelist);
>> -			blk_mq_insert_request(rq, BLK_MQ_INSERT_AT_HEAD);
>> +			blk_mq_insert_request(rq,
>> +					      !blk_rq_is_seq_zoned_write(rq) ?
>> +					      BLK_MQ_INSERT_AT_HEAD : 0);
> 
> Something like:
> 
> 		} else {
> 			blk_insert_t flags = BLK_MQ_INSERT_AT_HEAD;
> 
> 			if (blk_rq_is_seq_zoned_write(rq))
> 				flags = 0;
> 			blk_mq_insert_request(rq, flags);
> 		}
> 
> would be a lot easier to read in my opinion.

Hi Damien,

I will make this change.

Thanks,

Bart.

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 502dafa76716..ce6ddb249959 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1485,7 +1485,9 @@ static void blk_mq_requeue_work(struct work_struct *work)
 			blk_mq_request_bypass_insert(rq, 0);
 		} else {
 			list_del_init(&rq->queuelist);
-			blk_mq_insert_request(rq, BLK_MQ_INSERT_AT_HEAD);
+			blk_mq_insert_request(rq,
+					      !blk_rq_is_seq_zoned_write(rq) ?
+					      BLK_MQ_INSERT_AT_HEAD : 0);
 		}
 	}