[7/8] blk-mq: grab rq->refcount before calling ->fn in blk_mq_tagset_busy_iter

Message ID 20210425085753.2617424-8-ming.lei@redhat.com
State New
Series [1/8] Revert "blk-mq: Fix races between blk_mq_update_nr_hw_queues() and iterating over tags"

Commit Message

Ming Lei April 25, 2021, 8:57 a.m. UTC
Grab rq->refcount before calling ->fn in blk_mq_tagset_busy_iter(); this
prevents the request from being re-used while ->fn is running. The
approach is the same as the one used when handling timeouts.

Fix request UAFs related to completion races or queue releasing:

- If a request is referenced before rq->q is frozen, the queue won't be
frozen before the reference is released during iteration.

- If a request is referenced after rq->q is frozen,
refcount_inc_not_zero() will return false and we won't iterate over this
request.

However, one request UAF is still not covered: refcount_inc_not_zero()
may read a freed request; that will be handled in the next patch.
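
For reference, a minimal userspace model of the guard used above
(illustration only; refcount_inc_not_zero() here is a simplified
stand-in for the kernel helper of the same name, without its
saturation semantics):

#include <stdatomic.h>
#include <stdbool.h>

/* Take a reference only if the object still holds at least one:
 * a zero count means the request has already been released. */
static bool refcount_inc_not_zero(atomic_uint *ref)
{
	unsigned int old = atomic_load(ref);

	do {
		if (old == 0)
			return false;	/* request already freed */
	} while (!atomic_compare_exchange_weak(ref, &old, old + 1));

	return true;
}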

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq-tag.c | 14 +++++++++++---
 block/blk-mq.c     | 14 +++++++++-----
 block/blk-mq.h     |  1 +
 3 files changed, 21 insertions(+), 8 deletions(-)

Comments

Bart Van Assche April 25, 2021, 6:55 p.m. UTC | #1
On 4/25/21 1:57 AM, Ming Lei wrote:
> However, one request UAF is still not covered: refcount_inc_not_zero()
> may read a freed request; that will be handled in the next patch.

This means that patch "blk-mq: clear stale request in tags->rq[] before
freeing one request pool" should come before this patch.

> @@ -276,12 +277,15 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
>  		rq = tags->static_rqs[bitnr];
>  	else
>  		rq = tags->rqs[bitnr];
> -	if (!rq)
> +	if (!rq || !refcount_inc_not_zero(&rq->ref))
>  		return true;
>  	if ((iter_data->flags & BT_TAG_ITER_STARTED) &&
>  	    !blk_mq_request_started(rq))
> -		return true;
> -	return iter_data->fn(rq, iter_data->data, reserved);
> +		ret = true;
> +	else
> +		ret = iter_data->fn(rq, iter_data->data, reserved);
> +	blk_mq_put_rq_ref(rq);
> +	return ret;
>  }

Even if patches 7/8 and 8/8 were reordered, the above code
introduces a new use-after-free, a use-after-free that is much worse
than the UAF in kernel v5.11. The following sequence can be triggered by
the above code:
* bt_tags_iter() reads tags->rqs[bitnr] and stores the request pointer
in the 'rq' variable.
* Request 'rq' completes, tags->rqs[bitnr] is cleared and the memory
that backs that request is freed.
* The memory that backs 'rq' is used for another purpose and the request
reference count becomes nonzero.
* bt_tags_iter() increments the request reference count and thereby
corrupts memory.
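
Spelled out as an interleaving (a hypothetical timeline in C-comment
form, added for illustration; it is not code from the patch):

/*
 * CPU A: bt_tags_iter()            CPU B: completion / pool teardown
 *
 * rq = tags->rqs[bitnr];
 *                                  request completes;
 *                                  tags->rqs[bitnr] is cleared;
 *                                  the request pool is freed;
 *                                  the memory is reused and the bytes
 *                                  at rq->ref happen to be nonzero
 * refcount_inc_not_zero(&rq->ref)
 *   succeeds and increments a field
 *   of whatever object now owns
 *   that memory
 */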

Bart.
Ming Lei April 26, 2021, 12:41 a.m. UTC | #2
On Sun, Apr 25, 2021 at 11:55:22AM -0700, Bart Van Assche wrote:
> On 4/25/21 1:57 AM, Ming Lei wrote:
> > However, one request UAF is still not covered: refcount_inc_not_zero()
> > may read a freed request; that will be handled in the next patch.
> 
> This means that patch "blk-mq: clear stale request in tags->rq[] before
> freeing one request pool" should come before this patch.

It doesn't matter. That patch alone can't avoid the UAF either; we need
to grab req->ref to prevent the queue from being frozen.

> > @@ -276,12 +277,15 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
> >  		rq = tags->static_rqs[bitnr];
> >  	else
> >  		rq = tags->rqs[bitnr];
> > -	if (!rq)
> > +	if (!rq || !refcount_inc_not_zero(&rq->ref))
> >  		return true;
> >  	if ((iter_data->flags & BT_TAG_ITER_STARTED) &&
> >  	    !blk_mq_request_started(rq))
> > -		return true;
> > -	return iter_data->fn(rq, iter_data->data, reserved);
> > +		ret = true;
> > +	else
> > +		ret = iter_data->fn(rq, iter_data->data, reserved);
> > +	blk_mq_put_rq_ref(rq);
> > +	return ret;
> >  }
> 
> Even if patches 7/8 and 8/8 were reordered, the above code
> introduces a new use-after-free, a use-after-free that is much worse
> than the UAF in kernel v5.11. The following sequence can be triggered by
> the above code:
> * bt_tags_iter() reads tags->rqs[bitnr] and stores the request pointer
> in the 'rq' variable.
> * Request 'rq' completes, tags->rqs[bitnr] is cleared and the memory
> that backs that request is freed.
> * The memory that backs 'rq' is used for another purpose and the request
> reference count becomes nonzero.

That means the 'rq' has been re-allocated and has become in-flight
again.

> * bt_tags_iter() increments the request reference count and thereby
> corrupts memory.

No. Once refcount_inc_not_zero() succeeds in bt_tags_iter(), no one can
free the request until ->fn() returns, so why do you think memory gets
corrupted? This pattern is no different from the timeout code's usage,
is it?

If IO activity is allowed while iterating over tagset requests, ->fn()
and in-flight IO can always run concurrently; it is the caller's
responsibility to handle that race. That is why you can see many callers
quiesce queues before calling blk_mq_tagset_busy_iter(), though
quiescing isn't required if ->fn() only READs the request.
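
A sketch of that caller-side convention (blk_mq_tagset_busy_iter() and
blk_mq_request_started() are the existing block-layer helpers; the
callback and its counter are hypothetical):

#include <linux/blk-mq.h>

/* Hypothetical read-only callback: it never writes to the request, so
 * per the reasoning above no quiesce is needed around the iteration. */
static bool count_started(struct request *rq, void *data, bool reserved)
{
	unsigned int *count = data;

	if (blk_mq_request_started(rq))
		(*count)++;
	return true;	/* keep iterating */
}

static unsigned int count_started_requests(struct blk_mq_tag_set *set)
{
	unsigned int count = 0;

	blk_mq_tagset_busy_iter(set, count_started, &count);
	return count;
}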

Your patch and the current in-tree code have the same 'problem' too, if
you consider it a problem. Clearing ->rqs[tag] or holding a lock before
calling ->fn() cannot avoid it either, can it?

Finally, this is a tagset-wide request walk, so it should be safe for
->fn to iterate over requests in this way. The only catch is that
req->tag may no longer match 'bitnr'. We could handle that simply by
checking 'req->tag == bitnr' in bt_tags_iter() after req->ref is
grabbed, though I'm still not sure it is absolutely necessary.
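
Such a check could look like this inside bt_tags_iter(), right after
the reference is grabbed (a sketch on top of this patch, not something
that has been posted):

	if (!rq || !refcount_inc_not_zero(&rq->ref))
		return true;
	/* The tag may have been freed and reassigned between reading
	 * tags->rqs[bitnr] and grabbing the reference; skip the request
	 * if it no longer owns this tag. */
	if (rq->tag != bitnr) {
		blk_mq_put_rq_ref(rq);
		return true;
	}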


Thanks,
Ming

Patch

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 2a37731e8244..489d2db89856 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -264,6 +264,7 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
 	struct blk_mq_tags *tags = iter_data->tags;
 	bool reserved = iter_data->flags & BT_TAG_ITER_RESERVED;
 	struct request *rq;
+	bool ret;
 
 	if (!reserved)
 		bitnr += tags->nr_reserved_tags;
@@ -276,12 +277,15 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
 		rq = tags->static_rqs[bitnr];
 	else
 		rq = tags->rqs[bitnr];
-	if (!rq)
+	if (!rq || !refcount_inc_not_zero(&rq->ref))
 		return true;
 	if ((iter_data->flags & BT_TAG_ITER_STARTED) &&
 	    !blk_mq_request_started(rq))
-		return true;
-	return iter_data->fn(rq, iter_data->data, reserved);
+		ret = true;
+	else
+		ret = iter_data->fn(rq, iter_data->data, reserved);
+	blk_mq_put_rq_ref(rq);
+	return ret;
 }
 
 /**
@@ -348,6 +352,10 @@ void blk_mq_all_tag_iter(struct blk_mq_tags *tags, busy_tag_iter_fn *fn,
  *		indicates whether or not @rq is a reserved request. Return
  *		true to continue iterating tags, false to stop.
  * @priv:	Will be passed as second argument to @fn.
+ *
+ * We grab one request reference before calling @fn and release it after
+ * @fn returns. So far passing the request reference to a new context
+ * in @fn is not supported.
  */
 void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
 		busy_tag_iter_fn *fn, void *priv)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index e3d1067b10c3..9a4d520740a1 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -925,6 +925,14 @@ static bool blk_mq_req_expired(struct request *rq, unsigned long *next)
 	return false;
 }
 
+void blk_mq_put_rq_ref(struct request *rq)
+{
+	if (is_flush_rq(rq, rq->mq_hctx))
+		rq->end_io(rq, 0);
+	else if (refcount_dec_and_test(&rq->ref))
+		__blk_mq_free_request(rq);
+}
+
 static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
 		struct request *rq, void *priv, bool reserved)
 {
@@ -958,11 +966,7 @@ static bool blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
 	if (blk_mq_req_expired(rq, next))
 		blk_mq_rq_timed_out(rq, reserved);
 
-	if (is_flush_rq(rq, hctx))
-		rq->end_io(rq, 0);
-	else if (refcount_dec_and_test(&rq->ref))
-		__blk_mq_free_request(rq);
-
+	blk_mq_put_rq_ref(rq);
 	return true;
 }
 
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 3616453ca28c..143afe42c63a 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -47,6 +47,7 @@ void blk_mq_add_to_requeue_list(struct request *rq, bool at_head,
 void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list);
 struct request *blk_mq_dequeue_from_ctx(struct blk_mq_hw_ctx *hctx,
 					struct blk_mq_ctx *start);
+void blk_mq_put_rq_ref(struct request *rq);
 
 /*
  * Internal helpers for allocating/freeing the request map