
[v4,00/21] ibmvfc: initial MQ development

Message ID 20210111231225.105347-1-tyreld@linux.ibm.com

Message

Tyrel Datwyler Jan. 11, 2021, 11:12 p.m. UTC
Recent updates in pHyp Firmware and VIOS releases provide new infrastructure
towards enabling Subordinate Command Response Queues (Sub-CRQs) such that each
Sub-CRQ is a channel backed by an actual hardware queue in the FC stack on the
partner VIOS. Sub-CRQs are registered with the firmware via hypercalls and then
negotiated with the VIOS via new Management Datagrams (MADs) for channel setup.
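
For readers unfamiliar with Sub-CRQs, the registration hypercall is typically wrapped in a
small helper much like the one the ibmvnic driver already uses. The sketch below is modeled
on that driver and is illustrative only; the wrapper actually added in patch 5 and the
return-buffer layout are assumptions, not the patch contents:

	static long h_reg_sub_crq(unsigned long unit_address, unsigned long ioba,
				  unsigned long length, unsigned long *cookie,
				  unsigned long *irq)
	{
		unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
		long rc;

		/* Register one Sub-CRQ page with firmware for this adapter */
		rc = plpar_hcall(H_REG_SUB_CRQ, retbuf, unit_address, ioba, length);
		*cookie = retbuf[0];	/* handle identifying this channel (assumed layout) */
		*irq = retbuf[1];	/* interrupt source for the channel (assumed layout) */

		return rc;
	}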

This initial implementation adds the necessary Sub-CRQ framework and implements
the new MADs for negotiating and assigning a set of Sub-CRQs to associated VIOS
HW backed channels.

This latest series is completely rebased and reimplemented on top of the recent
("ibmvfc: MQ prepartory locking work") series. [1]

[1] https://lore.kernel.org/linux-scsi/20210106201835.1053593-1-tyreld@linux.ibm.com/

changes in v4:
* Series rebased and reworked on top of previous ibmvfc locking series
* Dropped all previous Reviewed-by tags

changes in v3:
* Patch 4: changed firmware support logging to dev_warn_once
* Patch 6: adjusted locking
* Patch 15: dropped logging verbosity, moved cancel event tracking into subqueue
* Patch 17: removed write permission for migration module parameters
	    drive hard reset after update to num of scsi channels

changes in v2:
* Patch 4: NULL'd scsi_scrq reference after deallocation
* Patch 6: Added switch case to handle XPORT event
* Patch 9: fixed ibmvfc_event leak and double free
* added support for cancel command with MQ
* added parameter toggles for MQ settings

Tyrel Datwyler (21):
  ibmvfc: add vhost fields and defaults for MQ enablement
  ibmvfc: move event pool init/free routines
  ibmvfc: init/free event pool during queue allocation/free
  ibmvfc: add size parameter to ibmvfc_init_event_pool
  ibmvfc: define hcall wrapper for registering a Sub-CRQ
  ibmvfc: add Subordinate CRQ definitions
  ibmvfc: add alloc/dealloc routines for SCSI Sub-CRQ Channels
  ibmvfc: add Sub-CRQ IRQ enable/disable routine
  ibmvfc: add handlers to drain and complete Sub-CRQ responses
  ibmvfc: define Sub-CRQ interrupt handler routine
  ibmvfc: map/request irq and register Sub-CRQ interrupt handler
  ibmvfc: implement channel enquiry and setup commands
  ibmvfc: advertise client support for using hardware channels
  ibmvfc: set and track hw queue in ibmvfc_event struct
  ibmvfc: send commands down HW Sub-CRQ when channelized
  ibmvfc: register Sub-CRQ handles with VIOS during channel setup
  ibmvfc: add cancel mad initialization helper
  ibmvfc: send Cancel MAD down each hw scsi channel
  ibmvfc: purge scsi channels after transport loss/reset
  ibmvfc: enable MQ and set reasonable defaults
  ibmvfc: provide modules parameters for MQ settings

 drivers/scsi/ibmvscsi/ibmvfc.c | 914 ++++++++++++++++++++++++++++-----
 drivers/scsi/ibmvscsi/ibmvfc.h |  38 ++
 2 files changed, 824 insertions(+), 128 deletions(-)

Comments

Brian King Jan. 12, 2021, 9:46 p.m. UTC | #1
On 1/11/21 5:12 PM, Tyrel Datwyler wrote:
> diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
> index b0b0212344f3..24e1278acfeb 100644
> --- a/drivers/scsi/ibmvscsi/ibmvfc.c
> +++ b/drivers/scsi/ibmvscsi/ibmvfc.c
> @@ -2418,18 +2418,79 @@ static struct ibmvfc_event *ibmvfc_init_tmf(struct ibmvfc_queue *queue,
>  	return evt;
>  }
>  
> -/**
> - * ibmvfc_cancel_all - Cancel all outstanding commands to the device
> - * @sdev:	scsi device to cancel commands
> - * @type:	type of error recovery being performed
> - *
> - * This sends a cancel to the VIOS for the specified device. This does
> - * NOT send any abort to the actual device. That must be done separately.
> - *
> - * Returns:
> - *	0 on success / other on failure
> - **/
> -static int ibmvfc_cancel_all(struct scsi_device *sdev, int type)
> +static int ibmvfc_cancel_all_mq(struct scsi_device *sdev, int type)
> +{
> +	struct ibmvfc_host *vhost = shost_priv(sdev->host);
> +	struct ibmvfc_event *evt, *found_evt, *temp;
> +	struct ibmvfc_queue *queues = vhost->scsi_scrqs.scrqs;
> +	unsigned long flags;
> +	int num_hwq, i;
> +	LIST_HEAD(cancelq);
> +	u16 status;
> +
> +	ENTER;
> +	spin_lock_irqsave(vhost->host->host_lock, flags);
> +	num_hwq = vhost->scsi_scrqs.active_queues;
> +	for (i = 0; i < num_hwq; i++) {
> +		spin_lock(queues[i].q_lock);
> +		spin_lock(&queues[i].l_lock);
> +		found_evt = NULL;
> +		list_for_each_entry(evt, &queues[i].sent, queue_list) {
> +			if (evt->cmnd && evt->cmnd->device == sdev) {
> +				found_evt = evt;
> +				break;
> +			}
> +		}
> +		spin_unlock(&queues[i].l_lock);
> +

I really don't like the way the first for loop grabs all the q_locks, and then
there is a second for loop that drops all these locks... I think this code would
be cleaner if it looked like:

		if (found_evt && vhost->logged_in) {
			evt = ibmvfc_init_tmf(&queues[i], sdev, type);
			evt->sync_iu = &queues[i].cancel_rsp;
			ibmvfc_send_event(evt, vhost, default_timeout);
			list_add_tail(&evt->cancel, &cancelq);
		}

		spin_unlock(queues[i].q_lock);

	}

> +		if (!found_evt)
> +			continue;
> +
> +		if (vhost->logged_in) {
> +			evt = ibmvfc_init_tmf(&queues[i], sdev, type);
> +			evt->sync_iu = &queues[i].cancel_rsp;
> +			ibmvfc_send_event(evt, vhost, default_timeout);
> +			list_add_tail(&evt->cancel, &cancelq);
> +		}
> +	}
> +
> +	for (i = 0; i < num_hwq; i++)
> +		spin_unlock(queues[i].q_lock);
> +	spin_unlock_irqrestore(vhost->host->host_lock, flags);
> +
> +	if (list_empty(&cancelq)) {
> +		if (vhost->log_level > IBMVFC_DEFAULT_LOG_LEVEL)
> +			sdev_printk(KERN_INFO, sdev, "No events found to cancel\n");
> +		return 0;
> +	}
> +
> +	sdev_printk(KERN_INFO, sdev, "Cancelling outstanding commands.\n");
> +
> +	list_for_each_entry_safe(evt, temp, &cancelq, cancel) {
> +		wait_for_completion(&evt->comp);
> +		status = be16_to_cpu(evt->queue->cancel_rsp.mad_common.status);

You probably want a list_del(&evt->cancel) here.

> +		ibmvfc_free_event(evt);
> +
> +		if (status != IBMVFC_MAD_SUCCESS) {
> +			sdev_printk(KERN_WARNING, sdev, "Cancel failed with rc=%x\n", status);
> +			switch (status) {
> +			case IBMVFC_MAD_DRIVER_FAILED:
> +			case IBMVFC_MAD_CRQ_ERROR:
> +			/* Host adapter most likely going through reset, return success to
> +			 * the caller will wait for the command being cancelled to get returned
> +			 */
> +				break;
> +			default:
> +				break;

I think this default break needs to be a return -EIO.

> +			}
> +		}
> +	}
> +
> +	sdev_printk(KERN_INFO, sdev, "Successfully cancelled outstanding commands\n");
> +	return 0;
> +}
> +
> +static int ibmvfc_cancel_all_sq(struct scsi_device *sdev, int type)
>  {
>  	struct ibmvfc_host *vhost = shost_priv(sdev->host);
>  	struct ibmvfc_event *evt, *found_evt;
> @@ -2498,6 +2559,27 @@ static int ibmvfc_cancel_all(struct scsi_device *sdev, int type)
>  	return 0;
>  }
>
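
Putting the list_del() and -EIO suggestions together, the completion loop might end up
looking roughly like this (a sketch against the hunk quoted above, not the actual respin):

	list_for_each_entry_safe(evt, temp, &cancelq, cancel) {
		wait_for_completion(&evt->comp);
		status = be16_to_cpu(evt->queue->cancel_rsp.mad_common.status);
		/* drop the event from the local cancel list before freeing it */
		list_del(&evt->cancel);
		ibmvfc_free_event(evt);

		if (status != IBMVFC_MAD_SUCCESS) {
			sdev_printk(KERN_WARNING, sdev, "Cancel failed with rc=%x\n", status);
			switch (status) {
			case IBMVFC_MAD_DRIVER_FAILED:
			case IBMVFC_MAD_CRQ_ERROR:
				/* Host adapter is most likely going through reset; return
				 * success so the caller waits for the cancelled commands
				 * to be returned.
				 */
				break;
			default:
				/* treat any other failure as a hard error */
				return -EIO;
			}
		}
	}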
Tyrel Datwyler Jan. 13, 2021, 12:33 a.m. UTC | #2
On 1/12/21 2:54 PM, Brian King wrote:
> On 1/11/21 5:12 PM, Tyrel Datwyler wrote:
>> Introduce several new vhost fields for managing MQ state of the adapter
>> as well as initial defaults for MQ enablement.
>>
>> Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
>> ---
>>  drivers/scsi/ibmvscsi/ibmvfc.c | 8 ++++++++
>>  drivers/scsi/ibmvscsi/ibmvfc.h | 9 +++++++++
>>  2 files changed, 17 insertions(+)
>>
>> diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
>> index ba95438a8912..9200fe49c57e 100644
>> --- a/drivers/scsi/ibmvscsi/ibmvfc.c
>> +++ b/drivers/scsi/ibmvscsi/ibmvfc.c
>> @@ -3302,6 +3302,7 @@ static struct scsi_host_template driver_template = {
>>  	.max_sectors = IBMVFC_MAX_SECTORS,
>>  	.shost_attrs = ibmvfc_attrs,
>>  	.track_queue_depth = 1,
>> +	.host_tagset = 1,
>
> This doesn't seem right. You are setting host_tagset, which means you want a
> shared, host wide, tag set for commands. It also means that the total
> queue depth for the host is can_queue. However, it looks like you are allocating
> max_requests events for each sub crq, which means you are over allocating memory.

With the shared tagset yes the queue depth for the host is can_queue, but this
also implies that the max queue depth for each hw queue is also can_queue. So,
in the worst case that all commands are queued down the same hw queue we need an
event pool with can_queue commands.

>
> Looking at this closer, we might have bigger problems. There is a host wide
> max number of commands that the VFC host supports, which gets returned on
> NPIV Login. This value can change across a live migration event.

From what I understand the max commands can only become less.

>
> The ibmvfc driver, which does the same thing the lpfc driver does, modifies
> can_queue on the scsi_host *after* the tag set has been allocated. This looks
> to be a concern with ibmvfc, not sure about lpfc, as it doesn't look like
> we look at can_queue once the tag set is setup, and I'm not seeing a good way
> to dynamically change the host queue depth once the tag set is setup.
>
> Unless I'm missing something, our best options appear to either be to implement
> our own host wide busy reference counting, which doesn't sound very good, or
> we need to add some API to block / scsi that allows us to dynamically change
> can_queue.

Changing can_queue won't do us any good with the shared tagset because each
queue still needs to be able to queue can_queue number of commands in the worst
case.

The complaint before was that not using shared tags increases the tag memory
allocation, because tags are statically allocated for each queue. The question is
which claims more memory: our event pool allocation or the per-queue tagset
allocation.

If we choose not to use the shared tagset, then the queue depth for each hw queue
becomes (can_queue / nr_hw_queues).

-Tyrel

>
> Thanks,
>
> Brian
>
>
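
For illustration, the two event pool sizing strategies being weighed above amount to
roughly the following (the size argument is the one patch 4 adds to
ibmvfc_init_event_pool(); the helper signature and call site are assumed here, not
taken from the actual patches):

	/*
	 * Shared host-wide tag set: any single hw queue may, in the worst case,
	 * see every host-wide tag, so each queue's event pool is sized to the
	 * full host depth.  Split tag sets: each queue only ever sees its share.
	 */
	pool_size = shared_tagset ? shost->can_queue
				  : shost->can_queue / shost->nr_hw_queues;

	ibmvfc_init_event_pool(vhost, &vhost->scsi_scrqs.scrqs[i], pool_size);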
Brian King Jan. 13, 2021, 5:13 p.m. UTC | #3
On 1/12/21 6:33 PM, Tyrel Datwyler wrote:
> On 1/12/21 2:54 PM, Brian King wrote:
>> On 1/11/21 5:12 PM, Tyrel Datwyler wrote:
>>> Introduce several new vhost fields for managing MQ state of the adapter
>>> as well as initial defaults for MQ enablement.
>>>
>>> Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
>>> ---
>>>  drivers/scsi/ibmvscsi/ibmvfc.c | 8 ++++++++
>>>  drivers/scsi/ibmvscsi/ibmvfc.h | 9 +++++++++
>>>  2 files changed, 17 insertions(+)
>>>
>>> diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
>>> index ba95438a8912..9200fe49c57e 100644
>>> --- a/drivers/scsi/ibmvscsi/ibmvfc.c
>>> +++ b/drivers/scsi/ibmvscsi/ibmvfc.c
>>> @@ -3302,6 +3302,7 @@ static struct scsi_host_template driver_template = {
>>>  	.max_sectors = IBMVFC_MAX_SECTORS,
>>>  	.shost_attrs = ibmvfc_attrs,
>>>  	.track_queue_depth = 1,
>>> +	.host_tagset = 1,
>>
>> This doesn't seem right. You are setting host_tagset, which means you want a
>> shared, host wide, tag set for commands. It also means that the total
>> queue depth for the host is can_queue. However, it looks like you are allocating
>> max_requests events for each sub crq, which means you are over allocating memory.
>
> With the shared tagset yes the queue depth for the host is can_queue, but this
> also implies that the max queue depth for each hw queue is also can_queue. So,
> in the worst case that all commands are queued down the same hw queue we need an
> event pool with can_queue commands.
>
>>
>> Looking at this closer, we might have bigger problems. There is a host wide
>> max number of commands that the VFC host supports, which gets returned on
>> NPIV Login. This value can change across a live migration event.
>
> From what I understand the max commands can only become less.
>
>>
>> The ibmvfc driver, which does the same thing the lpfc driver does, modifies
>> can_queue on the scsi_host *after* the tag set has been allocated. This looks
>> to be a concern with ibmvfc, not sure about lpfc, as it doesn't look like
>> we look at can_queue once the tag set is setup, and I'm not seeing a good way
>> to dynamically change the host queue depth once the tag set is setup.
>>
>> Unless I'm missing something, our best options appear to either be to implement
>> our own host wide busy reference counting, which doesn't sound very good, or
>> we need to add some API to block / scsi that allows us to dynamically change
>> can_queue.
>
> Changing can_queue won't do us any good with the shared tagset because each
> queue still needs to be able to queue can_queue number of commands in the worst
> case.


The issue I'm trying to highlight here is the following scenario:

1. We set shost->can_queue, then call scsi_add_host, which allocates the tag set.

2. On our NPIV login response from the VIOS, we might get a lower value than we
initially set in shost->can_queue, so we update it, but nobody ever looks at it
again, and we don't have any protection against sending too many commands to the host.


Basically, we no longer have any code that ensures we don't send more
commands to the VIOS than we are told it supports. According to the architecture,
if we actually do this, the VIOS will do an h_free_crq, which would be a bit
of a bug on our part.
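
In rough pseudo-C, that sequence is (illustrative only; npiv_max_cmds is a made-up
name for the limit returned by the VIOS):

	shost->can_queue = max_requests;	/* initial value at probe time */
	scsi_add_host(shost, dev);		/* blk-mq sizes the tag set from can_queue here */

	/* ... later, the NPIV login response arrives ... */
	shost->can_queue = npiv_max_cmds;	/* may be smaller, but blk-mq never sees it,
						 * and nothing else enforces the limit anymore */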

I don't think it was ever clearly defined in the API that a driver can
change shost->can_queue after calling scsi_add_host, but up until
commit 6eb045e092efefafc6687409a6fa6d1dabf0fb69, this worked and now
it doesn't. 

I started looking through drivers that do this, and so far, it looks like the
following drivers do: ibmvfc, lpfc, aic94xx, libfc, BusLogic, and likely others...

We probably need an API that lets us change shost->can_queue dynamically.

Thanks,

Brian

-- 
Brian King
Power Linux I/O
IBM Linux Technology Center
Brian King Jan. 13, 2021, 6:57 p.m. UTC | #4
On 1/11/21 5:12 PM, Tyrel Datwyler wrote:
> @@ -5880,12 +5936,13 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id)
>  
>  	shost->transportt = ibmvfc_transport_template;
>  	shost->can_queue = max_requests;
> +	shost->can_queue = (max_requests / nr_scsi_hw_queues);

This looks to be in conflict with the fact that the first patch requested a shared tag set, right?

>  	shost->max_lun = max_lun;
>  	shost->max_id = max_targets;
>  	shost->max_sectors = IBMVFC_MAX_SECTORS;
>  	shost->max_cmd_len = IBMVFC_MAX_CDB_LEN;
>  	shost->unique_id = shost->host_no;
> -	shost->nr_hw_queues = IBMVFC_MQ ? IBMVFC_SCSI_HW_QUEUES : 1;
> +	shost->nr_hw_queues = mq_enabled ? nr_scsi_hw_queues : 1;

You might want to range check this, to make sure it's sane. (One possible check is sketched just after this message.)

>  
>  	vhost = shost_priv(shost);
>  	INIT_LIST_HEAD(&vhost->targets);
> @@ -5897,8 +5954,8 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id)
>  	vhost->log_level = log_level;
>  	vhost->task_set = 1;
>  
> -	vhost->mq_enabled = IBMVFC_MQ;
> -	vhost->client_scsi_channels = IBMVFC_SCSI_CHANNELS;
> +	vhost->mq_enabled = mq_enabled;
> +	vhost->client_scsi_channels = min(nr_scsi_hw_queues, nr_scsi_channels);
>  	vhost->using_channels = 0;
>  	vhost->do_enquiry = 1;
>  
>

-- 
Brian King
Power Linux I/O
IBM Linux Technology Center
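
One way the suggested range check could look (IBMVFC_MAX_SCSI_QUEUES is an assumed
driver-defined cap here, not necessarily what the series defines):

	/* fall back to the compiled-in default if the module parameter is bogus */
	if (nr_scsi_hw_queues < 1 || nr_scsi_hw_queues > IBMVFC_MAX_SCSI_QUEUES)
		nr_scsi_hw_queues = IBMVFC_SCSI_HW_QUEUES;

	shost->nr_hw_queues = mq_enabled ? nr_scsi_hw_queues : 1;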
Brian King Jan. 13, 2021, 6:58 p.m. UTC | #5
With the exception of the few comments I've shared, the rest of this looks
good to me and you can add my:

Reviewed-by: Brian King <brking@linux.vnet.ibm.com>


Thanks,

Brian

-- 
Brian King
Power Linux I/O
IBM Linux Technology Center
Ming Lei Jan. 14, 2021, 1:27 a.m. UTC | #6
On Wed, Jan 13, 2021 at 11:13:07AM -0600, Brian King wrote:
> On 1/12/21 6:33 PM, Tyrel Datwyler wrote:
> > On 1/12/21 2:54 PM, Brian King wrote:
> >> On 1/11/21 5:12 PM, Tyrel Datwyler wrote:
> >>> Introduce several new vhost fields for managing MQ state of the adapter
> >>> as well as initial defaults for MQ enablement.
> >>>
> >>> Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
> >>> ---
> >>>  drivers/scsi/ibmvscsi/ibmvfc.c | 8 ++++++++
> >>>  drivers/scsi/ibmvscsi/ibmvfc.h | 9 +++++++++
> >>>  2 files changed, 17 insertions(+)
> >>>
> >>> diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
> >>> index ba95438a8912..9200fe49c57e 100644
> >>> --- a/drivers/scsi/ibmvscsi/ibmvfc.c
> >>> +++ b/drivers/scsi/ibmvscsi/ibmvfc.c
> >>> @@ -3302,6 +3302,7 @@ static struct scsi_host_template driver_template = {
> >>>  	.max_sectors = IBMVFC_MAX_SECTORS,
> >>>  	.shost_attrs = ibmvfc_attrs,
> >>>  	.track_queue_depth = 1,
> >>> +	.host_tagset = 1,
> >>
> >> This doesn't seem right. You are setting host_tagset, which means you want a
> >> shared, host wide, tag set for commands. It also means that the total
> >> queue depth for the host is can_queue. However, it looks like you are allocating
> >> max_requests events for each sub crq, which means you are over allocating memory.
> >
> > With the shared tagset yes the queue depth for the host is can_queue, but this
> > also implies that the max queue depth for each hw queue is also can_queue. So,
> > in the worst case that all commands are queued down the same hw queue we need an
> > event pool with can_queue commands.
> >
> >>
> >> Looking at this closer, we might have bigger problems. There is a host wide
> >> max number of commands that the VFC host supports, which gets returned on
> >> NPIV Login. This value can change across a live migration event.
> >
> > From what I understand the max commands can only become less.
> >
> >>
> >> The ibmvfc driver, which does the same thing the lpfc driver does, modifies
> >> can_queue on the scsi_host *after* the tag set has been allocated. This looks
> >> to be a concern with ibmvfc, not sure about lpfc, as it doesn't look like
> >> we look at can_queue once the tag set is setup, and I'm not seeing a good way
> >> to dynamically change the host queue depth once the tag set is setup.
> >>
> >> Unless I'm missing something, our best options appear to either be to implement
> >> our own host wide busy reference counting, which doesn't sound very good, or
> >> we need to add some API to block / scsi that allows us to dynamically change
> >> can_queue.
> >
> > Changing can_queue won't do us any good with the shared tagset because each
> > queue still needs to be able to queue can_queue number of commands in the worst
> > case.
>
> The issue I'm trying to highlight here is the following scenario:
>
> 1. We set shost->can_queue, then call scsi_add_host, which allocates the tag set.
>
> 2. On our NPIV login response from the VIOS, we might get a lower value than we
> initially set in shost->can_queue, so we update it, but nobody ever looks at it
> again, and we don't have any protection against sending too many commands to the host.
>
>
> Basically, we no longer have any code that ensures we don't send more
> commands to the VIOS than we are told it supports. According to the architecture,
> if we actually do this, the VIOS will do an h_free_crq, which would be a bit
> of a bug on our part.
>
> I don't think it was ever clearly defined in the API that a driver can
> change shost->can_queue after calling scsi_add_host, but up until
> commit 6eb045e092efefafc6687409a6fa6d1dabf0fb69, this worked and now
> it doesn't.


Actually it isn't related with commit 6eb045e092ef, because blk_mq_alloc_tag_set()
uses .can_queue to create driver tag sbitmap and request pool.

So even though without 6eb045e092ef, the updated .can_queue can't work
as expected because the max driver tag depth has been fixed by blk-mq already.

What 6eb045e092ef does is just to remove the double check on max
host-wide allowed commands because that has been respected by blk-mq
driver tag allocation already.
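
For context, this is roughly where that happens (paraphrased from scsi_mq_setup_tags()
in drivers/scsi/scsi_lib.c, not a verbatim copy):

	/* the host-wide depth is sampled exactly once, when scsi_add_host()
	 * allocates the tag set; later writes to shost->can_queue are not seen
	 * by blk-mq
	 */
	tag_set->nr_hw_queues = shost->nr_hw_queues ? : 1;
	tag_set->queue_depth = shost->can_queue;
	tag_set->cmd_size = cmd_size;
	/* ... */
	return blk_mq_alloc_tag_set(tag_set);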

>
> I started looking through drivers that do this, and so far, it looks like the
> following drivers do: ibmvfc, lpfc, aic94xx, libfc, BusLogic, and likely others...
>
> We probably need an API that lets us change shost->can_queue dynamically.


I'd suggest confirming that changing .can_queue is a real use case.


Thanks,
Ming
Brian King Jan. 14, 2021, 5:24 p.m. UTC | #7
On 1/13/21 7:27 PM, Ming Lei wrote:
> On Wed, Jan 13, 2021 at 11:13:07AM -0600, Brian King wrote:
> > On 1/12/21 6:33 PM, Tyrel Datwyler wrote:
> >> On 1/12/21 2:54 PM, Brian King wrote:
> >>> On 1/11/21 5:12 PM, Tyrel Datwyler wrote:
> >>>> Introduce several new vhost fields for managing MQ state of the adapter
> >>>> as well as initial defaults for MQ enablement.
> >>>>
> >>>> Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
> >>>> ---
> >>>>  drivers/scsi/ibmvscsi/ibmvfc.c | 8 ++++++++
> >>>>  drivers/scsi/ibmvscsi/ibmvfc.h | 9 +++++++++
> >>>>  2 files changed, 17 insertions(+)
> >>>>
> >>>> diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
> >>>> index ba95438a8912..9200fe49c57e 100644
> >>>> --- a/drivers/scsi/ibmvscsi/ibmvfc.c
> >>>> +++ b/drivers/scsi/ibmvscsi/ibmvfc.c
> >>>> @@ -3302,6 +3302,7 @@ static struct scsi_host_template driver_template = {
> >>>>  	.max_sectors = IBMVFC_MAX_SECTORS,
> >>>>  	.shost_attrs = ibmvfc_attrs,
> >>>>  	.track_queue_depth = 1,
> >>>> +	.host_tagset = 1,
> >>>
> >>> This doesn't seem right. You are setting host_tagset, which means you want a
> >>> shared, host wide, tag set for commands. It also means that the total
> >>> queue depth for the host is can_queue. However, it looks like you are allocating
> >>> max_requests events for each sub crq, which means you are over allocating memory.
> >>
> >> With the shared tagset yes the queue depth for the host is can_queue, but this
> >> also implies that the max queue depth for each hw queue is also can_queue. So,
> >> in the worst case that all commands are queued down the same hw queue we need an
> >> event pool with can_queue commands.
> >>
> >>>
> >>> Looking at this closer, we might have bigger problems. There is a host wide
> >>> max number of commands that the VFC host supports, which gets returned on
> >>> NPIV Login. This value can change across a live migration event.
> >>
> >> From what I understand the max commands can only become less.
> >>
> >>>
> >>> The ibmvfc driver, which does the same thing the lpfc driver does, modifies
> >>> can_queue on the scsi_host *after* the tag set has been allocated. This looks
> >>> to be a concern with ibmvfc, not sure about lpfc, as it doesn't look like
> >>> we look at can_queue once the tag set is setup, and I'm not seeing a good way
> >>> to dynamically change the host queue depth once the tag set is setup.
> >>>
> >>> Unless I'm missing something, our best options appear to either be to implement
> >>> our own host wide busy reference counting, which doesn't sound very good, or
> >>> we need to add some API to block / scsi that allows us to dynamically change
> >>> can_queue.
> >>
> >> Changing can_queue won't do us any good with the shared tagset because each
> >> queue still needs to be able to queue can_queue number of commands in the worst
> >> case.
> >
> > The issue I'm trying to highlight here is the following scenario:
> >
> > 1. We set shost->can_queue, then call scsi_add_host, which allocates the tag set.
> >
> > 2. On our NPIV login response from the VIOS, we might get a lower value than we
> > initially set in shost->can_queue, so we update it, but nobody ever looks at it
> > again, and we don't have any protection against sending too many commands to the host.
> >
> >
> > Basically, we no longer have any code that ensures we don't send more
> > commands to the VIOS than we are told it supports. According to the architecture,
> > if we actually do this, the VIOS will do an h_free_crq, which would be a bit
> > of a bug on our part.
> >
> > I don't think it was ever clearly defined in the API that a driver can
> > change shost->can_queue after calling scsi_add_host, but up until
> > commit 6eb045e092efefafc6687409a6fa6d1dabf0fb69, this worked and now
> > it doesn't.
>
> Actually it isn't related with commit 6eb045e092ef, because blk_mq_alloc_tag_set()
> uses .can_queue to create driver tag sbitmap and request pool.
>
> So even though without 6eb045e092ef, the updated .can_queue can't work
> as expected because the max driver tag depth has been fixed by blk-mq already.


There are two scenarios here. In the scenario of someone increasing can_queue
after the tag set is allocated, I agree, blk-mq will never take advantage
of this. However, in the scenario of someone *decreasing* can_queue after the
tag set is allocated, prior to 6eb045e092ef, the shost->host_busy code provided
this protection.

>
> What 6eb045e092ef does is just to remove the double check on max
> host-wide allowed commands because that has been respected by blk-mq
> driver tag allocation already.
>
>>
>> I started looking through drivers that do this, and so far, it looks like the
>> following drivers do: ibmvfc, lpfc, aic94xx, libfc, BusLogic, and likely others...
>>
>> We probably need an API that lets us change shost->can_queue dynamically.
>
> I'd suggest confirming that changing .can_queue is a real use case.


For ibmvfc, the total number of commands that the scsi host supports is very
much a dynamic value. It can increase and it can decrease. Live migrating
a logical partition from one system to another is the usual cause of
such a capability change. For ibmvfc, at least, this only ever happens
when we've self blocked the host and have sent back all outstanding I/O.

However, looking at other drivers that modify can_queue dynamically, this
doesn't always hold true. Looking at libfc, it looks to dynamically ramp
up and ramp down can_queue based on its ability to handle requests.

There are certainly a number of other drivers that change can_queue
after the tag set has been allocated. Some of these drivers could
likely be changed to avoid doing this, but changing them all will likely
be difficult.

Thanks,

Brian

-- 
Brian King
Power Linux I/O
IBM Linux Technology Center
Ming Lei Jan. 15, 2021, 1:47 a.m. UTC | #8
On Thu, Jan 14, 2021 at 11:24:35AM -0600, Brian King wrote:
> On 1/13/21 7:27 PM, Ming Lei wrote:
> > On Wed, Jan 13, 2021 at 11:13:07AM -0600, Brian King wrote:
> >> On 1/12/21 6:33 PM, Tyrel Datwyler wrote:
> >>> On 1/12/21 2:54 PM, Brian King wrote:
> >>>> On 1/11/21 5:12 PM, Tyrel Datwyler wrote:
> >>>>> Introduce several new vhost fields for managing MQ state of the adapter
> >>>>> as well as initial defaults for MQ enablement.
> >>>>>
> >>>>> Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com>
> >>>>> ---
> >>>>>  drivers/scsi/ibmvscsi/ibmvfc.c | 8 ++++++++
> >>>>>  drivers/scsi/ibmvscsi/ibmvfc.h | 9 +++++++++
> >>>>>  2 files changed, 17 insertions(+)
> >>>>>
> >>>>> diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
> >>>>> index ba95438a8912..9200fe49c57e 100644
> >>>>> --- a/drivers/scsi/ibmvscsi/ibmvfc.c
> >>>>> +++ b/drivers/scsi/ibmvscsi/ibmvfc.c
> >>>>> @@ -3302,6 +3302,7 @@ static struct scsi_host_template driver_template = {
> >>>>>  	.max_sectors = IBMVFC_MAX_SECTORS,
> >>>>>  	.shost_attrs = ibmvfc_attrs,
> >>>>>  	.track_queue_depth = 1,
> >>>>> +	.host_tagset = 1,
> >>>>
> >>>> This doesn't seem right. You are setting host_tagset, which means you want a
> >>>> shared, host wide, tag set for commands. It also means that the total
> >>>> queue depth for the host is can_queue. However, it looks like you are allocating
> >>>> max_requests events for each sub crq, which means you are over allocating memory.
> >>>
> >>> With the shared tagset yes the queue depth for the host is can_queue, but this
> >>> also implies that the max queue depth for each hw queue is also can_queue. So,
> >>> in the worst case that all commands are queued down the same hw queue we need an
> >>> event pool with can_queue commands.
> >>>
> >>>>
> >>>> Looking at this closer, we might have bigger problems. There is a host wide
> >>>> max number of commands that the VFC host supports, which gets returned on
> >>>> NPIV Login. This value can change across a live migration event.
> >>>
> >>> From what I understand the max commands can only become less.
> >>>
> >>>>
> >>>> The ibmvfc driver, which does the same thing the lpfc driver does, modifies
> >>>> can_queue on the scsi_host *after* the tag set has been allocated. This looks
> >>>> to be a concern with ibmvfc, not sure about lpfc, as it doesn't look like
> >>>> we look at can_queue once the tag set is setup, and I'm not seeing a good way
> >>>> to dynamically change the host queue depth once the tag set is setup.
> >>>>
> >>>> Unless I'm missing something, our best options appear to either be to implement
> >>>> our own host wide busy reference counting, which doesn't sound very good, or
> >>>> we need to add some API to block / scsi that allows us to dynamically change
> >>>> can_queue.
> >>>
> >>> Changing can_queue won't do us any good with the shared tagset because each
> >>> queue still needs to be able to queue can_queue number of commands in the worst
> >>> case.
> >>
> >> The issue I'm trying to highlight here is the following scenario:
> >>
> >> 1. We set shost->can_queue, then call scsi_add_host, which allocates the tag set.
> >>
> >> 2. On our NPIV login response from the VIOS, we might get a lower value than we
> >> initially set in shost->can_queue, so we update it, but nobody ever looks at it
> >> again, and we don't have any protection against sending too many commands to the host.
> >>
> >>
> >> Basically, we no longer have any code that ensures we don't send more
> >> commands to the VIOS than we are told it supports. According to the architecture,
> >> if we actually do this, the VIOS will do an h_free_crq, which would be a bit
> >> of a bug on our part.
> >>
> >> I don't think it was ever clearly defined in the API that a driver can
> >> change shost->can_queue after calling scsi_add_host, but up until
> >> commit 6eb045e092efefafc6687409a6fa6d1dabf0fb69, this worked and now
> >> it doesn't.
> >
> > Actually it isn't related with commit 6eb045e092ef, because blk_mq_alloc_tag_set()
> > uses .can_queue to create driver tag sbitmap and request pool.
> >
> > So even though without 6eb045e092ef, the updated .can_queue can't work
> > as expected because the max driver tag depth has been fixed by blk-mq already.
>
> There are two scenarios here. In the scenario of someone increasing can_queue
> after the tag set is allocated, I agree, blk-mq will never take advantage
> of this. However, in the scenario of someone *decreasing* can_queue after the
> tag set is allocated, prior to 6eb045e092ef, the shost->host_busy code provided
> this protection.


When .can_queue is decreased, blk-mq may still allocate a driver tag that is >
.can_queue. That could break the driver/device too, but it depends on how the
driver uses req->tag.

>
> >
> > What 6eb045e092ef does is just to remove the double check on max
> > host-wide allowed commands because that has been respected by blk-mq
> > driver tag allocation already.
> >
> >>
> >> I started looking through drivers that do this, and so far, it looks like the
> >> following drivers do: ibmvfc, lpfc, aic94xx, libfc, BusLogic, and likely others...
> >>
> >> We probably need an API that lets us change shost->can_queue dynamically.
> >
> > I'd suggest confirming that changing .can_queue is a real use case.

>
> For ibmvfc, the total number of commands that the scsi host supports is very
> much a dynamic value. It can increase and it can decrease. Live migrating
> a logical partition from one system to another is the usual cause of
> such a capability change. For ibmvfc, at least, this only ever happens
> when we've self blocked the host and have sent back all outstanding I/O.


This looks like a good use case, but the new API may have to freeze the request
queues of all LUNs, and that operation is very expensive and slow.
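
A very rough sketch of what such a (currently nonexistent) helper might have to do,
just to illustrate the cost being described; this is a hypothetical API, not a proposal:

	void scsi_host_update_can_queue(struct Scsi_Host *shost, int new_depth)
	{
		struct scsi_device *sdev;

		/* quiesce every LUN's request queue before shrinking the depth */
		shost_for_each_device(sdev, shost)
			blk_mq_freeze_queue(sdev->request_queue);

		shost->can_queue = new_depth;
		/* the shared tag set depth would also need resizing here */

		shost_for_each_device(sdev, shost)
			blk_mq_unfreeze_queue(sdev->request_queue);
	}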

>
> However, looking at other drivers that modify can_queue dynamically, this
> doesn't always hold true. Looking at libfc, it looks to dynamically ramp
> up and ramp down can_queue based on its ability to handle requests.


This case looks hard to cover with the new API, which isn't supposed to be called
in the fast path. And changing a host-wide resource from the fast path is really
not good, IMO.

>
> There are certainly a number of other drivers that change can_queue
> after the tag set has been allocated. Some of these drivers could
> likely be changed to avoid doing this, but changing them all will likely
> be difficult.


It is still better to understand why these drivers have to update
.can_queue dynamically.


Thanks,
Ming