[RFC,0/4] scsi/iscsi: Send iscsi data from kblockd

Message ID 20220308003957.123312-1-michael.christie@oracle.com

Message

Mike Christie March 8, 2022, 12:39 a.m. UTC
The following patches allow iscsi to transmit the initial iscsi PDU and
data from the kblockd context, similar to how nvme/tcp does it. This has
the benefit that in a lot of our setups we run
number_of_sessions == CPUs, so the apps performing IO share the CPU with
the iscsi layer. Right now we often end up bouncing from kblockd to the
iscsi workqueue, which just adds an extra hop. By running from the
kblockd workqueue we see improvements of 20-30% in IOPS and throughput.
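
To make the idea concrete, here is a rough sketch only, not code from
the series; iscsi_xmit_cmd_direct() and iscsi_queue_cmd_to_workqueue()
are made-up helper names standing in for whatever the patches actually
use:

#include <linux/blk-mq.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

static int iscsi_sw_tcp_queuecommand_sketch(struct Scsi_Host *shost,
					    struct scsi_cmnd *sc)
{
	if (shost->tag_set.flags & BLK_MQ_F_BLOCKING) {
		/*
		 * ->queuecommand() is running from a context that may
		 * sleep (kblockd), so the initial PDU and any immediate
		 * data can be pushed onto the socket right here.
		 */
		return iscsi_xmit_cmd_direct(sc);
	}

	/*
	 * Old path: defer to the per-session iscsi_q_* workqueue, i.e.
	 * the extra kblockd -> worker hop described above.
	 */
	return iscsi_queue_cmd_to_workqueue(sc);
}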

I made this an RFC and cc'd Ming and Bart because I wanted to get some
extra feedback on the first patch:

scsi: Allow drivers to set BLK_MQ_F_BLOCKING

because you guys know that code. For example, the iscsi layer doesn't use
scsi_host_block, so we won't run into problems there. But I wanted to make
sure there are no other possible issues.
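
For reference, one plausible shape for that opt-in, sketched with a
hypothetical queuecommand_may_block host template flag (the actual
patch may name or wire this differently):

static struct scsi_host_template iscsi_sw_tcp_sht = {
	.name			= "iSCSI Initiator over TCP/IP",
	.queuecommand		= iscsi_queuecommand,
	/* ... */
	.queuecommand_may_block	= 1,	/* ask for BLK_MQ_F_BLOCKING */
};

which the SCSI core could then translate when setting up the tag set,
roughly:

	if (shost->hostt->queuecommand_may_block)
		shost->tag_set.flags |= BLK_MQ_F_BLOCKING;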

The following patches were made on top of this iscsi patchset:

https://lore.kernel.org/linux-scsi/20220308002747.122682-1-michael.christie@oracle.com/T/#t

The scsi patch applies on top of either Linus's or Martin's tree.

Comments

Christoph Hellwig March 15, 2022, 8:26 a.m. UTC | #1
The subject seems a bit misleading, I'd expect to see
BLK_MQ_F_BLOCKING in the subject.

Note that you'll also need the series from Ming to support dm-multipath
on top of devices that set BLK_MQ_F_BLOCKING.

Ming Lei March 16, 2022, 1:08 a.m. UTC | #2
On Tue, Mar 15, 2022 at 01:26:40AM -0700, Christoph Hellwig wrote:
> The subject seems a bit misleading, I'd expect to see
> BLK_MQ_F_BLOCKING in the subject.
> 
> Note that you'll also need the series from Ming to support dm-multipath
> on top of devices that set BLK_MQ_F_BLOCKING.

Indeed.

Mike, please feel free to fold the following patches into your next
post:

https://lore.kernel.org/linux-block/YcP4FMG9an5ReIiV@T590/#r


Thanks, 
Ming