Message ID: 20241111-refactor-blk-affinity-helpers-v2-0-f360ddad231a@kernel.org
Series: blk: refactor queue affinity helpers
On Tue, Nov 12, 2024 at 05:47:36AM +0100, Christoph Hellwig wrote:
> This seems to mix up a few different things:
>
>  1) adding a new bus operation
>  2) implementations of that operation for PCI and virtio
>  3) a block layer consumer of the operation
>
> all these really should be separate per-subsystem patches.
>
> You'll also need to Cc the driver model maintainers.

Ok, will do.

> > +void blk_mq_hctx_map_queues(struct blk_mq_queue_map *qmap,
> > +		struct device *dev, unsigned int offset,
> > +		get_queue_affinity_fn *get_irq_affinity)
> > +{
> > +	const struct cpumask *mask = NULL;
> > +	unsigned int queue, cpu;
> > +
> > +	for (queue = 0; queue < qmap->nr_queues; queue++) {
> > +		if (get_irq_affinity)
> > +			mask = get_irq_affinity(dev, queue + offset);
> > +		else if (dev->bus->irq_get_affinity)
> > +			mask = dev->bus->irq_get_affinity(dev, queue + offset);
> > +
> > +		if (!mask)
> > +			goto fallback;
> > +
> > +		for_each_cpu(cpu, mask)
> > +			qmap->mq_map[cpu] = qmap->queue_offset + queue;
> > +	}
>
> This does different things with a NULL argument vs not. Please split it
> into two separate helpers. The non-NULL case is only used in hisi_sas,
> so it might be worth just open coding it there as well.

I'd like to avoid open coding it, because that would make the follow-up
patches for isolated CPU support more complex. Having one central place
for all the mask operations keeps things a bit more streamlined. But
then I could open code that part as well.

> > +static const struct cpumask *pci_device_irq_get_affinity(struct device *dev,
> > +		unsigned int irq_vec)
> > +{
> > +	struct pci_dev *pdev = to_pci_dev(dev);
> > +
> > +	return pci_irq_get_affinity(pdev, irq_vec);
>
> Nit: this could be shortened to:
>
> 	return pci_irq_get_affinity(to_pci_dev(dev), irq_vec);

Ok.

Thanks,
Daniel
As Christoph suggested, I introduced a new callback in struct bus_type and
hooked up the callbacks directly in the virtio and PCI subsystems. This
looks pretty neat!

I've decided to provide only one version of blk_mq_hctx_map_queues, but
with an additional argument to pass in a driver-specific callback.
Currently, there is only one user of this additional callback. Maybe it
would be better to provide a variant of blk_mq_hctx_map_queues for the
additional argument. Not sure though. Thoughts?

Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
Changes in v2:
- added a new callback to struct bus_type and call the affinity helpers
  directly from there
- Link to v1: https://lore.kernel.org/r/20240913-refactor-blk-affinity-helpers-v1-0-8e058f77af12@suse.de

Changes in v1:
- renamed blk_mq_dev_map_queues to blk_mq_hctx_map_queues
- squashed 'virtio: add APIs for retrieving vq affinity' into
  'blk-mq: introduce blk_mq_hctx_map_queues'
- moved the hisi_sas changes into a new patch
- hisi_sas: use a define instead of a hard-coded value
- moved the helpers into their matching subsystems, removed the
  blk-mq-pci and blk-mq-virtio files
- fixed spelling/typos
- fixed long lines in the documentation (yep, new lines in brief
  descriptions are supported, tested it)
- based on the first part of https://lore.kernel.org/all/20240806-isolcpus-io-queues-v3-0-da0eecfeaf8b@suse.de

---
Daniel Wagner (6):
      blk-mq: introduce blk_mq_hctx_map_queues
      scsi: replace blk_mq_pci_map_queues with blk_mq_hctx_map_queues
      scsi: hisi_sas: replace blk_mq_pci_map_queues with blk_mq_hctx_map_queues
      nvme: replace blk_mq_pci_map_queues with blk_mq_hctx_map_queues
      virtio: blk/scsi: replace blk_mq_virtio_map_queues with blk_mq_hctx_map_queues
      blk-mq: remove unused queue mapping helpers

 block/Makefile                            |  2 --
 block/blk-mq-cpumap.c                     | 40 ++++++++++++++++++++++
 block/blk-mq-pci.c                        | 46 -------------------------
 block/blk-mq-virtio.c                     | 46 -------------------------
 drivers/block/virtio_blk.c                |  4 +--
 drivers/nvme/host/fc.c                    |  1 -
 drivers/nvme/host/pci.c                   |  3 +-
 drivers/pci/pci-driver.c                  | 16 +++++++++
 drivers/scsi/fnic/fnic_main.c             |  3 +-
 drivers/scsi/hisi_sas/hisi_sas.h          |  1 -
 drivers/scsi/hisi_sas/hisi_sas_v2_hw.c    | 22 ++++++------
 drivers/scsi/hisi_sas/hisi_sas_v3_hw.c    |  4 +--
 drivers/scsi/megaraid/megaraid_sas_base.c |  3 +-
 drivers/scsi/mpi3mr/mpi3mr.h              |  1 -
 drivers/scsi/mpi3mr/mpi3mr_os.c           |  3 +-
 drivers/scsi/mpt3sas/mpt3sas_scsih.c      |  4 +--
 drivers/scsi/pm8001/pm8001_init.c         |  2 +-
 drivers/scsi/pm8001/pm8001_sas.h          |  1 -
 drivers/scsi/qla2xxx/qla_nvme.c           |  4 +--
 drivers/scsi/qla2xxx/qla_os.c             |  4 +--
 drivers/scsi/smartpqi/smartpqi_init.c     |  7 ++--
 drivers/scsi/virtio_scsi.c                |  3 +-
 drivers/virtio/virtio.c                   | 12 +++++++
 include/linux/blk-mq-pci.h                | 11 -------
 include/linux/blk-mq-virtio.h             | 11 -------
 include/linux/blk-mq.h                    |  5 +++
 include/linux/device/bus.h                |  3 ++
 27 files changed, 107 insertions(+), 155 deletions(-)
---
base-commit: c9af98a7e8af266bae73e9d662b8341da1ec5824
change-id: 20240912-refactor-blk-affinity-helpers-7089b95b4b10

Best regards,