[v5,0/4] Add MMC software queue support

Message ID cover.1572326519.git.baolin.wang@linaro.org

Message

(Exiting) Baolin Wang Oct. 29, 2019, 5:43 a.m. UTC
Hi All,

Currently the MMC read/write stack always waits for the previous request
to be completed in mmc_blk_rw_wait() before sending a new request to the
hardware, or queues a work to complete the request. That brings context
switching overhead, especially at high I/O-per-second rates, which hurts
the I/O performance.

Thus this patch set introduces MMC software command queue support based
on the command queue engine's interfaces, and sets the queue depth to 32
to allow more requests to be prepared, merged and inserted into the I/O
scheduler, while only allowing 2 requests in flight. That is enough to
let the irq handler always trigger the next request without a context
switch, as well as avoiding a long latency.

Moreover, the MMC software queue interface can later be expanded to
support MMC packed requests or packed commands instead of adding new
interfaces, according to previous discussion.

Below are some comparison data gathered with the fio tool. The fio
command I used is as below, changing the '--rw' parameter and enabling
the direct I/O flag to measure the actual hardware transfer speed at a
4K block size.

./fio --filename=/dev/mmcblk0p30 --direct=1 --iodepth=20 --rw=read --bs=4K --size=1G --group_reporting --numjobs=20 --name=test_read

My eMMC card working at HS400 Enhanced strobe mode:
[    2.229856] mmc0: new HS400 Enhanced strobe MMC card at address 0001
[    2.237566] mmcblk0: mmc0:0001 HBG4a2 29.1 GiB 
[    2.242621] mmcblk0boot0: mmc0:0001 HBG4a2 partition 1 4.00 MiB
[    2.249110] mmcblk0boot1: mmc0:0001 HBG4a2 partition 2 4.00 MiB
[    2.255307] mmcblk0rpmb: mmc0:0001 HBG4a2 partition 3 4.00 MiB, chardev (248:0)

1. Without MMC software queue
I tested each case 5 times and report the average speed.

1) Sequential read:
Speed: 59.4MiB/s, 63.4MiB/s, 57.5MiB/s, 57.2MiB/s, 60.8MiB/s
Average speed: 59.66MiB/s

2) Random read:
Speed: 26.9MiB/s, 26.9MiB/s, 27.1MiB/s, 27.1MiB/s, 27.2MiB/s
Average speed: 27.04MiB/s

3) Sequential write:
Speed: 71.6MiB/s, 72.5MiB/s, 72.2MiB/s, 64.6MiB/s, 67.5MiB/s
Average speed: 69.68MiB/s

4) Random write:
Speed: 36.3MiB/s, 35.4MiB/s, 38.6MiB/s, 34MiB/s, 35.5MiB/s
Average speed: 35.96MiB/s

2. With MMC software queue
I tested each case 5 times and report the average speed.

1) Sequential read:
Speed: 59.2MiB/s, 60.4MiB/s, 63.6MiB/s, 60.3MiB/s, 59.9MiB/s
Average speed: 60.68MiB/s

2) Random read:
Speed: 31.3MiB/s, 31.4MiB/s, 31.5MiB/s, 31.3MiB/s, 31.3MiB/s
Average speed: 31.36MiB/s

3) Sequential write:
Speed: 71MiB/s, 71.8MiB/s, 72.3MiB/s, 72.2MiB/s, 71MiB/s
Average speed: 71.66MiB/s

4) Random write:
Speed: 68.9MiB/s, 68.7MiB/s, 68.8MiB/s, 68.6MiB/s, 68.8MiB/s
Average speed: 68.76MiB/s

From the above data, we can see the MMC software queue obviously
improves performance for random read and write, though there is no
obvious improvement for sequential read and write.

Any comments are welcome. Thanks a lot.

Changes from v4:
 - Add a separate patch introducing a variable to defer completing
 data requests for some host drivers when using the host software queue.

Changes from v3:
 - Use host software queue instead of sqhci.
 - Fix random config building issue.
 - Change queue depth to 32, but still only allow 2 requests in flight.
 - Update the testing data.

Changes from v2:
 - Remove reference to 'struct cqhci_host' and 'struct cqhci_slot',
 instead adding 'struct sqhci_host', which is only used by software queue.

Changes from v1:
 - Add request_done ops for sdhci_ops.
 - Replace virtual command queue with software queue for functions and
 variables.
 - Rename the software queue file and add sqhci.h header file.

Baolin Wang (4):
  mmc: Add MMC host software queue support
  mmc: host: sdhci: Add request_done ops for struct sdhci_ops
  mmc: host: sdhci-sprd: Add software queue support
  mmc: host: sdhci: Add a variable to defer to complete data requests
    if needed

 drivers/mmc/core/block.c      |   61 ++++++++
 drivers/mmc/core/mmc.c        |   13 +-
 drivers/mmc/core/queue.c      |   33 +++-
 drivers/mmc/host/Kconfig      |    8 +
 drivers/mmc/host/Makefile     |    1 +
 drivers/mmc/host/mmc_hsq.c    |  344 +++++++++++++++++++++++++++++++++++++++++
 drivers/mmc/host/mmc_hsq.h    |   30 ++++
 drivers/mmc/host/sdhci-sprd.c |   26 ++++
 drivers/mmc/host/sdhci.c      |   14 +-
 drivers/mmc/host/sdhci.h      |    3 +
 include/linux/mmc/host.h      |    3 +
 11 files changed, 523 insertions(+), 13 deletions(-)
 create mode 100644 drivers/mmc/host/mmc_hsq.c
 create mode 100644 drivers/mmc/host/mmc_hsq.h

-- 
1.7.9.5

Comments

(Exiting) Baolin Wang Nov. 6, 2019, 9:42 a.m. UTC | #1
Hi Ulf,

On Tue, 29 Oct 2019 at 13:44, Baolin Wang <baolin.wang@linaro.org> wrote:
> [...]

Do you have any comments on this patch set? Since the merge window is
approaching, is it possible to apply this patch set for v5.5? Thanks.

-- 
Baolin Wang
Best Regards