[v5,3/3] dm mpath: add CONFIG_DM_MULTIPATH_SG_IO - failover for SG_IO

Message ID 20210628151558.2289-4-mwilck@suse.com
State Superseded
Series scsi/dm: dm_blk_ioctl(): implement failover for SG_IO on dm-multipath

Commit Message

Martin Wilck June 28, 2021, 3:15 p.m. UTC
From: Martin Wilck <mwilck@suse.com>

In virtual deployments, SCSI passthrough over dm-multipath devices is a
common setup. The qemu "pr-helper" was specifically invented for it. I
believe that this is the most important real-world scenario for sending
SG_IO ioctls to device-mapper devices.

In this configuration, guests send SCSI IO to the hypervisor in the form of
SG_IO ioctls issued by qemu. But on the device-mapper level, these SCSI
ioctls aren't treated like regular IO. Until commit 2361ae595352 ("dm mpath:
switch paths in dm_blk_ioctl() code path"), no path switching was done at
all. Worse though, if an SG_IO call fails because of a path error,
dm-multipath doesn't retry the IO on another path; rather, the failure is
passed back to the guest, and paths are not marked as faulty.  This is in
stark contrast with regular block IO of guests on dm-multipath devices, and
certainly comes as a surprise to users who switch to SCSI passthrough in
qemu. In general, users of dm-multipath devices would probably expect failover
to work at least in a basic way.

This patch fixes this by taking a special code path for SG_IO on dm-multipath
targets if CONFIG_DM_MULTIPATH_SG_IO is set.  Rather than just choosing a
single path, sending the IO to it, and passing any failure back to the
caller, it retries the same IO on another path for certain error codes,
using blk_path_error() to determine if a retry would make sense for the given
error code. Moreover, it fails the path on which the path error occurred,
like regular block IO would.

If all paths in a multipath map are failed, the behavior depends on the
queue_if_no_path setting of the map. If it is off, multipath_prepare_ioctl()
fails with -EIO, and the search for a valid path is stopped. If it is on,
the caller will block until either queuing is disabled (in which case IO will
error out) or at least one path is reinstated (in which case IO will resume).
This is as close to regular READ/WRITE as it gets.
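
In condensed form, the per-path decision is (a simplified sketch of the
code added below; fail_path_for_bdev() is a hypothetical stand-in for the
priority-group walk in the actual patch):

/* Simplified sketch of the decision implemented below. */
static int sg_io_one_path(struct block_device *bdev, struct sg_io_hdr *hdr,
			  struct multipath *m, fmode_t mode)
{
	int rc = sg_io(bdev->bd_disk->queue, bdev->bd_disk, hdr, mode);
	blk_status_t sts;

	/* errors from invalid parameters: don't retry on another path */
	if (rc == -ENOIOCTLCMD || rc == -EFAULT || rc == -EINVAL ||
	    rc == -EPERM)
		return rc;

	sts = sg_io_to_blk_status(hdr);
	if (sts == BLK_STS_OK)
		return 0;
	if (!blk_path_error(sts))	/* target error: report to caller */
		return blk_status_to_errno(sts);

	fail_path_for_bdev(m, bdev);	/* path error: fail this path ... */
	return -EAGAIN;			/* ... and let the caller pick another */
}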

Signed-off-by: Martin Wilck <mwilck@suse.com>
---
 block/scsi_ioctl.c            |   5 +-
 drivers/md/Kconfig            |  11 ++++
 drivers/md/dm-core.h          |   5 ++
 drivers/md/dm-mpath.c         | 105 ++++++++++++++++++++++++++++++++++
 drivers/md/dm.c               |  26 ++++++++-
 include/linux/blkdev.h        |   2 +
 include/linux/device-mapper.h |   8 ++-
 7 files changed, 156 insertions(+), 6 deletions(-)

Comments

Christoph Hellwig July 1, 2021, 7:56 a.m. UTC | #1
On Mon, Jun 28, 2021 at 05:15:58PM +0200, mwilck@suse.com wrote:
> The qemu "pr-helper" was specifically invented for it. I
> believe that this is the most important real-world scenario for sending
> SG_IO ioctls to device-mapper devices.

pr-helper obviously does not use SG_IO on dm-multipath, as that simply
does not work.

More importantly - if you want to use persistent reservations use the
kernel ioctls for that.  These work on SCSI, NVMe and device mapper
without any extra magic.

Failing over SG_IO does not make sense.  It is an interface specifically
designed to leave all error handling to the userspace program using it,
and we should not change that for one specific error case.  If you
want the kernel to handle errors for you, use the proper interfaces.
In this case this is the persistent reservation ioctls.  If they miss
some features that qemu needs we should add those.
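
For context, the persistent reservation ioctls referred to here are
defined in <linux/pr.h>; a minimal userspace sketch (device path and
reservation key are illustrative):

/* Minimal sketch of the kernel PR ioctls from <linux/pr.h>; they work
 * on SCSI, NVMe and dm device nodes alike.  Path and key illustrative. */
#include <fcntl.h>
#include <linux/pr.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	struct pr_registration reg = { .old_key = 0, .new_key = 0xabcd };
	struct pr_reservation rsv = { .key = 0xabcd,
				      .type = PR_WRITE_EXCLUSIVE };
	int fd = open("/dev/dm-0", O_RDWR);	/* hypothetical dm node */

	if (fd < 0)
		return 1;
	if (ioctl(fd, IOC_PR_REGISTER, &reg))	/* register our key */
		perror("IOC_PR_REGISTER");
	else if (ioctl(fd, IOC_PR_RESERVE, &rsv))	/* take reservation */
		perror("IOC_PR_RESERVE");
	close(fd);
	return 0;
}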
Martin Wilck July 1, 2021, 10:35 a.m. UTC | #2
On Thu, 2021-07-01 at 09:56 +0200, Christoph Hellwig wrote:
> On Mon, Jun 28, 2021 at 05:15:58PM +0200, mwilck@suse.com wrote:
> > The qemu "pr-helper" was specifically invented for it. I
> > believe that this is the most important real-world scenario for
> > sending SG_IO ioctls to device-mapper devices.
>
> pr-helper obviously does not use SG_IO on dm-multipath, as that
> simply does not work.
>
> More importantly - if you want to use persistent reservations use the
> kernel ioctls for that.  These work on SCSI, NVMe and device mapper
> without any extra magic.

This set is not about persistent reservations. Sorry if my mentioning
pr-helper gave that impression. It was only meant to emphasize the fact
that qemu SCSI passthrough using SG_IO is an important use case.

> Failing over SG_IO does not make sense.  It is an interface
> specifically designed to leave all error handling to the userspace
> program using it, and we should not change that for one specific
> error case.  If you want the kernel to handle errors for you, use
> the proper interfaces.  In this case this is the persistent
> reservation ioctls.  If they miss some features that qemu needs we
> should add those.

I respectfully disagree. Users of dm-multipath devices expect that IO
succeeds as long as there's at least one healthy path. This is a
fundamental property of multipath devices. Whether IO is sent
"normally" or via SG_IO doesn't make a difference wrt this expectation.

The fact that qemu implements SCSI passthrough this way shows that this
is not just a naïve user expectation, but is shared by experienced
developers as well. AFAICS, we can't simply move the path error
detection and retry logic into user space qemu, because user space
doesn't have information about the status of the multipath map; not to
mention that doing this would be highly inefficient.

I agree that in principle, SG_IO error handling should be left to user
space. But in this specific case, it makes sense to handle just the
"path error vs. target error" distinction in the kernel. All else is of
course still handled by user space.
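
For reference, this is essentially the distinction that blk_path_error()
encodes in include/linux/blk_types.h (comments added):

static inline bool blk_path_error(blk_status_t error)
{
	switch (error) {
	case BLK_STS_NOTSUPP:
	case BLK_STS_NOSPC:
	case BLK_STS_TARGET:
	case BLK_STS_NEXUS:
	case BLK_STS_MEDIUM:
	case BLK_STS_PROTECTION:
		/* target/medium errors: another path won't help */
		return false;
	}

	/* anything else could be a path failure, so should be retried */
	return true;
}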

Regards,
Martin
Paolo Bonzini July 1, 2021, 11:06 a.m. UTC | #3
On Thu, Jul 1, 2021 at 9:56 AM Christoph Hellwig <hch@lst.de> wrote:
> On Mon, Jun 28, 2021 at 05:15:58PM +0200, mwilck@suse.com wrote:
> > The qemu "pr-helper" was specifically invented for it. I
> > believe that this is the most important real-world scenario for sending
> > SG_IO ioctls to device-mapper devices.
>
> pr-helper obviously does not use SG_IO on dm-multipath, as that simply
> does not work.

Right, for the specific case of persistent reservation ioctls, SG_IO is
sent manually to each path via libmpathpersist.

Failover for SG_IO is needed for general-purpose commands (ranging
from INQUIRY/READ CAPACITY to READ/WRITE). The reason to use
SG_IO instead of syscalls is mostly to preserve sense data; QEMU does
have code to convert errno to sense data, but it's fickle. If QEMU can use
sense data directly, it's easier to forward conditions that the guest can
resolve itself (for example unit attentions) and to let the guest operate
at a lower level (e.g. host-managed ZBC can be forwarded and they just
work).
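
For illustration, a minimal userspace sketch (device path hypothetical)
that issues a TEST UNIT READY via SG_IO and reads back the raw sense
(e.g. a unit attention) that qemu can forward to the guest:

/* Illustrative sketch: SG_IO preserves raw sense data that plain
 * read()/write() cannot deliver.  Device path is hypothetical. */
#include <fcntl.h>
#include <scsi/sg.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	unsigned char cdb[6] = { 0x00 };	/* TEST UNIT READY */
	unsigned char sense[32];
	struct sg_io_hdr hdr;
	int fd = open("/dev/mapper/mpatha", O_RDWR);

	if (fd < 0)
		return 1;
	memset(&hdr, 0, sizeof(hdr));
	hdr.interface_id = 'S';
	hdr.dxfer_direction = SG_DXFER_NONE;
	hdr.cmd_len = sizeof(cdb);
	hdr.cmdp = cdb;
	hdr.sbp = sense;
	hdr.mx_sb_len = sizeof(sense);
	hdr.timeout = 10000;			/* milliseconds */

	if (ioctl(fd, SG_IO, &hdr) == 0 && hdr.sb_len_wr >= 14)
		/* fixed-format sense: key/ASC/ASCQ, e.g. 6/29/00 =
		 * UNIT ATTENTION, power on/reset occurred */
		printf("sense %x/%02x/%02x\n",
		       sense[2] & 0x0f, sense[12], sense[13]);
	close(fd);
	return 0;
}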

Of course, all this works only for SCSI. As NVMe becomes more common,
and Linux exposes more functionality to userspace with a fabric-neutral
API, QEMU's SBC emulation can start using that functionality and provide
low-level passthrough functionality no matter if the host is using SCSI
or NVMe. Again, the main obstacle for this is sense data; for example,
the SCSI subsystem rightfully eats unit attentions and converts them to
uevents if you go through read/write requests instead of SG_IO.

> More importantly - if you want to use persistent reservations use the
> kernel ioctls for that.  These work on SCSI, NVMe and device mapper
> without any extra magic.

If they provide functionality equivalent to libmpathpersist without having
to do the DM_TABLE_STATUS ioctl, I will certainly consider switching! The
only possible issue could be the lost unit attentions.

Paolo
Christoph Hellwig July 1, 2021, 11:34 a.m. UTC | #4
On Thu, Jul 01, 2021 at 12:35:53PM +0200, Martin Wilck wrote:
> I respectfully disagree. Users of dm-multipath devices expect that IO
> succeeds as long as there's at least one healthy path. This is a
> fundamental property of multipath devices. Whether IO is sent
> "normally" or via SG_IO makes no difference to this expectation.

If you have those (pretty reasonable) expectations, don't use SG_IO.
That is what the regular read/write path is for.  SG_IO gives you
raw access to the SCSI logical unit, and you get to keep the pieces
if anything goes wrong.
Martin Wilck July 2, 2021, 2:21 p.m. UTC | #5
On Thu, 2021-07-01 at 13:34 +0200, Christoph Hellwig wrote:
> On Thu, Jul 01, 2021 at 12:35:53PM +0200, Martin Wilck wrote:
> > I respectfully disagree. Users of dm-multipath devices expect that
> > IO succeeds as long as there's at least one healthy path. This is a
> > fundamental property of multipath devices. Whether IO is sent
> > "normally" or via SG_IO makes no difference to this expectation.
>
> If you have those (pretty reasonable) expectations, don't use SG_IO.
> That is what the regular read/write path is for.  SG_IO gives you
> raw access to the SCSI logical unit, and you get to keep the pieces
> if anything goes wrong.

With this logic, if some paths are down, SG_IO commands on multipath
devices yield random results. On one path a command would fail, and on
another it would succeed. User space has no way to control or even see
what path is being used. That's a very fragile user space API, on the
fringe of being useless IMO.

If user space is interested in the error handling semantics you
describe, it can run SG_IO on individual SCSI devices any time. With
the change I am proposing, nothing is lost for user space. If user
space decides to do SG_IO on a multipath device, it has a different
expectation, which my patch set implements. IMO we should strive to
match the semantics for ioctls on natively multipathed NVMe namespaces.

Regards
Martin
Paolo Bonzini July 5, 2021, 1:02 p.m. UTC | #6
On 02/07/21 16:21, Martin Wilck wrote:
>> SG_IO gives you raw access to the SCSI logical unit, and you get to
>> keep the pieces if anything goes wrong.
>
> That's a very fragile user space API, on the fringe of being useless
> IMO.

Indeed.  If SG_IO is for raw access to an ITL nexus, it shouldn't be
supported at all by mpath devices.  If on the other hand SG_IO is for
raw access to a LUN, there is no reason for it to not support failover.

Paolo
Hannes Reinecke July 5, 2021, 1:11 p.m. UTC | #7
On 7/5/21 3:02 PM, Paolo Bonzini wrote:
> On 02/07/21 16:21, Martin Wilck wrote:
>>> SG_IO gives you raw access to the SCSI logical unit, and you get to
>>> keep the pieces if anything goes wrong.
>>
>> That's a very fragile user space API, on the fringe of being useless
>> IMO.
>
> Indeed.  If SG_IO is for raw access to an ITL nexus, it shouldn't be
> supported at all by mpath devices.  If on the other hand SG_IO is for
> raw access to a LUN, there is no reason for it to not support failover.

Or we look at IO_URING_OP_URING_CMD, to send raw CDBs via io_uring,
and delegate SG_IO to raw access to an ITL nexus.
Doesn't help with existing issues, but should get a clean way forward.

... I think I've even tendered a topic at LSF for this ...

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Martin Wilck July 5, 2021, 1:48 p.m. UTC | #8
On Mon, 2021-07-05 at 15:11 +0200, Hannes Reinecke wrote:
> On 7/5/21 3:02 PM, Paolo Bonzini wrote:
> > On 02/07/21 16:21, Martin Wilck wrote:
> > > > SG_IO gives you raw access to the SCSI logical unit, and you
> > > > get to keep the pieces if anything goes wrong.
> > >
> > > That's a very fragile user space API, on the fringe of being
> > > useless IMO.
> >
> > Indeed.  If SG_IO is for raw access to an ITL nexus, it shouldn't
> > be supported at all by mpath devices.  If on the other hand SG_IO
> > is for raw access to a LUN, there is no reason for it to not
> > support failover.
> >
> Or we look at IO_URING_OP_URING_CMD, to send raw CDBs via io_uring,
> and delegate SG_IO to raw access to an ITL nexus.
> Doesn't help with existing issues, but should get a clean way forward.

I still have to understand how this would help with the retrying
semantics. Wouldn't we get the exact same problem if a path error
occurs?

Regards,
Martin
Paolo Bonzini July 6, 2021, 10:13 a.m. UTC | #9
On 05/07/21 15:48, Martin Wilck wrote:
> On Mon, 2021-07-05 at 15:11 +0200, Hannes Reinecke wrote:
>> On 7/5/21 3:02 PM, Paolo Bonzini wrote:
>>> On 02/07/21 16:21, Martin Wilck wrote:
>>>>> SG_IO gives you raw access to the SCSI logical unit, and you get
>>>>> to keep the pieces if anything goes wrong.
>>>>
>>>> That's a very fragile user space API, on the fringe of being
>>>> useless IMO.
>>>
>>> Indeed.  If SG_IO is for raw access to an ITL nexus, it shouldn't
>>> be supported at all by mpath devices.  If on the other hand SG_IO is
>>> for raw access to a LUN, there is no reason for it to not support
>>> failover.
>>
>> Or we look at IO_URING_OP_URING_CMD, to send raw CDBs via io_uring,
>> and delegate SG_IO to raw access to an ITL nexus.
>> Doesn't help with existing issues, but should get a clean way
>> forward.
>
> I still have to understand how this would help with the retrying
> semantics. Wouldn't we get the exact same problem if a path error
> occurs?

Also, how would the URING_CMD API differ from SG_IO, modulo one being an
ioctl and the other io_uring-based?  In the end what you have to do is
1) send a CDB and optionally some data, and 2) get back a status and
optionally some data and sense.  Whether the intended use of the API is
for an ITL nexus or a LUN doesn't really matter.  So, what is the
rationale for "SG_IO is for a nexus" in the first place, if you think
that "raw CDBs for a LUN" is a useful operation?  You can still use
DM_TABLE_STATUS (iirc) to address a specific ITL nexus if desired.

Besides the virtualization case, think of udev rules that use SG_IO to 
retrieve the device identification page for an mpath device and create 
corresponding symlinks in /dev.  They would fail if the first path is 
not responding, which is not desirable.
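
For illustration, such a udev helper fetches the device identification
VPD page roughly like this (a sketch; the device path is hypothetical):

/* Illustrative sketch: fetch the device identification VPD page (0x83)
 * via SG_IO, as a udev helper would.  Device path is hypothetical. */
#include <fcntl.h>
#include <scsi/sg.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	unsigned char cdb[6] = { 0x12, 0x01, 0x83, 0, 0xff, 0 };
					/* INQUIRY, EVPD=1, page 0x83 */
	unsigned char buf[255], sense[32];
	struct sg_io_hdr hdr;
	int fd = open("/dev/dm-0", O_RDONLY | O_NONBLOCK);

	if (fd < 0)
		return 1;
	memset(&hdr, 0, sizeof(hdr));
	hdr.interface_id = 'S';
	hdr.dxfer_direction = SG_DXFER_FROM_DEV;
	hdr.cmd_len = sizeof(cdb);
	hdr.cmdp = cdb;
	hdr.dxferp = buf;
	hdr.dxfer_len = sizeof(buf);
	hdr.sbp = sense;
	hdr.mx_sb_len = sizeof(sense);
	hdr.timeout = 5000;

	if (ioctl(fd, SG_IO, &hdr) == 0 && hdr.status == 0)
		/* VPD header: byte 1 = page code, bytes 2-3 = length */
		printf("page 0x%02x, %d bytes of designators\n",
		       buf[1], (buf[2] << 8) | buf[3]);
	close(fd);
	return 0;
}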

Paolo

Patch

diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c
index e061398be90f..e6a62c1c5404 100644
--- a/block/scsi_ioctl.c
+++ b/block/scsi_ioctl.c
@@ -281,8 +281,8 @@  static int blk_complete_sghdr_rq(struct request *rq, struct sg_io_hdr *hdr,
 	return ret;
 }
 
-static int sg_io(struct request_queue *q, struct gendisk *bd_disk,
-		struct sg_io_hdr *hdr, fmode_t mode)
+int sg_io(struct request_queue *q, struct gendisk *bd_disk,
+	  struct sg_io_hdr *hdr, fmode_t mode)
 {
 	unsigned long start_time;
 	ssize_t ret = 0;
@@ -367,6 +367,7 @@  static int sg_io(struct request_queue *q, struct gendisk *bd_disk,
 	blk_put_request(rq);
 	return ret;
 }
+EXPORT_SYMBOL_GPL(sg_io);
 
 /**
  * sg_scsi_ioctl  --  handle deprecated SCSI_IOCTL_SEND_COMMAND ioctl
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index f2014385d48b..f28f29e3bd11 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -473,6 +473,17 @@  config DM_MULTIPATH_IOA
 
 	  If unsure, say N.
 
+config DM_MULTIPATH_SG_IO
+       bool "Retry SCSI generic I/O on multipath devices"
+       depends on DM_MULTIPATH && BLK_SCSI_REQUEST
+       help
+	 With this option, SCSI generic (SG) requests issued on multipath
+	 devices will behave similar to regular block I/O: upon failure,
+	 they are repeated on a different path, and the erroring device
+	 is marked as failed.
+
+	 If unsure, say N.
+
 config DM_DELAY
 	tristate "I/O delaying target"
 	depends on BLK_DEV_DM
diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
index 5953ff2bd260..8bd8a8e3916e 100644
--- a/drivers/md/dm-core.h
+++ b/drivers/md/dm-core.h
@@ -189,4 +189,9 @@  extern atomic_t dm_global_event_nr;
 extern wait_queue_head_t dm_global_eventq;
 void dm_issue_global_event(void);
 
+int __dm_prepare_ioctl(struct mapped_device *md, int *srcu_idx,
+		       struct block_device **bdev,
+		       struct dm_target **target);
+void dm_unprepare_ioctl(struct mapped_device *md, int srcu_idx);
+
 #endif
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index bced42f082b0..86518aec32b4 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -11,6 +11,7 @@ 
 #include "dm-bio-record.h"
 #include "dm-path-selector.h"
 #include "dm-uevent.h"
+#include "dm-core.h"
 
 #include <linux/blkdev.h>
 #include <linux/ctype.h>
@@ -26,6 +27,7 @@ 
 #include <scsi/scsi_dh.h>
 #include <linux/atomic.h>
 #include <linux/blk-mq.h>
+#include <scsi/sg.h>
 
 #define DM_MSG_PREFIX "multipath"
 #define DM_PG_INIT_DELAY_MSECS 2000
@@ -2129,6 +2131,106 @@  static int multipath_busy(struct dm_target *ti)
 	return busy;
 }
 
+#ifdef CONFIG_DM_MULTIPATH_SG_IO
+static int pgpath_sg_io_ioctl(struct block_device *bdev,
+			    struct sg_io_hdr *hdr, struct multipath *m,
+			    fmode_t mode)
+{
+	int rc;
+	blk_status_t sts;
+	struct priority_group *pg;
+	struct pgpath *pgpath;
+	char path_name[BDEVNAME_SIZE];
+
+	rc = sg_io(bdev->bd_disk->queue, bdev->bd_disk, hdr, mode);
+	DMDEBUG("SG_IO via %s: rc = %d D%02xH%02xM%02xS%02x",
+		bdevname(bdev, path_name), rc,
+		hdr->driver_status, hdr->host_status,
+		hdr->msg_status, hdr->status);
+
+	/*
+	 * Errors resulting from invalid parameters shouldn't be retried
+	 * on another path.
+	 */
+	switch (rc) {
+	case -ENOIOCTLCMD:
+	case -EFAULT:
+	case -EINVAL:
+	case -EPERM:
+		return rc;
+	default:
+		break;
+	}
+
+	sts = sg_io_to_blk_status(hdr);
+	if (sts == BLK_STS_OK)
+		return 0;
+	else if (!blk_path_error(sts))
+		return blk_status_to_errno(sts);
+
+	/* path error - fail the path */
+	list_for_each_entry(pg, &m->priority_groups, list) {
+		list_for_each_entry(pgpath, &pg->pgpaths, list) {
+			if (pgpath->path.dev->bdev == bdev)
+				fail_path(pgpath);
+		}
+	}
+
+	return -EAGAIN;
+}
+
+static int multipath_sg_io_ioctl(struct block_device *bdev, fmode_t mode,
+				 unsigned int cmd, unsigned long uarg)
+{
+	struct mapped_device *md = bdev->bd_disk->private_data;
+	void __user *arg = (void __user *)uarg;
+	struct sg_io_hdr hdr;
+	int rc;
+	bool suspended;
+	int retries = 5;
+
+	if (copy_from_user(&hdr, arg, sizeof(hdr)))
+		return -EFAULT;
+
+	if (hdr.interface_id != 'S')
+		return -EINVAL;
+
+	if (hdr.dxfer_len > (queue_max_hw_sectors(bdev->bd_disk->queue) << 9))
+		return -EIO;
+
+	do {
+		struct block_device *path_dev;
+		struct dm_target *tgt;
+		struct sg_io_hdr rhdr;
+		int srcu_idx;
+
+		suspended = false;
+		/* This will fail and break the loop if no valid paths found */
+		rc = __dm_prepare_ioctl(md, &srcu_idx, &path_dev, &tgt);
+		if (rc == -EAGAIN)
+			suspended = true;
+		else if (rc < 0)
+			DMERR("%s: failed to get path: %d", __func__, rc);
+		else {
+			/* Need to copy the sg_io_hdr, it may be modified */
+			rhdr = hdr;
+			rc = pgpath_sg_io_ioctl(path_dev, &rhdr,
+						tgt->private, mode);
+			if (rc == 0 && copy_to_user(arg, &rhdr, sizeof(rhdr)))
+				rc = -EFAULT;
+		}
+		dm_unprepare_ioctl(md, srcu_idx);
+		if (suspended) {
+			DMDEBUG("%s: suspended, retries = %d\n",
+				__func__, retries);
+			msleep(20);
+		}
+	} while (rc == -EAGAIN && (!suspended || retries-- > 0));
+
+	return rc;
+}
+#endif
+
 /*-----------------------------------------------------------------
  * Module setup
  *---------------------------------------------------------------*/
@@ -2153,6 +2255,9 @@  static struct target_type multipath_target = {
 	.prepare_ioctl = multipath_prepare_ioctl,
 	.iterate_devices = multipath_iterate_devices,
 	.busy = multipath_busy,
+#ifdef CONFIG_DM_MULTIPATH_SG_IO
+	.sg_io_ioctl = multipath_sg_io_ioctl,
+#endif
 };
 
 static int __init dm_multipath_init(void)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index ca2aedd8ee7d..af15d2c132ce 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -29,6 +29,7 @@ 
 #include <linux/part_stat.h>
 #include <linux/blk-crypto.h>
 #include <linux/keyslot-manager.h>
+#include <scsi/sg.h>
 
 #define DM_MSG_PREFIX "core"
 
@@ -522,8 +523,9 @@  static int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
 #define dm_blk_report_zones		NULL
 #endif /* CONFIG_BLK_DEV_ZONED */
 
-static int dm_prepare_ioctl(struct mapped_device *md, int *srcu_idx,
-			    struct block_device **bdev)
+int __dm_prepare_ioctl(struct mapped_device *md, int *srcu_idx,
+		       struct block_device **bdev,
+		       struct dm_target **target)
 {
 	struct dm_target *tgt;
 	struct dm_table *map;
@@ -553,13 +555,24 @@  static int dm_prepare_ioctl(struct mapped_device *md, int *srcu_idx,
 		goto retry;
 	}
 
+	if (r >= 0 && target)
+		*target = tgt;
+
 	return r;
 }
+EXPORT_SYMBOL_GPL(__dm_prepare_ioctl);
 
-static void dm_unprepare_ioctl(struct mapped_device *md, int srcu_idx)
+static int dm_prepare_ioctl(struct mapped_device *md, int *srcu_idx,
+			    struct block_device **bdev)
+{
+	return __dm_prepare_ioctl(md, srcu_idx, bdev, NULL);
+}
+
+void dm_unprepare_ioctl(struct mapped_device *md, int srcu_idx)
 {
 	dm_put_live_table(md, srcu_idx);
 }
+EXPORT_SYMBOL_GPL(dm_unprepare_ioctl);
 
 static int dm_blk_ioctl(struct block_device *bdev, fmode_t mode,
 			unsigned int cmd, unsigned long arg)
@@ -567,6 +580,13 @@  static int dm_blk_ioctl(struct block_device *bdev, fmode_t mode,
 	struct mapped_device *md = bdev->bd_disk->private_data;
 	int r, srcu_idx;
 
+#ifdef CONFIG_DM_MULTIPATH_SG_IO
+	if (cmd == SG_IO && md->immutable_target &&
+	    md->immutable_target->type->sg_io_ioctl)
+		return md->immutable_target->type->sg_io_ioctl(bdev, mode,
+							       cmd, arg);
+#endif
+
 	r = dm_prepare_ioctl(md, &srcu_idx, &bdev);
 	if (r < 0)
 		goto out;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 5da03edf125c..96eaf1edd7b0 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -923,6 +923,8 @@  extern int scsi_cmd_ioctl(struct request_queue *, struct gendisk *, fmode_t,
 			  unsigned int, void __user *);
 extern int sg_scsi_ioctl(struct request_queue *, struct gendisk *, fmode_t,
 			 struct scsi_ioctl_command __user *);
+extern int sg_io(struct request_queue *q, struct gendisk *gd,
+		 struct sg_io_hdr *hdr, fmode_t mode);
 extern int get_sg_io_hdr(struct sg_io_hdr *hdr, const void __user *argp);
 extern int put_sg_io_hdr(const struct sg_io_hdr *hdr, void __user *argp);
 
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index ff700fb6ce1d..c94ae6ee81b8 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -151,6 +151,10 @@  typedef size_t (*dm_dax_copy_iter_fn)(struct dm_target *ti, pgoff_t pgoff,
 		void *addr, size_t bytes, struct iov_iter *i);
 typedef int (*dm_dax_zero_page_range_fn)(struct dm_target *ti, pgoff_t pgoff,
 		size_t nr_pages);
+
+typedef int (*dm_sg_io_ioctl_fn)(struct block_device *bdev, fmode_t mode,
+				 unsigned int cmd, unsigned long uarg);
+
 #define PAGE_SECTORS (PAGE_SIZE / 512)
 
 void dm_error(const char *message);
@@ -204,7 +208,9 @@  struct target_type {
 	dm_dax_copy_iter_fn dax_copy_from_iter;
 	dm_dax_copy_iter_fn dax_copy_to_iter;
 	dm_dax_zero_page_range_fn dax_zero_page_range;
-
+#ifdef CONFIG_DM_MULTIPATH_SG_IO
+	dm_sg_io_ioctl_fn sg_io_ioctl;
+#endif
 	/* For internal device-mapper use. */
 	struct list_head list;
 };