[RFC] drivers: dma-mapping: Do not attempt to create a scatterlist for from_coherent buffers

Message ID 1493143059-2113-1-git-send-email-catalin.marinas@arm.com
State New

Commit Message

Catalin Marinas April 25, 2017, 5:57 p.m. UTC
Memory returned by dma_alloc_from_coherent() is not backed by struct
page and creating a scatterlist would use invalid page pointers. The
patch introduces the dma_vaddr_from_coherent() function and the
corresponding check in dma_get_sgtable_attrs().

Fixes: d2b7428eb0ca ("common: dma-mapping: introduce dma_get_sgtable() function")
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

---

In a recent discussion around the iommu DMA ops on arm64, Russell
pointed out that dma_get_sgtable is not safe since the coherent DMA
memory is not always backed by struct page. Russell has queued an
arm-specific patch checking for pfn_valid() but I thought I'd make a
more generic fix. This patch aims to bring the dma_get_sgtable() API in
line with the dma_alloc/mmap/free with respect to the from_coherent
memory.

 drivers/base/dma-coherent.c | 9 +++++++++
 include/linux/dma-mapping.h | 5 +++++
 2 files changed, 14 insertions(+)
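
As a rough driver-side illustration of the caller-visible effect (hypothetical code, not part of the patch): a buffer that was satisfied from a dma_declare_coherent_memory() region now makes dma_get_sgtable() fail with -ENXIO instead of returning a scatterlist built on an invalid struct page pointer.

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Hypothetical caller sketch: export a coherent buffer as a scatterlist.
 * With this change, allocations that came from a per-device coherent
 * region (dev->dma_mem) fail at dma_get_sgtable() with -ENXIO.
 */
static int example_export_buffer(struct device *dev, size_t size)
{
	struct sg_table sgt;
	dma_addr_t dma_handle;
	void *cpu_addr;
	int ret;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	ret = dma_get_sgtable(dev, &sgt, cpu_addr, dma_handle, size);
	if (ret) {
		/* -ENXIO here if cpu_addr came from the device's coherent pool */
		dma_free_coherent(dev, size, cpu_addr, dma_handle);
		return ret;
	}

	/* ... hand sgt.sgl to a dma_buf exporter, etc. ... */

	sg_free_table(&sgt);
	dma_free_coherent(dev, size, cpu_addr, dma_handle);
	return 0;
}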

Comments

Russell King (Oracle) April 25, 2017, 6:05 p.m. UTC | #1
On Tue, Apr 25, 2017 at 06:57:39PM +0100, Catalin Marinas wrote:
> Memory returned by dma_alloc_from_coherent() is not backed by struct
> page and creating a scatterlist would use invalid page pointers. The
> patch introduces the dma_vaddr_from_coherent() function and the
> corresponding check in dma_get_sgtable_attrs().
> 
> Fixes: d2b7428eb0ca ("common: dma-mapping: introduce dma_get_sgtable() function")
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> ---
> 
> In a recent discussion around the iommu DMA ops on arm64, Russell
> pointed out that dma_get_sgtable is not safe since the coherent DMA
> memory is not always backed by struct page. Russell has queued an
> arm-specific patch checking for pfn_valid() but I thought I'd make a
> more generic fix. This patch aims to bring the dma_get_sgtable() API in
> line with the dma_alloc/mmap/free with respect to the from_coherent
> memory.

Sorry, I don't think this is the correct approach.

You're assuming that 'vaddr' will always be a valid lowmem address.
That isn't always the case - some dma coherent allocations provide
remapped memory.

The reason for dma_get_sgtable() existing is to coerce the DMA
coherent memory into a scatterlist so that it can be passed through
the dma_buf API - that's where the problem lies.  The dma_buf API
needs fixing so that coherent memory can be sanely passed, and
dma_get_sgtable() needs to be put out of its misery.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
Catalin Marinas April 25, 2017, 6:35 p.m. UTC | #2
On Tue, Apr 25, 2017 at 07:05:26PM +0100, Russell King - ARM Linux wrote:
> On Tue, Apr 25, 2017 at 06:57:39PM +0100, Catalin Marinas wrote:
> > Memory returned by dma_alloc_from_coherent() is not backed by struct
> > page and creating a scatterlist would use invalid page pointers. The
> > patch introduces the dma_vaddr_from_coherent() function and the
> > corresponding check in dma_get_sgtable_attrs().
> > 
> > Fixes: d2b7428eb0ca ("common: dma-mapping: introduce dma_get_sgtable() function")
> > Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> > Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > ---
> > 
> > In a recent discussion around the iommu DMA ops on arm64, Russell
> > pointed out that dma_get_sgtable is not safe since the coherent DMA
> > memory is not always backed by struct page. Russell has queued an
> > arm-specific patch checking for pfn_valid() but I thought I'd make a
> > more generic fix. This patch aims to bring the dma_get_sgtable() API in
> > line with the dma_alloc/mmap/free with respect to the from_coherent
> > memory.
> 
> Sorry, I don't think this is the correct approach.
> 
> You're assuming that 'vaddr' will always be a valid lowmem address.
> That isn't always the case - some dma coherent allocations provide
> remapped memory.

I'm not assuming lowmem. The only thing this patch does is that if the
memory came from the dma_declare_coherent range, it returns -ENXIO. This
is in line with the dma_free_attrs() for example, which calls
dma_release_from_coherent(), but for sgtable this wouldn't make sense,
hence -ENXIO.
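
To illustrate the parallel (a paraphrased sketch, not a verbatim quote of include/linux/dma-mapping.h): the free path consults the coherent pool first and only falls through to the dma_map_ops implementation when the buffer did not come from there; the sgtable path gets the same early check, except that the only sane way to "handle" a from_coherent buffer there is to fail.

#include <linux/dma-mapping.h>

/*
 * Paraphrased sketch of the existing pattern on the free path (name is
 * illustrative): from_coherent memory is released here and never reaches
 * the dma_map_ops ->free() callback.
 */
static inline void example_dma_free_attrs(struct device *dev, size_t size,
					  void *cpu_addr, dma_addr_t dma_handle,
					  unsigned long attrs)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	BUG_ON(!ops);

	if (dma_release_from_coherent(dev, get_order(size), cpu_addr))
		return;

	if (ops->free)
		ops->free(dev, size, cpu_addr, dma_handle, attrs);
}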

I agree with you that vaddr can be remapped (which is the case on arm64
as well) but it's the responsibility of the arch-specific
dma_ops->get_sgtable() to handle it properly (vmalloc_to_page etc.). The
fall-back dma_common_get_sgtable() does assume lowmem but it doesn't
mean that the arch DMA ops need to use it.
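
For example (a hypothetical sketch with illustrative names, not an in-tree implementation), an arch ->get_sgtable() that copes with remapped coherent memory might resolve the backing page like this:

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/vmalloc.h>

/*
 * Hypothetical arch-side sketch: find the struct page behind the coherent
 * buffer even when the allocation was remapped, instead of assuming a
 * lowmem virtual address.
 */
static int example_arch_get_sgtable(struct device *dev, struct sg_table *sgt,
				    void *cpu_addr, dma_addr_t dma_addr,
				    size_t size, unsigned long attrs)
{
	struct page *page;
	int ret;

	if (is_vmalloc_addr(cpu_addr))
		page = vmalloc_to_page(cpu_addr);	/* remapped allocation */
	else
		page = virt_to_page(cpu_addr);		/* plain lowmem allocation */

	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
	if (!ret)
		sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
	return ret;
}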

> The reason for dma_get_sgtable() existing is to coerce the DMA
> coherent memory into a scatterlist so that it can be passed through
> the dma_buf API - that's where the problem lies.  The dma_buf API
> needs fixing so that coherent memory can be sanely passed, and
> dma_get_sgtable() needs to be put out of its misery.

I agree but didn't dare to dig into this ;).

-- 
Catalin

Patch

diff --git a/drivers/base/dma-coherent.c b/drivers/base/dma-coherent.c
index 640a7e63c453..1d2cdbefb850 100644
--- a/drivers/base/dma-coherent.c
+++ b/drivers/base/dma-coherent.c
@@ -279,6 +279,15 @@  int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL(dma_mmap_from_coherent);
 
+int dma_vaddr_from_coherent(struct device *dev, void *vaddr, size_t size)
+{
+	struct dma_coherent_mem *mem = dev ? dev->dma_mem : NULL;
+
+	return mem && vaddr >= mem->virt_base &&
+		vaddr + size <= (mem->virt_base + (mem->size << PAGE_SHIFT));
+}
+EXPORT_SYMBOL(dma_vaddr_from_coherent);
+
 /*
  * Support for reserved memory regions defined in device tree
  */
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 0977317c6835..4dc99c6db184 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -164,10 +164,12 @@  int dma_release_from_coherent(struct device *dev, int order, void *vaddr);
 
 int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
 			    void *cpu_addr, size_t size, int *ret);
+int dma_vaddr_from_coherent(struct device *dev, void *vaddr, size_t size);
 #else
 #define dma_alloc_from_coherent(dev, size, handle, ret) (0)
 #define dma_release_from_coherent(dev, order, vaddr) (0)
 #define dma_mmap_from_coherent(dev, vma, vaddr, order, ret) (0)
+#define dma_vaddr_from_coherent(dev, vaddr, size) (0)
 #endif /* CONFIG_HAVE_GENERIC_DMA_COHERENT */
 
 #ifdef CONFIG_HAS_DMA
@@ -461,6 +463,9 @@  dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt, void *cpu_addr,
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 	BUG_ON(!ops);
+	/* dma_alloc_from_coherent() memory is not backed by struct page */
+	if (dma_vaddr_from_coherent(dev, cpu_addr, size))
+		return -ENXIO;
 	if (ops->get_sgtable)
 		return ops->get_sgtable(dev, sgt, cpu_addr, dma_addr, size,
 					attrs);