Message ID | 1414422568-19103-6-git-send-email-stefano.stabellini@eu.citrix.com
---|---
State | New
On Mon, Oct 27, 2014 at 03:09:26PM +0000, Stefano Stabellini wrote:
> Merge xen/mm32.c into xen/mm.c.
> As a consequence the code gets compiled on arm64 too: introduce a few
> compat functions to actually be able to compile it.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Since I missed the commit introducing mm32.c (340720be32d4 xen/arm:
reimplement xen_dma_unmap_page & friends), I'll add a retrospective NAK ;).

The main reason is the asymmetry between dma map and unmap. With host
swiotlb somehow getting !dma_capable(dev), you even risk leaking dom0
swiotlb bounce buffers (on arm64).

> --- a/arch/arm/xen/mm.c
> +++ b/arch/arm/xen/mm.c
[...]
> +/* functions called by SWIOTLB */
> +
> +static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
> +        size_t size, enum dma_data_direction dir,
> +        void (*op)(const void *, size_t, int))
> +{
> +        unsigned long pfn;
> +        size_t left = size;
> +
> +        pfn = (handle >> PAGE_SHIFT) + offset / PAGE_SIZE;
> +        offset %= PAGE_SIZE;
> +
> +        do {
> +                size_t len = left;
> +                void *vaddr;
> +
> +                if (!pfn_valid(pfn))

Is this the pfn or the mfn? As you said in the previous email, there is
no mfn_to_pfn() conversion, so that's actually in another address space
where dom0 pfn_valid() would not make sense.

Do you use this as a check for foreign pages? If yes, is the mfn for
such pages guaranteed to be different from any valid dom0 pfn?

> +                {
> +                        /* TODO: cache flush */

What does this TODO mean here? Which cases are not covered yet?

> +                } else {
> +                        struct page *page = pfn_to_page(pfn);

If the pfn here is correct, can you not just have something like:

void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
                size_t size, enum dma_data_direction dir,
                struct dma_attrs *attrs)
{
        unsigned long pfn = handle >> PAGE_SHIFT;       /* could use some macros */

        if (!pfn_valid(pfn)) {
                /* FIXME */
                return;
        }

        __generic_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir, attrs);
}
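(Aside: the frame/offset arithmetic in the quoted dma_cache_maint() hunk is easier to follow with concrete numbers. A minimal sketch, assuming 4 KB pages, i.e. PAGE_SHIFT == 12 and PAGE_SIZE == 0x1000; the function name and the example values are made up for the illustration and are not part of the patch.)

static unsigned long dma_cache_maint_arith_example(void)
{
        dma_addr_t handle = 0x80200000;         /* example bus address */
        unsigned long offset = 0x1800;          /* example byte offset into the buffer */
        unsigned long pfn;

        /* 0x80200000 >> 12 = 0x80200; 0x1800 / 0x1000 = 1 -> start at frame 0x80201 */
        pfn = (handle >> PAGE_SHIFT) + offset / PAGE_SIZE;

        /* 0x1800 % 0x1000 = 0x800 bytes remain as the in-page offset for the first frame */
        offset %= PAGE_SIZE;

        return pfn;
}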
On Fri, 7 Nov 2014, Catalin Marinas wrote:
> On Mon, Oct 27, 2014 at 03:09:26PM +0000, Stefano Stabellini wrote:
> > Merge xen/mm32.c into xen/mm.c.
> > As a consequence the code gets compiled on arm64 too: introduce a few
> > compat functions to actually be able to compile it.
> >
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
> Since I missed the commit introducing mm32.c (340720be32d4 xen/arm:
> reimplement xen_dma_unmap_page & friends), I'll add a retrospective NAK ;).
>
> The main reason is the asymmetry between dma map and unmap. With host
> swiotlb somehow getting !dma_capable(dev), you even risk leaking dom0
> swiotlb bounce buffers (on arm64).
>
> > --- a/arch/arm/xen/mm.c
> > +++ b/arch/arm/xen/mm.c
> [...]
> > +/* functions called by SWIOTLB */
> > +
> > +static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
> > +        size_t size, enum dma_data_direction dir,
> > +        void (*op)(const void *, size_t, int))
> > +{
> > +        unsigned long pfn;
> > +        size_t left = size;
> > +
> > +        pfn = (handle >> PAGE_SHIFT) + offset / PAGE_SIZE;
> > +        offset %= PAGE_SIZE;
> > +
> > +        do {
> > +                size_t len = left;
> > +                void *vaddr;
> > +
> > +                if (!pfn_valid(pfn))
>
> Is this the pfn or the mfn? As you said in the previous email, there is
> no mfn_to_pfn() conversion, so that's actually in another address space
> where dom0 pfn_valid() would not make sense.

That is actually the mfn. The check works because dom0 is mapped 1:1, so
if the mfn is a valid pfn, then it means that it is a local page.

> Do you use this as a check for foreign pages? If yes, is the mfn for
> such pages guaranteed to be different from any valid dom0 pfn?

Yes and yes

> > +                {
> > +                        /* TODO: cache flush */
>
> What does this TODO mean here? Which cases are not covered yet?

We are going to fill in the cache flush implementation for foreign pages
later, in patch 8/8.

> > +                } else {
> > +                        struct page *page = pfn_to_page(pfn);
>
> If the pfn here is correct, can you not just have something like:
>
> void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
>                 size_t size, enum dma_data_direction dir,
>                 struct dma_attrs *attrs)
> {
>         unsigned long pfn = handle >> PAGE_SHIFT;       /* could use some macros */
>
>         if (!pfn_valid(pfn)) {
>                 /* FIXME */
>                 return;
>         }
>
>         __generic_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir, attrs);
> }

Yes, this is possible. Then in patch 8/8 I could remove the FIXME and add
a call to a function that issues the new GNTTABOP_cache_flush hypercall.
It would also remove the asymmetry you mentioned before because we could
do the same for map_page.
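(To make that plan concrete: a rough sketch of the shape such an unmap path could take, built from Catalin's snippet above. xen_flush_foreign_page() is a hypothetical stand-in for whatever helper ends up wrapping the GNTTABOP_cache_flush hypercall in patch 8/8, not the actual interface, and the coherent/attrs checks from the current implementation are left out.)

/* hypothetical helper, assumed to issue GNTTABOP_cache_flush for foreign pages */
void xen_flush_foreign_page(dma_addr_t handle, size_t size,
                enum dma_data_direction dir);

void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
                size_t size, enum dma_data_direction dir,
                struct dma_attrs *attrs)
{
        unsigned long pfn = handle >> PAGE_SHIFT;       /* really an mfn, as discussed above */

        if (!pfn_valid(pfn)) {
                /* foreign page: ask Xen to do the cache maintenance */
                xen_flush_foreign_page(handle, size, dir);
                return;
        }

        /* local dom0 page: defer to the native dma_map_ops */
        __generic_dma_ops(hwdev)->unmap_page(hwdev, handle, size, dir, attrs);
}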
On Fri, Nov 07, 2014 at 03:28:38PM +0000, Stefano Stabellini wrote:
> On Fri, 7 Nov 2014, Catalin Marinas wrote:
> > On Mon, Oct 27, 2014 at 03:09:26PM +0000, Stefano Stabellini wrote:
> > > Merge xen/mm32.c into xen/mm.c.
> > > As a consequence the code gets compiled on arm64 too: introduce a few
> > > compat functions to actually be able to compile it.
> > >
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> >
> > Since I missed the commit introducing mm32.c (340720be32d4 xen/arm:
> > reimplement xen_dma_unmap_page & friends), I'll add a retrospective NAK ;).
> >
> > The main reason is the asymmetry between dma map and unmap. With host
> > swiotlb somehow getting !dma_capable(dev), you even risk leaking dom0
> > swiotlb bounce buffers (on arm64).
> >
> > > --- a/arch/arm/xen/mm.c
> > > +++ b/arch/arm/xen/mm.c
> > [...]
> > > +/* functions called by SWIOTLB */
> > > +
> > > +static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
> > > +        size_t size, enum dma_data_direction dir,
> > > +        void (*op)(const void *, size_t, int))
> > > +{
> > > +        unsigned long pfn;
> > > +        size_t left = size;
> > > +
> > > +        pfn = (handle >> PAGE_SHIFT) + offset / PAGE_SIZE;
> > > +        offset %= PAGE_SIZE;
> > > +
> > > +        do {
> > > +                size_t len = left;
> > > +                void *vaddr;
> > > +
> > > +                if (!pfn_valid(pfn))
> >
> > Is this the pfn or the mfn? As you said in the previous email, there is
> > no mfn_to_pfn() conversion, so that's actually in another address space
> > where dom0 pfn_valid() would not make sense.
>
> That is actually the mfn. The check works because dom0 is mapped 1:1, so
> if the mfn is a valid pfn, then it means that it is a local page.

So the Xen DMA ops would never be called on anything other than dom0?

If that's correct, the pfn_valid() check would work. But add some big
comments as it's not clear at all to someone not familiar with Xen.
On Fri, 7 Nov 2014, Catalin Marinas wrote:
> On Fri, Nov 07, 2014 at 03:28:38PM +0000, Stefano Stabellini wrote:
> > On Fri, 7 Nov 2014, Catalin Marinas wrote:
> > > On Mon, Oct 27, 2014 at 03:09:26PM +0000, Stefano Stabellini wrote:
> > > > Merge xen/mm32.c into xen/mm.c.
> > > > As a consequence the code gets compiled on arm64 too: introduce a few
> > > > compat functions to actually be able to compile it.
> > > >
> > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > >
> > > Since I missed the commit introducing mm32.c (340720be32d4 xen/arm:
> > > reimplement xen_dma_unmap_page & friends), I'll add a retrospective NAK ;).
> > >
> > > The main reason is the asymmetry between dma map and unmap. With host
> > > swiotlb somehow getting !dma_capable(dev), you even risk leaking dom0
> > > swiotlb bounce buffers (on arm64).
> > >
> > > > --- a/arch/arm/xen/mm.c
> > > > +++ b/arch/arm/xen/mm.c
> > > [...]
> > > > +/* functions called by SWIOTLB */
> > > > +
> > > > +static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
> > > > +        size_t size, enum dma_data_direction dir,
> > > > +        void (*op)(const void *, size_t, int))
> > > > +{
> > > > +        unsigned long pfn;
> > > > +        size_t left = size;
> > > > +
> > > > +        pfn = (handle >> PAGE_SHIFT) + offset / PAGE_SIZE;
> > > > +        offset %= PAGE_SIZE;
> > > > +
> > > > +        do {
> > > > +                size_t len = left;
> > > > +                void *vaddr;
> > > > +
> > > > +                if (!pfn_valid(pfn))
> > >
> > > Is this the pfn or the mfn? As you said in the previous email, there is
> > > no mfn_to_pfn() conversion, so that's actually in another address space
> > > where dom0 pfn_valid() would not make sense.
> >
> > That is actually the mfn. The check works because dom0 is mapped 1:1, so
> > if the mfn is a valid pfn, then it means that it is a local page.
>
> So the Xen DMA ops would never be called on anything other than dom0?

That's right.

> If that's correct, the pfn_valid() check would work. But add some big
> comments as it's not clear at all to someone not familiar with Xen.

Yeah...
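(For reference, the kind of comment being asked for might read roughly as follows; the helper wrapper and the exact wording are illustrative only, the underlying facts are the ones stated in the exchange above.)

/*
 * These DMA ops are only ever used by dom0, which Xen on ARM maps 1:1,
 * so for a local page the mfn carried in the bus address equals the pfn
 * and pfn_valid() succeeds.  Foreign pages granted by other domains sit
 * at mfns outside dom0's own memory map, so pfn_valid() fails for them
 * and they need the hypercall-based cache maintenance instead.
 */
static inline bool xen_handle_is_local_page(dma_addr_t handle)     /* hypothetical helper */
{
        return pfn_valid((unsigned long)(handle >> PAGE_SHIFT));   /* the "pfn" is really an mfn */
}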
diff --git a/arch/arm/xen/Makefile b/arch/arm/xen/Makefile
index 1f85bfe..1296952 100644
--- a/arch/arm/xen/Makefile
+++ b/arch/arm/xen/Makefile
@@ -1 +1 @@
-obj-y := enlighten.o hypercall.o grant-table.o p2m.o mm.o mm32.o
+obj-y := enlighten.o hypercall.o grant-table.o p2m.o mm.o
diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index b0e77de..ff413a8 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -1,6 +1,10 @@
+#include <linux/cpu.h>
+#include <linux/dma-mapping.h>
 #include <linux/bootmem.h>
 #include <linux/gfp.h>
+#include <linux/highmem.h>
 #include <linux/export.h>
+#include <linux/of_address.h>
 #include <linux/slab.h>
 #include <linux/types.h>
 #include <linux/dma-mapping.h>
@@ -16,6 +20,125 @@
 #include <asm/xen/hypercall.h>
 #include <asm/xen/interface.h>
 
+
+#ifdef CONFIG_ARM64
+static inline void dmac_map_area(const void *start, size_t size, int dir)
+{
+        return __dma_map_area(start, size, dir);
+}
+
+static inline void dmac_unmap_area(const void *start, size_t size, int dir)
+{
+        return __dma_unmap_area(start, size, dir);
+}
+
+static inline bool cache_is_vipt_nonaliasing(void)
+{
+        return true;
+}
+
+static inline void *kmap_high_get(struct page *page)
+{
+        return NULL;
+}
+
+static inline void kunmap_high(struct page *page) {}
+#endif
+
+/* functions called by SWIOTLB */
+
+static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
+        size_t size, enum dma_data_direction dir,
+        void (*op)(const void *, size_t, int))
+{
+        unsigned long pfn;
+        size_t left = size;
+
+        pfn = (handle >> PAGE_SHIFT) + offset / PAGE_SIZE;
+        offset %= PAGE_SIZE;
+
+        do {
+                size_t len = left;
+                void *vaddr;
+
+                if (!pfn_valid(pfn))
+                {
+                        /* TODO: cache flush */
+                } else {
+                        struct page *page = pfn_to_page(pfn);
+
+                        if (PageHighMem(page)) {
+                                if (len + offset > PAGE_SIZE)
+                                        len = PAGE_SIZE - offset;
+
+                                if (cache_is_vipt_nonaliasing()) {
+                                        vaddr = kmap_atomic(page);
+                                        op(vaddr + offset, len, dir);
+                                        kunmap_atomic(vaddr);
+                                } else {
+                                        vaddr = kmap_high_get(page);
+                                        if (vaddr) {
+                                                op(vaddr + offset, len, dir);
+                                                kunmap_high(page);
+                                        }
+                                }
+                        } else {
+                                vaddr = page_address(page) + offset;
+                                op(vaddr, len, dir);
+                        }
+                }
+
+                offset = 0;
+                pfn++;
+                left -= len;
+        } while (left);
+}
+
+static void __xen_dma_page_dev_to_cpu(struct device *hwdev, dma_addr_t handle,
+                size_t size, enum dma_data_direction dir)
+{
+        /* Cannot use __dma_page_dev_to_cpu because we don't have a
+         * struct page for handle */
+
+        dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, dmac_unmap_area);
+}
+
+static void __xen_dma_page_cpu_to_dev(struct device *hwdev, dma_addr_t handle,
+                size_t size, enum dma_data_direction dir)
+{
+
+        dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, dmac_map_area);
+}
+
+void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
+                size_t size, enum dma_data_direction dir,
+                struct dma_attrs *attrs)
+
+{
+        if (is_device_dma_coherent(hwdev))
+                return;
+        if (dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+                return;
+
+        __xen_dma_page_dev_to_cpu(hwdev, handle, size, dir);
+}
+
+void xen_dma_sync_single_for_cpu(struct device *hwdev,
+                dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+        if (is_device_dma_coherent(hwdev))
+                return;
+        __xen_dma_page_dev_to_cpu(hwdev, handle, size, dir);
+}
+
+void xen_dma_sync_single_for_device(struct device *hwdev,
+                dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+        if (is_device_dma_coherent(hwdev))
+                return;
+        __xen_dma_page_cpu_to_dev(hwdev, handle, size, dir);
+}
+
 int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
                                  unsigned int address_bits,
                                  dma_addr_t *dma_handle)
diff --git a/arch/arm/xen/mm32.c b/arch/arm/xen/mm32.c
deleted file mode 100644
index d8ed359..0000000
--- a/arch/arm/xen/mm32.c
+++ /dev/null
@@ -1,110 +0,0 @@
-#include <linux/cpu.h>
-#include <linux/dma-mapping.h>
-#include <linux/gfp.h>
-#include <linux/highmem.h>
-
-#include <xen/features.h>
-
-
-/* functions called by SWIOTLB */
-
-static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
-        size_t size, enum dma_data_direction dir,
-        void (*op)(const void *, size_t, int))
-{
-        unsigned long pfn;
-        size_t left = size;
-
-        pfn = (handle >> PAGE_SHIFT) + offset / PAGE_SIZE;
-        offset %= PAGE_SIZE;
-
-        do {
-                size_t len = left;
-                void *vaddr;
-
-                if (!pfn_valid(pfn))
-                {
-                        /* TODO: cache flush */
-                } else {
-                        struct page *page = pfn_to_page(pfn);
-
-                        if (PageHighMem(page)) {
-                                if (len + offset > PAGE_SIZE)
-                                        len = PAGE_SIZE - offset;
-
-                                if (cache_is_vipt_nonaliasing()) {
-                                        vaddr = kmap_atomic(page);
-                                        op(vaddr + offset, len, dir);
-                                        kunmap_atomic(vaddr);
-                                } else {
-                                        vaddr = kmap_high_get(page);
-                                        if (vaddr) {
-                                                op(vaddr + offset, len, dir);
-                                                kunmap_high(page);
-                                        }
-                                }
-                        } else {
-                                vaddr = page_address(page) + offset;
-                                op(vaddr, len, dir);
-                        }
-                }
-
-                offset = 0;
-                pfn++;
-                left -= len;
-        } while (left);
-}
-
-static void __xen_dma_page_dev_to_cpu(struct device *hwdev, dma_addr_t handle,
-                size_t size, enum dma_data_direction dir)
-{
-        /* Cannot use __dma_page_dev_to_cpu because we don't have a
-         * struct page for handle */
-
-        dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, dmac_unmap_area);
-}
-
-static void __xen_dma_page_cpu_to_dev(struct device *hwdev, dma_addr_t handle,
-                size_t size, enum dma_data_direction dir)
-{
-
-        dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, dmac_map_area);
-}
-
-void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
-                size_t size, enum dma_data_direction dir,
-                struct dma_attrs *attrs)
-
-{
-        if (is_device_dma_coherent(hwdev))
-                return;
-        if (dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
-                return;
-
-        __xen_dma_page_dev_to_cpu(hwdev, handle, size, dir);
-}
-
-void xen_dma_sync_single_for_cpu(struct device *hwdev,
-                dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
-        if (is_device_dma_coherent(hwdev))
-                return;
-        __xen_dma_page_dev_to_cpu(hwdev, handle, size, dir);
-}
-
-void xen_dma_sync_single_for_device(struct device *hwdev,
-                dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
-        if (is_device_dma_coherent(hwdev))
-                return;
-        __xen_dma_page_cpu_to_dev(hwdev, handle, size, dir);
-}
-
-int __init xen_mm32_init(void)
-{
-        if (!xen_initial_domain())
-                return 0;
-
-        return 0;
-}
-arch_initcall(xen_mm32_init);
diff --git a/arch/arm64/include/asm/xen/page-coherent.h b/arch/arm64/include/asm/xen/page-coherent.h
index dde3fc9..2052102 100644
--- a/arch/arm64/include/asm/xen/page-coherent.h
+++ b/arch/arm64/include/asm/xen/page-coherent.h
@@ -1,43 +1 @@
-#ifndef _ASM_ARM64_XEN_PAGE_COHERENT_H
-#define _ASM_ARM64_XEN_PAGE_COHERENT_H
-
-#include <asm/page.h>
-#include <linux/dma-attrs.h>
-#include <linux/dma-mapping.h>
-
-static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
-                dma_addr_t *dma_handle, gfp_t flags,
-                struct dma_attrs *attrs)
-{
-        return __generic_dma_ops(hwdev)->alloc(hwdev, size, dma_handle, flags, attrs);
-}
-
-static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
-                void *cpu_addr, dma_addr_t dma_handle,
-                struct dma_attrs *attrs)
-{
-        __generic_dma_ops(hwdev)->free(hwdev, size, cpu_addr, dma_handle, attrs);
-}
-
-static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
-                unsigned long offset, size_t size, enum dma_data_direction dir,
-                struct dma_attrs *attrs)
-{
-}
-
-static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
-                size_t size, enum dma_data_direction dir,
-                struct dma_attrs *attrs)
-{
-}
-
-static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
-                dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
-}
-
-static inline void xen_dma_sync_single_for_device(struct device *hwdev,
-                dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
-}
-#endif /* _ASM_ARM64_XEN_PAGE_COHERENT_H */
+#include <../../arm/include/asm/xen/page-coherent.h>
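(For context on how the helpers added above get exercised: dom0 drivers keep using the generic DMA API, and with swiotlb-xen installed as the dma_map_ops backend those calls eventually funnel into xen_dma_sync_single_for_cpu() and xen_dma_sync_single_for_device(). A simplified, hypothetical sketch of that end state; example_rx_complete() is not a real function and the bounce-buffer handling inside swiotlb-xen is omitted.)

#include <linux/dma-mapping.h>

static void example_rx_complete(struct device *dev, dma_addr_t handle, size_t len)
{
        /* device finished writing: make the buffer visible to the CPU;
         * via swiotlb-xen this ends up in xen_dma_sync_single_for_cpu() */
        dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);

        /* ... driver inspects the received data here ... */

        /* hand the buffer back to the device for the next transfer;
         * this ends up in xen_dma_sync_single_for_device() */
        dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);
}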