
[RFC,v2,2/7] dma-mapping: replace set_arch_dma_coherent_ops with arch_setup_dma_ops

Message ID 1409680587-29818-3-git-send-email-will.deacon@arm.com
State New

Commit Message

Will Deacon Sept. 2, 2014, 5:56 p.m. UTC
set_arch_dma_coherent_ops is called from of_dma_configure in order to
swizzle the architectural dma-mapping functions over to a cache-coherent
implementation. This is currently implemented only for ARM.

In anticipation of re-using this mechanism for IOMMU-backed dma-mapping
ops too, this patch replaces the function with a broader
arch_setup_dma_ops callback which is also responsible for setting the
DMA mask and offset as well as selecting the correct mapping functions.

A further advantage of this split is that it nicely isolates the
of-specific code from the dma-mapping code, allowing potential reuse by
other buses (e.g. PCI) in the future.
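
As an illustration, a non-OF bus could later hook into the same
callback along these lines (sketch only, not part of this patch; the
pci_dma_configure() helper and the values passed are hypothetical):

#include <linux/dma-mapping.h>

static void pci_dma_configure(struct device *dev)
{
	/* Illustrative values: 32-bit mask, no offset, cache-coherent */
	arch_setup_dma_ops(dev, DMA_BIT_MASK(32), 0, true);
}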

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/dma-mapping.h | 13 ++++++++----
 drivers/of/platform.c              | 42 ++++++++++----------------------------
 include/linux/dma-mapping.h        |  8 +++-----
 3 files changed, 23 insertions(+), 40 deletions(-)

Comments

Grygorii Strashko Sept. 5, 2014, 3:37 p.m. UTC | #1
Hi Will,

On 09/02/2014 08:56 PM, Will Deacon wrote:
> set_arch_dma_coherent_ops is called from of_dma_configure in order to
> swizzle the architectural dma-mapping functions over to a cache-coherent
> implementation. This is currently implemented only for ARM.
> 
> In anticipation of re-using this mechanism for IOMMU-backed dma-mapping
> ops too, this patch replaces the function with a broader
> arch_setup_dma_ops callback which is also responsible for setting the
> DMA mask and offset as well as selecting the correct mapping functions.
> 
> A further advantage of this split is that it nicely isolates the
> of-specific code from the dma-mapping code, allowing potential reuse by
> other buses (e.g. PCI) in the future.

I think this patch can introduce a regression if it is used as is :(

When this code was initially created there was a lot of discussion
about it, and it was finally decided to configure all the common (for
all arches) DMA parameters for devices in common code, while handling
strictly arch-specific things through arch-specific APIs/callbacks.

The following parameters are common now:
dev->coherent_dma_mask	= mask;
dev->dma_mask		= &dev->coherent_dma_mask;
dev->dma_pfn_offset	= offset;

and they always need to be set, otherwise other DT-based arches will
be affected.
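
Roughly, the common code does something like this today (paraphrased
from mainline of_dma_configure(); the helper name below is only for
illustration, not an existing function):

#include <linux/device.h>
#include <linux/dma-mapping.h>

/* Paraphrased sketch of the common-code defaults (hypothetical helper) */
static void of_dma_set_common_defaults(struct device *dev, u64 mask,
				       unsigned long offset)
{
	dev->coherent_dma_mask = mask;	/* defaults to DMA_BIT_MASK(32) */
	if (!dev->dma_mask)
		dev->dma_mask = &dev->coherent_dma_mask;
	dev->dma_pfn_offset = offset;	/* from dma-ranges, else 0 */
}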

Links:
- [PATCH v2 0/7] ARM: dma: Support dma-ranges and dma-coherent
http://www.spinics.net/lists/arm-kernel/msg311678.html

- [PATCH 0/7] of: setup dma parameters using dma-ranges and dma-coherent
https://lkml.org/lkml/2014/3/6/186

- [PATCH v2 0/7] of: setup dma parameters using dma-ranges and dma-coherent
https://lkml.org/lkml/2014/4/19/80

- [PATCH v3 0/7] of: setup dma parameters using dma-ranges and dma-coherent
http://thread.gmane.org/gmane.linux.kernel/1690224/focus=319246

> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>   arch/arm/include/asm/dma-mapping.h | 13 ++++++++----
>   drivers/of/platform.c              | 42 ++++++++++----------------------------
>   include/linux/dma-mapping.h        |  8 +++-----
>   3 files changed, 23 insertions(+), 40 deletions(-)
> 

[...]

> +	arch_setup_dma_ops(dev, DMA_BIT_MASK(32), offset, coherent);
>   }
>   
>   /**
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index 931b70986272..0f7f7b68b0db 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -129,11 +129,9 @@ static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
>   
>   extern u64 dma_get_required_mask(struct device *dev);
>   
> -#ifndef set_arch_dma_coherent_ops
> -static inline int set_arch_dma_coherent_ops(struct device *dev)
> -{
> -	return 0;
> -}
> +#ifndef arch_setup_dma_ops
> +static inline void arch_setup_dma_ops(struct device *dev, u64 mask,
> +				      unsigned long offset, bool coherent) { }
>   #endif
>   
>   static inline unsigned int dma_get_max_seg_size(struct device *dev)
> 

Regards,
-grygorii
Will Deacon Sept. 8, 2014, 10:31 a.m. UTC | #2
On Fri, Sep 05, 2014 at 04:37:41PM +0100, Grygorii Strashko wrote:
> Hi Will,

Hi Grygorii,

> On 09/02/2014 08:56 PM, Will Deacon wrote:
> > set_arch_dma_coherent_ops is called from of_dma_configure in order to
> > swizzle the architectural dma-mapping functions over to a cache-coherent
> > implementation. This is currently implemented only for ARM.
> > 
> > In anticipation of re-using this mechanism for IOMMU-backed dma-mapping
> > ops too, this patch replaces the function with a broader
> > arch_setup_dma_ops callback which is also responsible for setting the
> > DMA mask and offset as well as selecting the correct mapping functions.
> > 
> > A further advantage of this split is that it nicely isolates the
> > of-specific code from the dma-mapping code, allowing potential reuse by
> > other buses (e.g. PCI) in the future.
> 
> I think this patch can introduce a regression if it is used as is :(
> 
> When this code was initially created there was a lot of discussion
> about it, and it was finally decided to configure all the common (for
> all arches) DMA parameters for devices in common code, while handling
> strictly arch-specific things through arch-specific APIs/callbacks.
> 
> The following parameters are common now:
> dev->coherent_dma_mask	= mask;
> dev->dma_mask		= &dev->coherent_dma_mask;
> dev->dma_pfn_offset	= offset;
> 
> and they always need to be set, otherwise other DT-based arches will
> be affected.

Ok, in which case we can either configure the struct device in
of_dma_configure before calling arch_setup_dma_ops, or we can do the work
in a generic version of arch_setup_dma_ops (that I haven't added yet).

The former is how it's done in mainline atm, so I'll stick with that.
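
i.e. something along these lines in of_dma_configure() (just a sketch,
assuming offset and coherent have already been worked out from the
dma-ranges and dma-coherent properties):

	/* Common defaults set here, before the arch hook runs */
	dev->coherent_dma_mask = DMA_BIT_MASK(32);
	if (!dev->dma_mask)
		dev->dma_mask = &dev->coherent_dma_mask;
	dev->dma_pfn_offset = offset;

	/* Arch-specific work (dma_ops selection etc.) */
	arch_setup_dma_ops(dev, DMA_BIT_MASK(32), offset, coherent);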

Thanks,

Will
Grygorii Strashko Sept. 9, 2014, 2:15 p.m. UTC | #3
Hi Will,

On 09/08/2014 01:31 PM, Will Deacon wrote:
> On Fri, Sep 05, 2014 at 04:37:41PM +0100, Grygorii Strashko wrote:
> 
>> On 09/02/2014 08:56 PM, Will Deacon wrote:
>>> set_arch_dma_coherent_ops is called from of_dma_configure in order to
>>> swizzle the architectural dma-mapping functions over to a cache-coherent
>>> implementation. This is currently implemented only for ARM.
>>>
>>> In anticipation of re-using this mechanism for IOMMU-backed dma-mapping
>>> ops too, this patch replaces the function with a broader
>>> arch_setup_dma_ops callback which is also responsible for setting the
>>> DMA mask and offset as well as selecting the correct mapping functions.
>>>
>>> A further advantage of this split is that it nicely isolates the
>>> of-specific code from the dma-mapping code, allowing potential reuse by
>>> other buses (e.g. PCI) in the future.
>>
>> I think this patch can introduce a regression if it is used as is :(
>>
>> When this code was initially created there was a lot of discussion
>> about it, and it was finally decided to configure all the common (for
>> all arches) DMA parameters for devices in common code, while handling
>> strictly arch-specific things through arch-specific APIs/callbacks.
>>
>> The following parameters are common now:
>> dev->coherent_dma_mask	= mask;
>> dev->dma_mask		= &dev->coherent_dma_mask;
>> dev->dma_pfn_offset	= offset;
>>
>> and they always need to be set, otherwise other DT-based arches will
>> be affected.
> 
> Ok, in which case we can either configure the struct device in
> of_dma_configure before calling arch_setup_dma_ops, or we can do the work
> in a generic version of arch_setup_dma_ops (that I haven't added yet).

I just think it would be simpler to add an arch API:
  arch_setup_iommu(struct device *dev, struct iommu_dma_mapping *iommu)

and call it at the end of of_dma_configure() as follows:

  arch_setup_iommu(dev, of_iommu_configure(dev));

so it will set up the IOMMU parameters for the device and override the
DMA parameters accordingly.

If the IOMMU is not present or is disabled, arch_setup_iommu() will be
a NOP and the system will fall back to the default (old) behaviour.

It will also simplify the ARM dma-mapping changes -
only the IOMMU code will need to be added.
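
Something along these lines (just a sketch; struct iommu_dma_mapping is
the structure from your series, and arch_setup_iommu() is the proposed
hook, not existing kernel API):

#ifndef arch_setup_iommu
/* Default no-op for architectures that don't implement the hook */
static inline void arch_setup_iommu(struct device *dev,
				    struct iommu_dma_mapping *iommu) { }
#endif

/* ... and at the end of of_dma_configure(): */
arch_setup_iommu(dev, of_iommu_configure(dev));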

Best regards,
-grygorii

Patch

diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index c45b61a4b4a5..dad006dabbe6 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -121,12 +121,17 @@  static inline unsigned long dma_max_pfn(struct device *dev)
 }
 #define dma_max_pfn(dev) dma_max_pfn(dev)
 
-static inline int set_arch_dma_coherent_ops(struct device *dev)
+static inline void arch_setup_dma_ops(struct device *dev, u64 mask,
+				      unsigned long offset, bool coherent)
 {
-	set_dma_ops(dev, &arm_coherent_dma_ops);
-	return 0;
+	dev->coherent_dma_mask	= mask;
+	dev->dma_mask		= &dev->coherent_dma_mask;
+	dev->dma_pfn_offset	= offset;
+
+	if (coherent)
+		set_dma_ops(dev, &arm_coherent_dma_ops);
 }
-#define set_arch_dma_coherent_ops(dev)	set_arch_dma_coherent_ops(dev)
+#define arch_setup_dma_ops arch_setup_dma_ops
 
 static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
 {
diff --git a/drivers/of/platform.c b/drivers/of/platform.c
index 0197725e033a..484c558c63a6 100644
--- a/drivers/of/platform.c
+++ b/drivers/of/platform.c
@@ -164,43 +164,23 @@  static void of_dma_configure(struct platform_device *pdev)
 {
 	u64 dma_addr, paddr, size;
 	int ret;
+	bool coherent;
+	unsigned long offset;
 	struct device *dev = &pdev->dev;
 
-	/*
-	 * Set default dma-mask to 32 bit. Drivers are expected to setup
-	 * the correct supported dma_mask.
-	 */
-	dev->coherent_dma_mask = DMA_BIT_MASK(32);
-
-	/*
-	 * Set it to coherent_dma_mask by default if the architecture
-	 * code has not set it.
-	 */
-	if (!dev->dma_mask)
-		dev->dma_mask = &dev->coherent_dma_mask;
+	ret = of_dma_get_range(dev->of_node, &dma_addr, &paddr, &size);
+	offset = ret < 0 ? 0 : PFN_DOWN(paddr - dma_addr);
+	dev_dbg(dev, "dma_pfn_offset(%#08lx)\n", dev->dma_pfn_offset);
 
-	/*
-	 * if dma-coherent property exist, call arch hook to setup
-	 * dma coherent operations.
-	 */
-	if (of_dma_is_coherent(dev->of_node)) {
-		set_arch_dma_coherent_ops(dev);
-		dev_dbg(dev, "device is dma coherent\n");
-	}
+	coherent = of_dma_is_coherent(dev->of_node);
+	dev_dbg(dev, "device is%sdma coherent\n",
+		coherent ? " " : " not ");
 
 	/*
-	 * if dma-ranges property doesn't exist - just return else
-	 * setup the dma offset
+	 * Set default dma-mask to 32 bit. Drivers are expected to setup
+	 * the correct supported dma_mask.
 	 */
-	ret = of_dma_get_range(dev->of_node, &dma_addr, &paddr, &size);
-	if (ret < 0) {
-		dev_dbg(dev, "no dma range information to setup\n");
-		return;
-	}
-
-	/* DMA ranges found. Calculate and set dma_pfn_offset */
-	dev->dma_pfn_offset = PFN_DOWN(paddr - dma_addr);
-	dev_dbg(dev, "dma_pfn_offset(%#08lx)\n", dev->dma_pfn_offset);
+	arch_setup_dma_ops(dev, DMA_BIT_MASK(32), offset, coherent);
 }
 
 /**
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 931b70986272..0f7f7b68b0db 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -129,11 +129,9 @@  static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
 
 extern u64 dma_get_required_mask(struct device *dev);
 
-#ifndef set_arch_dma_coherent_ops
-static inline int set_arch_dma_coherent_ops(struct device *dev)
-{
-	return 0;
-}
+#ifndef arch_setup_dma_ops
+static inline void arch_setup_dma_ops(struct device *dev, u64 mask,
+				      unsigned long offset, bool coherent) { }
 #endif
 
 static inline unsigned int dma_get_max_seg_size(struct device *dev)