[RFC,2/3] ARM: dma-mapping: Pass DMA attrs as IOMMU prot

Message ID 20130620.112439.1330557591655135630.hdoyu@nvidia.com
State New

Commit Message

Hiroshi Doyu June 20, 2013, 8:24 a.m. UTC
Hi Nishanth,

Nishanth Peethambaran <nishanth.p@gmail.com> wrote @ Thu, 20 Jun 2013 10:07:00 +0200:

> It would be better to define a prot flag bit in the iommu API and convert
> the attrs to a prot flag bit in the dma-mapping API before calling the iommu
> API.

That's the 1st option.

> On Thu, Jun 20, 2013 at 11:19 AM, Hiroshi Doyu <hdoyu@nvidia.com> wrote:
....
> > @@ -1280,7 +1281,7 @@ ____iommu_create_mapping(struct device *dev, dma_addr_t *req,
> >                                 break;
> >
> >                 len = (j - i) << PAGE_SHIFT;
> > -               ret = iommu_map(mapping->domain, iova, phys, len, 0);
> > +               ret = iommu_map(mapping->domain, iova, phys, len, (int)attrs);
> 
> Use dma_get_attr and translate the READ_ONLY attr to a new READ_ONLY
> prot flag bit which needs to be defined in iommu.h

Both DMA_ATTR_READ_ONLY and IOMMU_READ are just logical bits in their
respective layers, and eventually each is converted to a H/W-dependent
bit.
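
As a concrete illustration of that first option, a minimal sketch could
look like the code below. IOMMU_READ and IOMMU_WRITE are the existing
prot bits in include/linux/iommu.h; DMA_ATTR_READ_ONLY is the attribute
this series proposes, and the helper name is hypothetical.

#include <linux/dma-attrs.h>
#include <linux/iommu.h>

/* Translate DMA attributes into an IOMMU prot mask inside dma-mapping. */
static int __dma_attrs_to_iommu_prot(struct dma_attrs *attrs)
{
	if (dma_get_attr(DMA_ATTR_READ_ONLY, attrs))
		return IOMMU_READ;

	return IOMMU_READ | IOMMU_WRITE;
}

The call site in ____iommu_create_mapping() quoted above would then
become:

	ret = iommu_map(mapping->domain, iova, phys, len,
			__dma_attrs_to_iommu_prot(attrs));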

If IOMMU is considered a specific case of DMA, sharing dma_attrs
between the IOMMU and DMA layers may not be so bad. IIRC, the ARM
dma-mapping API was implemented based on this concept(?).

Comments

Nishanth Peethambaran June 20, 2013, 8:55 a.m. UTC | #1
Hi Hiroshi,

My preference would be to keep the iommu prot flags separate from the
dma-mapping attrs. dma-attrs have options which are not related to the
IOMMU at all - like whether to create an ARM kernel mapping. iommu.h is
at a much lower level, where we can define prot flags that are useful
for IOMMU page-table management.

Thinking about it again, I feel the translation of dma-attrs should
happen when the dma-mapping code makes calls to the "iommu mapping" API.
The "iommu mapping" API, which does IOVA management and makes IOMMU API
calls, could be taken out of dma-mapping.c; I see discussion on this
already happening for arm64.

If devices allocate via the dma-mapping API, physical memory
allocation, IOVA allocation and IOMMU mapping happen internally.
Devices may instead allocate physical memory using any allocator (say
ION, including a carveout area) and use the "iommu mapping" API, which
will do IOVA allocation and IOMMU mapping. The prot flags could be
useful in this case as well - not sure whether we would need dma-attrs
here.
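
For illustration, such a factored-out "iommu mapping" layer could
expose an interface roughly like the sketch below; all of these names
are hypothetical, not an existing kernel API.

/* Hypothetical "iommu mapping" interface: owns IOVA allocation and
 * calls the primitive IOMMU API underneath. */
dma_addr_t iommu_mapping_map_pages(struct iommu_domain *domain,
				   struct page **pages, size_t count,
				   int prot);
void iommu_mapping_unmap(struct iommu_domain *domain, dma_addr_t iova,
			 size_t size);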

 - Nishanth Peethambaran




On Thu, Jun 20, 2013 at 1:54 PM, Hiroshi Doyu <hdoyu@nvidia.com> wrote:
> Hi Nishanth,
>
> Nishanth Peethambaran <nishanth.p@gmail.com> wrote @ Thu, 20 Jun 2013 10:07:00 +0200:
>
>> It would be better to define a prot flag bit in the iommu API and convert
>> the attrs to a prot flag bit in the dma-mapping API before calling the iommu
>> API.
>
> That's the 1st option.
>
>> On Thu, Jun 20, 2013 at 11:19 AM, Hiroshi Doyu <hdoyu@nvidia.com> wrote:
> ....
>> > @@ -1280,7 +1281,7 @@ ____iommu_create_mapping(struct device *dev, dma_addr_t *req,
>> >                                 break;
>> >
>> >                 len = (j - i) << PAGE_SHIFT;
>> > -               ret = iommu_map(mapping->domain, iova, phys, len, 0);
>> > +               ret = iommu_map(mapping->domain, iova, phys, len, (int)attrs);
>>
>> Use dma_get_attr and translate the READ_ONLY attr to a new READ_ONLY
>> prot flag bit which needs to be defined in iommu.h
>
> Both DMA_ATTR_READ_ONLY and IOMMU_READ are just logical bits in their
> respective layers, and eventually each is converted to a H/W-dependent
> bit.
>
> If IOMMU is considered a specific case of DMA, sharing dma_attrs
> between the IOMMU and DMA layers may not be so bad. IIRC, the ARM
> dma-mapping API was implemented based on this concept(?).
>
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index d8f98b1..161a1b0 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -755,7 +755,7 @@ int iommu_domain_has_cap(struct iommu_domain *domain,
>  EXPORT_SYMBOL_GPL(iommu_domain_has_cap);
>
>  int iommu_map(struct iommu_domain *domain, unsigned long iova,
> -             phys_addr_t paddr, size_t size, int prot)
> +             phys_addr_t paddr, size_t size, struct dma_attrs *attrs)
>  {
>         unsigned long orig_iova = iova;
>         unsigned int min_pagesz;
Hiroshi Doyu June 20, 2013, 9:54 a.m. UTC | #2
Nishanth Peethambaran <nishanth.p@gmail.com> wrote @ Thu, 20 Jun 2013 10:55:32 +0200:

> Thinking about it again, I feel the translation of dma-attrs should
> happen when the dma-mapping code makes calls to the "iommu mapping" API.
> The "iommu mapping" API, which does IOVA management and makes IOMMU API
> calls, could be taken out of dma-mapping.c; I see discussion on this
> already happening for arm64.

According to the discussion on arm64 dma-mapping, I got the impression
that the IOVA management part of the ARM dma-mapping API would become a
new common module (e.g. /lib/iommu-helper.c), but the IOMMU API itself
would remain as primitive as it is.

+---------------------------------
|     DMA mapping API
|                            +-----------------
+--------------+-----------+ | DMA mapping API($ARCH)
|   IOMMU API  |  IOVA MGT | +------------------
+--------------+-----------+
|IOMMU API(H/W)|
+--------------+
|  IOMMU H/W   |
+--------------+

> If devices allocate via the dma-mapping API, physical memory
> allocation, IOVA allocation and IOMMU mapping happen internally.
> Devices may instead allocate physical memory using any allocator (say
> ION, including a carveout area) and use the "iommu mapping" API, which
> will do IOVA allocation and IOMMU mapping. The prot flags could be
> useful in this case as well - not sure whether we would need dma-attrs
> here.

I haven't followed ION recently, but can't ION be backed by the DMA
mapping API instead of using the IOMMU API directly?
Arnd Bergmann June 20, 2013, 10:13 a.m. UTC | #3
On Thursday 20 June 2013, Hiroshi Doyu wrote:
> > If devices allocate via the dma-mapping API, physical memory
> > allocation, IOVA allocation and IOMMU mapping happen internally.
> > Devices may instead allocate physical memory using any allocator (say
> > ION, including a carveout area) and use the "iommu mapping" API, which
> > will do IOVA allocation and IOMMU mapping. The prot flags could be
> > useful in this case as well - not sure whether we would need dma-attrs
> > here.
> 
> I haven't followed ION recently, but can't ION be backed by the DMA
> mapping API instead of using the IOMMU API directly?

For a GPU with an IOMMU, you typically want per-process IOMMU contexts,
which are only available when using the IOMMU API directly, as the
dma-mapping abstraction uses only one context for kernel space.
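
To make the distinction concrete, a per-context user looks roughly like
the sketch below; iommu_domain_alloc(), iommu_attach_device() and
iommu_map() are the real IOMMU API calls of this era, while gpu_ctx and
the choice of addresses are illustrative only.

#include <linux/iommu.h>

struct gpu_ctx {
	struct iommu_domain *domain;	/* one domain per process */
};

static int gpu_ctx_init(struct gpu_ctx *ctx, struct device *dev)
{
	ctx->domain = iommu_domain_alloc(dev->bus);
	if (!ctx->domain)
		return -ENOMEM;

	/* Point the device at this process' address space. */
	return iommu_attach_device(ctx->domain, dev);
}

/* The driver, not the DMA core, chooses the bus-visible address. */
static int gpu_ctx_map(struct gpu_ctx *ctx, unsigned long iova,
		       phys_addr_t phys, size_t size)
{
	return iommu_map(ctx->domain, iova, phys, size,
			 IOMMU_READ | IOMMU_WRITE);
}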

	Arnd
Hiroshi Doyu June 20, 2013, 10:34 a.m. UTC | #4
Arnd Bergmann <arnd@arndb.de> wrote @ Thu, 20 Jun 2013 12:13:13 +0200:

> On Thursday 20 June 2013, Hiroshi Doyu wrote:
> > > If devices allocate via the dma-mapping API, physical memory
> > > allocation, IOVA allocation and IOMMU mapping happen internally.
> > > Devices may instead allocate physical memory using any allocator (say
> > > ION, including a carveout area) and use the "iommu mapping" API, which
> > > will do IOVA allocation and IOMMU mapping. The prot flags could be
> > > useful in this case as well - not sure whether we would need dma-attrs
> > > here.
> > 
> > I haven't followed ION recently, but can't ION be backed by the DMA
> > mapping API instead of using the IOMMU API directly?
> 
> For a GPU with an IOMMU, you typically want per-process IOMMU contexts,
> which are only available when using the IOMMU API directly, as the
> dma-mapping abstraction uses only one context for kernel space.

Yes, we did some experiments with switching IOMMU contexts through the
DMA mapping API. We needed to add some new DMA mapping APIs, and it
didn't look so nice at that time. What do you think about introducing
multiple contexts, or context switching, into the dma-mapping
abstraction?
Arnd Bergmann June 20, 2013, 10:57 a.m. UTC | #5
On Thursday 20 June 2013, Hiroshi Doyu wrote:
> Arnd Bergmann <arnd@arndb.de> wrote @ Thu, 20 Jun 2013 12:13:13 +0200:
> 
> > On Thursday 20 June 2013, Hiroshi Doyu wrote:
> > > > If devices allocate via the dma-mapping API, physical memory
> > > > allocation, IOVA allocation and IOMMU mapping happen internally.
> > > > Devices may instead allocate physical memory using any allocator (say
> > > > ION, including a carveout area) and use the "iommu mapping" API, which
> > > > will do IOVA allocation and IOMMU mapping. The prot flags could be
> > > > useful in this case as well - not sure whether we would need dma-attrs
> > > > here.
> > > 
> > > I haven't followed ION recently, but can't ION be backed by the DMA
> > > mapping API instead of using the IOMMU API directly?
> > 
> > For a GPU with an IOMMU, you typically want per-process IOMMU contexts,
> > which are only available when using the IOMMU API directly, as the
> > dma-mapping abstraction uses only one context for kernel space.
> 
> Yes, we did some experiments with switching IOMMU contexts through the
> DMA mapping API. We needed to add some new DMA mapping APIs, and it
> didn't look so nice at that time. What do you think about introducing
> multiple contexts, or context switching, into the dma-mapping
> abstraction?

My feeling is that drivers with the need for multiple contexts
are better off using the iommu API, since that is what it was
made for. The dma-mapping abstraction really tries to hide the
bus address assignment, while users with multiple contexts typically
also want to control the bus addresses.

	Arnd
Nishanth Peethambaran June 20, 2013, 12:49 p.m. UTC | #6
On Thu, Jun 20, 2013 at 4:27 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Thursday 20 June 2013, Hiroshi Doyu wrote:
>> Arnd Bergmann <arnd@arndb.de> wrote @ Thu, 20 Jun 2013 12:13:13 +0200:
>>
>> > On Thursday 20 June 2013, Hiroshi Doyu wrote:
>> > > > If devices allocate via the dma-mapping API, physical memory
>> > > > allocation, IOVA allocation and IOMMU mapping happen internally.
>> > > > Devices may instead allocate physical memory using any allocator (say
>> > > > ION, including a carveout area) and use the "iommu mapping" API, which
>> > > > will do IOVA allocation and IOMMU mapping. The prot flags could be
>> > > > useful in this case as well - not sure whether we would need dma-attrs
>> > > > here.
>> > >
>> > > I haven't followed ION recently, but can't ION be backed by the DMA
>> > > mapping API instead of using the IOMMU API directly?
>> >
>> > For a GPU with an IOMMU, you typically want per-process IOMMU contexts,
>> > which are only available when using the IOMMU API directly, as the
>> > dma-mapping abstraction uses only one context for kernel space.
>>
>> Yes, we did some experiments with switching IOMMU contexts through the
>> DMA mapping API. We needed to add some new DMA mapping APIs, and it
>> didn't look so nice at that time. What do you think about introducing
>> multiple contexts, or context switching, into the dma-mapping
>> abstraction?
>
> My feeling is that drivers with the need for multiple contexts
> are better off using the iommu API, since that is what it was
> made for. The dma-mapping abstraction really tries to hide the
> bus address assignment, while users with multiple contexts typically
> also want to control the bus addresses.
>
>         Arnd

ION is more of a physical memory allocator, supporting buddy pages as
well as memory reserved at boot time; the DMA heap is only one of its
heap types. For the system heap, ION provides an sg_table, which the
device has to map using the IOMMU API to get a dma_address for that
device. Even for the DMA heap, each device has to map the pages into
its own IOMMU, as Arnd mentioned.
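
A rough sketch of that per-device mapping step on a 2013-era kernel
(which has no iommu_map_sg() helper yet) might look like this; the
function name is hypothetical, and the caller is assumed to have
allocated a large-enough IOVA range:

#include <linux/iommu.h>
#include <linux/scatterlist.h>

/* Map an ION-provided sg_table into one device's IOMMU domain. */
static int map_sgt_to_domain(struct iommu_domain *domain,
			     unsigned long iova, struct sg_table *sgt,
			     int prot)
{
	struct scatterlist *sg;
	unsigned long cur = iova;
	int i, ret;

	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
		ret = iommu_map(domain, cur, sg_phys(sg), sg->length, prot);
		if (ret)
			goto err_unmap;
		cur += sg->length;
	}
	return 0;

err_unmap:
	/* Roll back whatever was mapped so far. */
	iommu_unmap(domain, iova, cur - iova);
	return ret;
}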

- Nishanth Peethambaran

Patch

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index d8f98b1..161a1b0 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -755,7 +755,7 @@  int iommu_domain_has_cap(struct iommu_domain *domain,
 EXPORT_SYMBOL_GPL(iommu_domain_has_cap);
 
 int iommu_map(struct iommu_domain *domain, unsigned long iova,
-	      phys_addr_t paddr, size_t size, int prot)
+	      phys_addr_t paddr, size_t size, struct dma_attrs *attrs)
 {
 	unsigned long orig_iova = iova;
 	unsigned int min_pagesz;