[2/5] iommu/exynos: Fix warnings from DMA-debug

Message ID 1479986420-30859-3-git-send-email-m.szyprowski@samsung.com
State New
Headers show

Commit Message

Marek Szyprowski Nov. 24, 2016, 11:20 a.m. UTC
Add simple checks for the dma_map_single() return value to make the
DMA-debug checker happy. The Exynos IOMMU on Samsung Exynos SoCs always
uses a device with linear DMA mapping ops (the DMA address is equal to
the physical memory address), so dma_map_single() never returns a
failure.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

---
 drivers/iommu/exynos-iommu.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

-- 
1.9.1
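
In isolation, the check described above pairs every dma_map_single()
call with a dma_mapping_error() check, as in this minimal sketch (the
map_pgtable() wrapper and its arguments are illustrative; only the DMA
API calls come from the patch):

#include <linux/bug.h>
#include <linux/dma-mapping.h>

/*
 * Sketch of the pattern added by the patch: DMA-debug expects every
 * dma_map_single() result to be checked with dma_mapping_error(), even
 * when the device has linear (dma == phys) mapping ops and the mapping
 * cannot actually fail.
 */
static dma_addr_t map_pgtable(struct device *dma_dev, void *table, size_t size)
{
	dma_addr_t handle;

	handle = dma_map_single(dma_dev, table, size, DMA_TO_DEVICE);
	BUG_ON(dma_mapping_error(dma_dev, handle));

	return handle;
}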

Comments

Joerg Roedel Nov. 29, 2016, 4:43 p.m. UTC | #1
On Thu, Nov 24, 2016 at 12:20:17PM +0100, Marek Szyprowski wrote:
> @@ -744,6 +744,7 @@ static struct iommu_domain *exynos_iommu_domain_alloc(unsigned type)
>  				DMA_TO_DEVICE);
>  	/* For mapping page table entries we rely on dma == phys */
>  	BUG_ON(handle != virt_to_phys(domain->pgtable));
> +	BUG_ON(dma_mapping_error(dma_dev, handle));

A BUG_ON is a bad way of handling this. Please propagate the error
upwards; there is no need to crash the kernel because of a failure like
this.


	Joerg
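
For illustration, a hedged sketch of the direction suggested above,
loosely modelled on the v4.9-era exynos_iommu_domain_alloc(): check
dma_mapping_error() and unwind instead of calling BUG_ON(). The helper
name, the order-2 allocation and the cleanup labels are assumptions,
not the driver's actual code.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/slab.h>

/*
 * Sketch only: propagate a dma_map_single() failure to the caller
 * instead of crashing.  struct exynos_iommu_domain, sysmmu_pte_t and
 * LV1TABLE_SIZE are the driver's own definitions; the labels and the
 * order-2 allocation are assumptions for this example.
 */
static struct exynos_iommu_domain *domain_alloc_sketch(struct device *dma_dev)
{
	struct exynos_iommu_domain *domain;
	dma_addr_t handle;

	domain = kzalloc(sizeof(*domain), GFP_KERNEL);
	if (!domain)
		return NULL;

	domain->pgtable = (sysmmu_pte_t *)__get_free_pages(GFP_KERNEL, 2);
	if (!domain->pgtable)
		goto err_pgtable;

	handle = dma_map_single(dma_dev, domain->pgtable, LV1TABLE_SIZE,
				DMA_TO_DEVICE);
	if (dma_mapping_error(dma_dev, handle))
		goto err_map;		/* propagate instead of BUG_ON() */

	return domain;

err_map:
	free_pages((unsigned long)domain->pgtable, 2);
err_pgtable:
	kfree(domain);
	return NULL;
}

Returning NULL from the driver's domain_alloc callback then surfaces as
a failed iommu_domain_alloc() for the caller.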

Patch

diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index ac726e1760de..e7851cffbbee 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -744,6 +744,7 @@ static struct iommu_domain *exynos_iommu_domain_alloc(unsigned type)
 				DMA_TO_DEVICE);
 	/* For mapping page table entries we rely on dma == phys */
 	BUG_ON(handle != virt_to_phys(domain->pgtable));
+	BUG_ON(dma_mapping_error(dma_dev, handle));
 
 	spin_lock_init(&domain->lock);
 	spin_lock_init(&domain->pgtablelock);
@@ -898,6 +899,7 @@ static sysmmu_pte_t *alloc_lv2entry(struct exynos_iommu_domain *domain,
 	}
 
 	if (lv1ent_fault(sent)) {
+		dma_addr_t handle;
 		sysmmu_pte_t *pent;
 		bool need_flush_flpd_cache = lv1ent_zero(sent);
 
@@ -909,7 +911,9 @@ static sysmmu_pte_t *alloc_lv2entry(struct exynos_iommu_domain *domain,
 		update_pte(sent, mk_lv1ent_page(virt_to_phys(pent)));
 		kmemleak_ignore(pent);
 		*pgcounter = NUM_LV2ENTRIES;
-		dma_map_single(dma_dev, pent, LV2TABLE_SIZE, DMA_TO_DEVICE);
+		handle = dma_map_single(dma_dev, pent, LV2TABLE_SIZE,
+					DMA_TO_DEVICE);
+		BUG_ON(dma_mapping_error(dma_dev, handle));
 
 		/*
 		 * If pre-fetched SLPD is a faulty SLPD in zero_l2_table,