From patchwork Sat May 12 08:42:10 2012
X-Patchwork-Submitter: Abhinav
X-Patchwork-Id: 8586
From: Abhinav
To: m.szyprowski@samsung.com
Cc: linaro-mm-sig@lists.linaro.org, kyungmin.park@samsung.com,
 abhinav.k@samsung.com, subash.rp@samsung.com
Date: Sat, 12 May 2012 14:12:10 +0530
Message-id: <1336812130-10132-1-git-send-email-abhinav.k@samsung.com>
X-Mailer: git-send-email 1.7.0.4
Subject: [Linaro-mm-sig] [PATCH 3/3] [RFC] dma-mapping: Add check inside IOMMU ops for kernel or user space allocation

With this change the IOMMU dma-mapping ops can decide at run time whether
an allocation is destined for kernel space or for user space, based on the
DMA_ATTR_USER_SPACE attribute passed in by the caller.
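For illustration, a caller would request a user-space allocation roughly as
in the sketch below. DMA_ATTR_USER_SPACE is assumed to be introduced by an
earlier patch in this series (it is not a mainline attribute), the helper
alloc_for_user() is hypothetical, and dma_alloc_attrs() is assumed to be
wired through to arm_iommu_alloc_attrs() as in this series:

	#include <linux/device.h>
	#include <linux/dma-mapping.h>
	#include <linux/dma-attrs.h>

	/*
	 * Minimal sketch of a caller requesting a user-space allocation.
	 * DMA_ATTR_USER_SPACE is assumed to come from an earlier patch in
	 * this series.
	 */
	static void *alloc_for_user(struct device *dev, size_t size,
				    dma_addr_t *handle)
	{
		DEFINE_DMA_ATTRS(attrs);

		dma_set_attr(DMA_ATTR_USER_SPACE, &attrs);

		/*
		 * With the attribute set, arm_iommu_alloc_attrs() returns a
		 * struct page_infodma * instead of a kernel virtual address.
		 */
		return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, &attrs);
	}

The matching release must pass the same attrs so that arm_iommu_free_attrs()
takes the user-space path.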
Signed-off-by: Abhinav
---
 arch/arm/mm/dma-mapping.c |   88 +++++++++++++++++++++++++++++++++++----------
 1 files changed, 69 insertions(+), 19 deletions(-)

-- 
1.7.0.4

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 2c5a285..4cd46b4 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -428,6 +428,7 @@ static void __dma_free_remap(void *cpu_addr, size_t size)
 	arm_vmregion_free(&consistent_head, c);
 }
 
+
 #else	/* !CONFIG_MMU */
 
 #define __dma_alloc_remap(page, size, gfp, prot, c)	page_address(page)
@@ -894,6 +895,35 @@ __iommu_alloc_remap(struct page **pages, size_t size, gfp_t gfp, pgprot_t prot)
 	size_t align;
 	size_t count = size >> PAGE_SHIFT;
 	int bit;
+	/* gfp doubles as the mem_type flag here: nonzero means the buffer
+	 * is destined for user space and needs no kernel remap */
+	unsigned long mem_type = (unsigned long)gfp;
+
+	if (mem_type) {
+		struct page_infodma *pages_in;
+
+		/* allocate the whole struct, not just a pointer's worth */
+		pages_in = kzalloc(sizeof(*pages_in), GFP_KERNEL);
+		if (!pages_in)
+			return NULL;
+
+		pages_in->nr_pages = count;
+
+		return (void *)pages_in;
+	}
+
+	/*
+	 * Align the virtual region allocation - maximum alignment is
+	 * a section size, minimum is a page size. This helps reduce
+	 * fragmentation of the DMA space, and also prevents allocations
+	 * smaller than a section from crossing a section boundary.
+	 */
+	bit = fls(size - 1);
+	if (bit > SECTION_SHIFT)
+		bit = SECTION_SHIFT;
+	align = 1 << bit;
 
 	if (!consistent_pte[0]) {
 		pr_err("%s: not initialised\n", __func__);
@@ -901,16 +931,6 @@ __iommu_alloc_remap(struct page **pages, size_t size, gfp_t gfp, pgprot_t prot)
 		return NULL;
 	}
 
-	/*
-	 * Align the virtual region allocation - maximum alignment is
-	 * a section size, minimum is a page size. This helps reduce
-	 * fragmentation of the DMA space, and also prevents allocations
-	 * smaller than a section from crossing a section boundary.
-	 */
-	bit = fls(size - 1);
-	if (bit > SECTION_SHIFT)
-		bit = SECTION_SHIFT;
-	align = 1 << bit;
 
 	/*
 	 * Allocate a virtual address in the consistent mapping region.
@@ -946,6 +966,7 @@ __iommu_alloc_remap(struct page **pages, size_t size, gfp_t gfp, pgprot_t prot)
 	return NULL;
 }
 
+
 /*
  * Create a mapping in device IO address space for specified pages
  */
@@ -973,13 +994,16 @@ __iommu_create_mapping(struct device *dev, struct page **pages, size_t size)
 
 		len = (j - i) << PAGE_SHIFT;
 		ret = iommu_map(mapping->domain, iova, phys, len, 0);
+
 		if (ret < 0)
 			goto fail;
+
 		iova += len;
 		i = j;
 	}
 	return dma_addr;
 fail:
+	/* tear down any partial mapping before releasing the iova range */
+	iommu_unmap(mapping->domain, dma_addr, iova - dma_addr);
 	__free_iova(mapping, dma_addr, size);
 	return DMA_ERROR_CODE;
 }
@@ -1007,6 +1031,8 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
 	pgprot_t prot = __get_dma_pgprot(attrs, pgprot_kernel);
 	struct page **pages;
 	void *addr = NULL;
+	struct page_infodma *page_ret;
+	unsigned long mem_type;
 
 	*handle = DMA_ERROR_CODE;
 	size = PAGE_ALIGN(size);
@@ -1019,11 +1045,19 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
 	if (*handle == DMA_ERROR_CODE)
 		goto err_buffer;
 
-	addr = __iommu_alloc_remap(pages, size, gfp, prot);
+	mem_type = dma_get_attr(DMA_ATTR_USER_SPACE, attrs);
+
+	/* mem_type is passed in place of gfp; see __iommu_alloc_remap() */
+	addr = __iommu_alloc_remap(pages, size, mem_type, prot);
 	if (!addr)
 		goto err_mapping;
 
-	return addr;
+	if (mem_type) {
+		page_ret = (struct page_infodma *)addr;
+		page_ret->pages = pages;
+		return page_ret;
+	}
+
+	return addr;
 
 err_mapping:
 	__iommu_remove_mapping(dev, *handle, size);
@@ -1071,18 +1105,34 @@ static int arm_iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
 void arm_iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
 			  dma_addr_t handle, struct dma_attrs *attrs)
 {
-	struct arm_vmregion *c;
+	unsigned long mem_type = dma_get_attr(DMA_ATTR_USER_SPACE, attrs);
+
 	size = PAGE_ALIGN(size);
 
-	c = arm_vmregion_find(&consistent_head, (unsigned long)cpu_addr);
-	if (c) {
-		struct page **pages = c->priv;
-		__dma_free_remap(cpu_addr, size);
-		__iommu_remove_mapping(dev, handle, size);
-		__iommu_free_buffer(dev, pages, size);
+	if (mem_type) {
+		/* user-space allocation: cpu_addr is the bookkeeping
+		 * struct, not a kernel mapping */
+		struct page_infodma *pagesin = cpu_addr;
+
+		if (pagesin) {
+			struct page **pages = pagesin->pages;
+
+			__iommu_remove_mapping(dev, handle, size);
+			__iommu_free_buffer(dev, pages, size);
+			kfree(pagesin);
+		}
+	} else {
+		struct arm_vmregion *c;
+
+		c = arm_vmregion_find(&consistent_head, (unsigned long)cpu_addr);
+		if (c) {
+			struct page **pages = c->priv;
+
+			__dma_free_remap(cpu_addr, size);
+			__iommu_remove_mapping(dev, handle, size);
+			__iommu_free_buffer(dev, pages, size);
+		}
 	}
 }
 
+
 /*
  * Map a part of the scatter-gather list into contiguous io address space
  */
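Note: struct page_infodma is not defined in this patch; its definition
presumably comes from an earlier patch in the series. Reconstructed from how
its fields are used above, it would look roughly like:

	/*
	 * Assumed shape of the bookkeeping struct, inferred from its use in
	 * __iommu_alloc_remap() and arm_iommu_free_attrs(); the real
	 * definition is expected in an earlier patch of this series.
	 */
	struct page_infodma {
		struct page	**pages;	/* backing pages of the buffer */
		unsigned int	nr_pages;	/* number of backing pages */
	};

In the user-space path no kernel remap is created, so this struct stands in
for the kernel virtual address that arm_iommu_alloc_attrs() would otherwise
return; the caller hands it back to arm_iommu_free_attrs(), which also frees
it.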