From patchwork Wed Apr 20 01:04:26 2016
X-Patchwork-Submitter: Stephen Boyd
X-Patchwork-Id: 66142
From: Stephen Boyd
To: linux-kernel@vger.kernel.org
Cc: linux-arm@lists.infradead.org, Robin Murphy, Laura Abbott, Arnd Bergmann,
	Marek Szyprowski, Mimi Zohar, Andrew Morton, Mark Brown,
	Catalin Marinas, Will Deacon, Ming Lei, Laura Abbott
Subject: [RFC/PATCHv2 v2 1/4] ARM64: dma: Add support for NO_KERNEL_MAPPING attribute
Date: Tue, 19 Apr 2016 18:04:26 -0700
Message-Id: <1461114269-13718-2-git-send-email-stephen.boyd@linaro.org>
X-Mailer: git-send-email 2.8.0.rc4
In-Reply-To: <1461114269-13718-1-git-send-email-stephen.boyd@linaro.org>
References: <1461114269-13718-1-git-send-email-stephen.boyd@linaro.org>

Neither the IOMMU nor the non-IOMMU allocation path respects the
NO_KERNEL_MAPPING attribute, so drivers can't save the virtual address
space and the time spent mapping large buffers that are intended only
for userspace. Plumb this attribute through both types of DMA ops.

Cc: Robin Murphy
Cc: Laura Abbott
Cc: Arnd Bergmann
Cc: Marek Szyprowski
Signed-off-by: Stephen Boyd
---
 arch/arm64/mm/dma-mapping.c | 39 ++++++++++++++++++++++++++++++---------
 1 file changed, 30 insertions(+), 9 deletions(-)

-- 
2.8.0.rc4

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index a6e757cbab77..9686e722a047 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -169,6 +169,9 @@ static void *__dma_alloc(struct device *dev, size_t size,
 
 	/* create a coherent mapping */
 	page = virt_to_page(ptr);
+	if (dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs))
+		return page;
+
 	coherent_ptr = dma_common_contiguous_remap(page, size, VM_USERMAP,
 						   prot, NULL);
 	if (!coherent_ptr)
@@ -194,7 +197,8 @@ static void __dma_free(struct device *dev, size_t size,
 	if (!is_device_dma_coherent(dev)) {
 		if (__free_from_pool(vaddr, size))
 			return;
-		vunmap(vaddr);
+		if (!dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs))
+			vunmap(vaddr);
 	}
 	__dma_free_coherent(dev, size, swiotlb_addr, dma_handle, attrs);
 }
@@ -567,6 +571,9 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
 		if (!pages)
 			return NULL;
 
+		if (dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs))
+			return pages;
+
 		addr = dma_common_pages_remap(pages, size, VM_USERMAP, prot,
 					      __builtin_return_address(0));
 		if (!addr)
@@ -624,18 +631,32 @@ static void __iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
 		if (WARN_ON(!area || !area->pages))
 			return;
 		iommu_dma_free(dev, area->pages, iosize, &handle);
-		dma_common_free_remap(cpu_addr, size, VM_USERMAP);
+		if (!dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs))
+			dma_common_free_remap(cpu_addr, size, VM_USERMAP);
 	} else {
 		iommu_dma_unmap_page(dev, handle, iosize, 0, NULL);
 		__free_pages(virt_to_page(cpu_addr), get_order(size));
 	}
 }
 
+static struct page **__iommu_get_pages(void *cpu_addr, struct dma_attrs *attrs)
+{
+	struct vm_struct *area;
+
+	if (dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs))
+		return cpu_addr;
+
+	area = find_vm_area(cpu_addr);
+	if (area)
+		return area->pages;
+	return NULL;
+}
+
 static int __iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
 			      void *cpu_addr, dma_addr_t dma_addr, size_t size,
 			      struct dma_attrs *attrs)
 {
-	struct vm_struct *area;
+	struct page **pages;
 	int ret;
 
 	vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot,
@@ -644,11 +665,11 @@ static int __iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
 	if (dma_mmap_from_coherent(dev, vma, cpu_addr, size, &ret))
 		return ret;
 
-	area = find_vm_area(cpu_addr);
-	if (WARN_ON(!area || !area->pages))
+	pages = __iommu_get_pages(cpu_addr, attrs);
+	if (WARN_ON(!pages))
 		return -ENXIO;
 
-	return iommu_dma_mmap(area->pages, size, vma);
+	return iommu_dma_mmap(pages, size, vma);
 }
 
 static int __iommu_get_sgtable(struct device *dev, struct sg_table *sgt,
@@ -656,12 +677,12 @@ static int __iommu_get_sgtable(struct device *dev, struct sg_table *sgt,
 			       size_t size, struct dma_attrs *attrs)
 {
 	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	struct vm_struct *area = find_vm_area(cpu_addr);
+	struct page **pages = __iommu_get_pages(cpu_addr, attrs);
 
-	if (WARN_ON(!area || !area->pages))
+	if (WARN_ON(!pages))
 		return -ENXIO;
 
-	return sg_alloc_table_from_pages(sgt, area->pages, count, 0, size,
+	return sg_alloc_table_from_pages(sgt, pages, count, 0, size,
 					 GFP_KERNEL);
 }
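
A note for reviewers (not part of the patch): below is a rough driver-side
sketch of how an allocation that never needs a kernel mapping would be
requested with the pre-4.8 struct dma_attrs interface this series targets.
The example_* function names, the device pointer, the size and the vma from
the driver's mmap handler are placeholders; the point is only that the value
returned by dma_alloc_attrs() becomes an opaque cookie that must not be
dereferenced in the kernel and is passed back to dma_mmap_attrs() and
dma_free_attrs() together with the same attrs.

	#include <linux/dma-attrs.h>
	#include <linux/dma-mapping.h>

	static void *no_kmap_cookie;
	static dma_addr_t no_kmap_handle;

	/* Allocate a buffer that will only ever be mapped into userspace. */
	static int example_alloc(struct device *dev, size_t size)
	{
		DEFINE_DMA_ATTRS(attrs);

		dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);

		/*
		 * With NO_KERNEL_MAPPING the return value is an opaque
		 * cookie, not a kernel virtual address.
		 */
		no_kmap_cookie = dma_alloc_attrs(dev, size, &no_kmap_handle,
						 GFP_KERNEL, &attrs);
		return no_kmap_cookie ? 0 : -ENOMEM;
	}

	/* Hand the buffer to userspace from the driver's mmap handler. */
	static int example_mmap(struct device *dev, struct vm_area_struct *vma,
				size_t size)
	{
		DEFINE_DMA_ATTRS(attrs);

		dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);
		return dma_mmap_attrs(dev, vma, no_kmap_cookie, no_kmap_handle,
				      size, &attrs);
	}

	/* Free with the same attrs so no kernel remap teardown is attempted. */
	static void example_free(struct device *dev, size_t size)
	{
		DEFINE_DMA_ATTRS(attrs);

		dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);
		dma_free_attrs(dev, size, no_kmap_cookie, no_kmap_handle,
			       &attrs);
	}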