From patchwork Tue Apr 10 11:04:10 2012
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 7711
Date: Tue, 10 Apr 2012 13:04:10 +0200
From: Marek Szyprowski <m.szyprowski@samsung.com>
To: linux-arm-kernel@lists.infradead.org, linaro-mm-sig@lists.linaro.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, iommu@lists.linux-foundation.org
Cc: Russell King - ARM Linux, Arnd Bergmann, Konrad Rzeszutek Wilk, Benjamin Herrenschmidt, Kyungmin Park, Andrzej Pietrasiewicz, KyongHo Cho, Joerg Roedel
Subject: [Linaro-mm-sig] [PATCHv8 08/10] ARM: dma-mapping: remove redundant code and cleanup
Message-id: <1334055852-19500-9-git-send-email-m.szyprowski@samsung.com>
In-reply-to: <1334055852-19500-1-git-send-email-m.szyprowski@samsung.com>
References: <1334055852-19500-1-git-send-email-m.szyprowski@samsung.com>
X-Mailer: git-send-email 1.7.9.1
List-Id: "Unified memory management interest group."

This patch performs a global cleanup of the DMA mapping implementation for the
ARM architecture. Some of the tiny helper functions have been moved into their
callers, and some have been merged together.

Signed-off-by: Marek Szyprowski
Acked-by: Kyungmin Park
Acked-by: Arnd Bergmann
---
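Note: drivers never call the helpers touched below by name; they reach them
only through the generic streaming DMA API. As a reminder of that contract,
here is a minimal, hypothetical driver-side sketch (my_start_tx and its
arguments are illustrative only, not part of this patch):

	#include <linux/dma-mapping.h>

	static int my_start_tx(struct device *dev, struct page *page)
	{
		/* Pass ownership of the page to the device (CPU-to-device). */
		dma_addr_t handle = dma_map_page(dev, page, 0, PAGE_SIZE,
						 DMA_TO_DEVICE);

		if (dma_mapping_error(dev, handle))
			return -ENOMEM;

		/* ... program the device to DMA from 'handle' ... */

		/* Return ownership to the CPU once the transfer completes. */
		dma_unmap_page(dev, handle, PAGE_SIZE, DMA_TO_DEVICE);
		return 0;
	}

Because the dma_* API is the only entry point, the internal helpers are free
to change shape, which is what this patch does.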
 arch/arm/mm/dma-mapping.c |   88 ++++++++++++--------------------------------
 1 files changed, 24 insertions(+), 64 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 695c219..485c693 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -40,64 +40,12 @@
  * the CPU does do speculative prefetches, which means we clean caches
  * before transfers and delay cache invalidation until transfer completion.
  *
- * Private support functions: these are not part of the API and are
- * liable to change.  Drivers must not use these.
  */
-static inline void __dma_single_cpu_to_dev(const void *kaddr, size_t size,
-	enum dma_data_direction dir)
-{
-	extern void ___dma_single_cpu_to_dev(const void *, size_t,
-		enum dma_data_direction);
-
-	if (!arch_is_coherent())
-		___dma_single_cpu_to_dev(kaddr, size, dir);
-}
-
-static inline void __dma_single_dev_to_cpu(const void *kaddr, size_t size,
-	enum dma_data_direction dir)
-{
-	extern void ___dma_single_dev_to_cpu(const void *, size_t,
-		enum dma_data_direction);
-
-	if (!arch_is_coherent())
-		___dma_single_dev_to_cpu(kaddr, size, dir);
-}
-
-static inline void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
-	size_t size, enum dma_data_direction dir)
-{
-	extern void ___dma_page_cpu_to_dev(struct page *, unsigned long,
+static void __dma_page_cpu_to_dev(struct page *, unsigned long,
 		size_t, enum dma_data_direction);
-
-	if (!arch_is_coherent())
-		___dma_page_cpu_to_dev(page, off, size, dir);
-}
-
-static inline void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
-	size_t size, enum dma_data_direction dir)
-{
-	extern void ___dma_page_dev_to_cpu(struct page *, unsigned long,
+static void __dma_page_dev_to_cpu(struct page *, unsigned long,
 		size_t, enum dma_data_direction);
 
-	if (!arch_is_coherent())
-		___dma_page_dev_to_cpu(page, off, size, dir);
-}
-
-
-static inline dma_addr_t __dma_map_page(struct device *dev, struct page *page,
-	     unsigned long offset, size_t size, enum dma_data_direction dir)
-{
-	__dma_page_cpu_to_dev(page, offset, size, dir);
-	return pfn_to_dma(dev, page_to_pfn(page)) + offset;
-}
-
-static inline void __dma_unmap_page(struct device *dev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir)
-{
-	__dma_page_dev_to_cpu(pfn_to_page(dma_to_pfn(dev, handle)),
-		handle & ~PAGE_MASK, size, dir);
-}
-
 /**
  * arm_dma_map_page - map a portion of a page for streaming DMA
  * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
@@ -112,11 +60,13 @@ static inline void __dma_unmap_page(struct device *dev, dma_addr_t handle,
  * The device owns this memory once this call has completed.  The CPU
  * can regain ownership by calling dma_unmap_page().
  */
-static inline dma_addr_t arm_dma_map_page(struct device *dev, struct page *page,
+static dma_addr_t arm_dma_map_page(struct device *dev, struct page *page,
 	     unsigned long offset, size_t size, enum dma_data_direction dir,
 	     struct dma_attrs *attrs)
 {
-	return __dma_map_page(dev, page, offset, size, dir);
+	if (!arch_is_coherent())
+		__dma_page_cpu_to_dev(page, offset, size, dir);
+	return pfn_to_dma(dev, page_to_pfn(page)) + offset;
 }
 
 /**
@@ -133,27 +83,31 @@ static inline dma_addr_t arm_dma_map_page(struct device *dev, struct page *page,
  * After this call, reads by the CPU to the buffer are guaranteed to see
  * whatever the device wrote there.
  */
-static inline void arm_dma_unmap_page(struct device *dev, dma_addr_t handle,
+static void arm_dma_unmap_page(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir,
 		struct dma_attrs *attrs)
 {
-	__dma_unmap_page(dev, handle, size, dir);
+	if (!arch_is_coherent())
+		__dma_page_dev_to_cpu(pfn_to_page(dma_to_pfn(dev, handle)),
+				      handle & ~PAGE_MASK, size, dir);
 }
 
-static inline void arm_dma_sync_single_for_cpu(struct device *dev,
+static void arm_dma_sync_single_for_cpu(struct device *dev,
 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {
 	unsigned int offset = handle & (PAGE_SIZE - 1);
 	struct page *page = pfn_to_page(dma_to_pfn(dev, handle-offset));
-	__dma_page_dev_to_cpu(page, offset, size, dir);
+	if (!arch_is_coherent())
+		__dma_page_dev_to_cpu(page, offset, size, dir);
 }
 
-static inline void arm_dma_sync_single_for_device(struct device *dev,
+static void arm_dma_sync_single_for_device(struct device *dev,
 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {
 	unsigned int offset = handle & (PAGE_SIZE - 1);
 	struct page *page = pfn_to_page(dma_to_pfn(dev, handle-offset));
-	__dma_page_cpu_to_dev(page, offset, size, dir);
+	if (!arch_is_coherent())
+		__dma_page_cpu_to_dev(page, offset, size, dir);
 }
 
 static int arm_dma_set_mask(struct device *dev, u64 dma_mask);
@@ -644,7 +598,13 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
 	} while (left);
 }
 
-void ___dma_page_cpu_to_dev(struct page *page, unsigned long off,
+/*
+ * Make an area consistent for devices.
+ * Note: Drivers should NOT use this function directly, as it will break
+ * platforms with CONFIG_DMABOUNCE.
+ * Use the driver DMA support - see dma-mapping.h (dma_sync_*)
+ */
+static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
 	size_t size, enum dma_data_direction dir)
 {
 	unsigned long paddr;
@@ -660,7 +620,7 @@ void ___dma_page_cpu_to_dev(struct page *page, unsigned long off,
 	/* FIXME: non-speculating: flush on bidirectional mappings? */
 }
 
-void ___dma_page_dev_to_cpu(struct page *page, unsigned long off,
+static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
 	size_t size, enum dma_data_direction dir)
 {
 	unsigned long paddr = page_to_phys(page) + off;
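Context for why the entry points above can become static: earlier patches in
this series route all callers through the generic dma_map_ops table instead of
calling these functions by name. A simplified sketch of that wiring (field
subset only, assuming the arm_dma_ops table introduced earlier in this series;
see that patch for the real definition):

	/* The streaming DMA entry points are reached only via this ops
	 * table, so their symbols need not be visible outside this file. */
	struct dma_map_ops arm_dma_ops = {
		.map_page		= arm_dma_map_page,
		.unmap_page		= arm_dma_unmap_page,
		.sync_single_for_cpu	= arm_dma_sync_single_for_cpu,
		.sync_single_for_device	= arm_dma_sync_single_for_device,
		/* ... remaining ops elided ... */
	};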