From patchwork Mon Feb 4 13:23:02 2013
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 14527
From: Marek Szyprowski
To: linux-arm-kernel@lists.infradead.org, linaro-mm-sig@lists.linaro.org
Cc: Russell King - ARM Linux, Arnd Bergmann, Michal Nazarewicz,
 heesub.shin@samsung.com, Minchan Kim, Kyungmin Park,
 sj2202.park@samsung.com, lauraa@quicinc.com
Date: Mon, 04 Feb 2013 14:23:02 +0100
Message-id: <1359984182-6307-1-git-send-email-m.szyprowski@samsung.com>
In-reply-to: <1358350284-6972-2-git-send-email-m.szyprowski@samsung.com>
References: <1358350284-6972-2-git-send-email-m.szyprowski@samsung.com>
X-Mailer: git-send-email 1.7.9.5
Subject: [Linaro-mm-sig] [PATCHv2 1/2] ARM: dma-mapping: add support for
 CMA regions placed in highmem zone

This patch adds the missing pieces to correctly support memory pages
served from CMA regions placed in the high memory zone. Please note
that the default global CMA area is still placed in lowmem and is
limited by the optional architecture-specific DMA zone. One can,
however, put device-specific CMA regions in the high memory zone to
reduce lowmem usage.

Signed-off-by: Marek Szyprowski
Signed-off-by: Kyungmin Park
---
Changelog:
v2: restructured code and made all highmem checks positive
    ('if (PageHighMem(page))' instead of 'if (!PageHighMem(page))')
---
 arch/arm/mm/dma-mapping.c | 53 ++++++++++++++++++++++++++++++++++-----------
 1 file changed, 40 insertions(+), 13 deletions(-)
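For reference: a device-specific CMA region is placed from platform
code, before the page allocator takes over the memory. Below is a rough
sketch of such a reservation, assuming the current
dma_declare_contiguous() interface; 'example_dev', the 0x30000000 base
(a board whose lowmem ends there) and the 64 MiB size are made-up
examples, not values taken from this patch.

	#include <linux/dma-contiguous.h>
	#include <linux/sizes.h>
	#include <linux/init.h>

	/*
	 * Hypothetical board code: reserve a 64 MiB device-private CMA
	 * area starting above the lowmem boundary, typically called
	 * from the machine's .reserve() callback.
	 */
	static void __init example_reserve(void)
	{
		if (dma_declare_contiguous(&example_dev, SZ_64M,
					   0x30000000, 0))
			pr_warn("example: CMA reservation failed\n");
	}
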
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 076c26d..90e059b 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -186,13 +186,24 @@ static u64 get_coherent_dma_mask(struct device *dev)
 
 static void __dma_clear_buffer(struct page *page, size_t size)
 {
-	void *ptr;
 	/*
 	 * Ensure that the allocated pages are zeroed, and that any data
 	 * lurking in the kernel direct-mapped region is invalidated.
 	 */
-	ptr = page_address(page);
-	if (ptr) {
+	if (PageHighMem(page)) {
+		phys_addr_t base = __pfn_to_phys(page_to_pfn(page));
+		phys_addr_t end = base + size;
+		while (size > 0) {
+			void *ptr = kmap_atomic(page);
+			memset(ptr, 0, PAGE_SIZE);
+			dmac_flush_range(ptr, ptr + PAGE_SIZE);
+			kunmap_atomic(ptr);
+			page++;
+			size -= PAGE_SIZE;
+		}
+		outer_flush_range(base, end);
+	} else {
+		void *ptr = page_address(page);
 		memset(ptr, 0, size);
 		dmac_flush_range(ptr, ptr + size);
 		outer_flush_range(__pa(ptr), __pa(ptr) + size);
@@ -243,7 +254,8 @@ static void __dma_free_buffer(struct page *page, size_t size)
 #endif
 
 static void *__alloc_from_contiguous(struct device *dev, size_t size,
-				     pgprot_t prot, struct page **ret_page);
+				     pgprot_t prot, struct page **ret_page,
+				     const void *caller);
 
 static void *__alloc_remap_buffer(struct device *dev, size_t size, gfp_t gfp,
 				 pgprot_t prot, struct page **ret_page,
@@ -346,10 +358,11 @@ static int __init atomic_pool_init(void)
 		goto no_pages;
 
 	if (IS_ENABLED(CONFIG_CMA))
-		ptr = __alloc_from_contiguous(NULL, pool->size, prot, &page);
+		ptr = __alloc_from_contiguous(NULL, pool->size, prot, &page,
+					      atomic_pool_init);
 	else
 		ptr = __alloc_remap_buffer(NULL, pool->size, GFP_KERNEL, prot,
-					   &page, NULL);
+					   &page, atomic_pool_init);
 
 	if (ptr) {
 		int i;
@@ -542,27 +555,41 @@ static int __free_from_pool(void *start, size_t size)
 }
 
 static void *__alloc_from_contiguous(struct device *dev, size_t size,
-				     pgprot_t prot, struct page **ret_page)
+				     pgprot_t prot, struct page **ret_page,
+				     const void *caller)
 {
 	unsigned long order = get_order(size);
 	size_t count = size >> PAGE_SHIFT;
 	struct page *page;
+	void *ptr;
 
 	page = dma_alloc_from_contiguous(dev, count, order);
 	if (!page)
 		return NULL;
 
 	__dma_clear_buffer(page, size);
 
-	__dma_remap(page, size, prot);
+	if (PageHighMem(page)) {
+		ptr = __dma_alloc_remap(page, size, GFP_KERNEL, prot, caller);
+		if (!ptr) {
+			dma_release_from_contiguous(dev, page, count);
+			return NULL;
+		}
+	} else {
+		__dma_remap(page, size, prot);
+		ptr = page_address(page);
+	}
 	*ret_page = page;
-	return page_address(page);
+	return ptr;
 }
 
 static void __free_from_contiguous(struct device *dev, struct page *page,
-				   size_t size)
+				   void *cpu_addr, size_t size)
 {
-	__dma_remap(page, size, pgprot_kernel);
+	if (PageHighMem(page))
+		__dma_free_remap(cpu_addr, size);
+	else
+		__dma_remap(page, size, pgprot_kernel);
 	dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT);
 }
 
@@ -645,7 +672,7 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
	else if (!IS_ENABLED(CONFIG_CMA))
 		addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, caller);
 	else
-		addr = __alloc_from_contiguous(dev, size, prot, &page);
+		addr = __alloc_from_contiguous(dev, size, prot, &page, caller);
 
 	if (addr)
 		*handle = pfn_to_dma(dev, page_to_pfn(page));
@@ -739,7 +766,7 @@ static void __arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
 		 * Non-atomic allocations cannot be freed with IRQs disabled
 		 */
 		WARN_ON(irqs_disabled());
-		__free_from_contiguous(dev, page, size);
+		__free_from_contiguous(dev, page, cpu_addr, size);
 	}
 }
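
For completeness: with this change applied, a driver bound to a device
whose private CMA area lives in highmem allocates through the usual
coherent API; the returned CPU address then comes from the
__dma_alloc_remap() mapping rather than the kernel linear mapping. A
minimal driver-side sketch ('dev' and the 4 MiB size are illustrative):

	#include <linux/dma-mapping.h>
	#include <linux/sizes.h>

	/*
	 * Illustrative only: allocate and free a 4 MiB coherent buffer
	 * on a device whose CMA area may be placed in highmem.
	 */
	static int example_alloc(struct device *dev)
	{
		dma_addr_t dma_handle;
		void *cpu;

		cpu = dma_alloc_coherent(dev, SZ_4M, &dma_handle, GFP_KERNEL);
		if (!cpu)
			return -ENOMEM;

		/* ... program dma_handle into the device, use 'cpu' ... */

		dma_free_coherent(dev, SZ_4M, cpu, dma_handle);
		return 0;
	}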