From patchwork Thu Nov  8 06:38:57 2012
X-Patchwork-Submitter: Marek Szyprowski
X-Patchwork-Id: 12759
From: Marek Szyprowski
To: linux-arm-kernel@lists.infradead.org, linaro-mm-sig@lists.linaro.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Thomas Petazzoni, Andrew Lunn, Arnd Bergmann, Kyungmin Park,
	Soren Moch, Sebastian Hesselbarth
Date: Thu, 08 Nov 2012 07:38:57 +0100
Message-id: <1352356737-14413-1-git-send-email-m.szyprowski@samsung.com>
X-Mailer: git-send-email 1.7.9.5
Subject: [Linaro-mm-sig] [PATCH] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls

dmapool always calls dma_alloc_coherent() with the GFP_ATOMIC flag,
regardless of the flags provided by the caller. This causes excessive
pruning of emergency memory pools without any good reason. This patch
changes the code to correctly use the gfp flags provided by the dmapool
caller. This should fix dmapool usage on the ARM architecture, where
GFP_ATOMIC DMA allocations can be served only from a special, very
limited memory pool.
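For illustration only (not part of the patch): the locking pattern adopted
here, dropping the spinlock across a potentially sleeping allocation, then
re-taking it and publishing the new page, is generic. Below is a minimal
userspace sketch of the same idea using a pthread mutex; the names
mini_pool, grow_pool and mini_pool_alloc are made up for this sketch and do
not exist in the kernel:

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical miniature pool modelling the new dma_pool_alloc() scheme. */
struct page_node {
	struct page_node *next;
	int in_use;
};

struct mini_pool {
	pthread_mutex_t lock;
	struct page_node *pages;
};

/* May block (like pool_alloc_page() with a sleeping gfp mask), so the
 * caller must NOT hold pool->lock when invoking it. */
static struct page_node *grow_pool(void)
{
	return calloc(1, sizeof(struct page_node));
}

static struct page_node *mini_pool_alloc(struct mini_pool *pool)
{
	struct page_node *page;

	pthread_mutex_lock(&pool->lock);
	for (page = pool->pages; page; page = page->next)
		if (!page->in_use)
			goto ready;

	/* grow_pool() might sleep, so temporarily drop the lock. */
	pthread_mutex_unlock(&pool->lock);
	page = grow_pool();
	if (!page)
		return NULL;

	/* Re-take the lock before linking the new page into the list. */
	pthread_mutex_lock(&pool->lock);
	page->next = pool->pages;
	pool->pages = page;
ready:
	page->in_use = 1;
	pthread_mutex_unlock(&pool->lock);
	return page;
}
```

Because the lock is dropped, another thread may grow the pool concurrently;
like the patched dma_pool_alloc(), the sketch tolerates that by simply
linking the extra page in rather than retrying.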
Reported-by: Soeren Moch
Reported-by: Thomas Petazzoni
Signed-off-by: Marek Szyprowski
Tested-by: Andrew Lunn
Tested-by: Soeren Moch
---
 mm/dmapool.c | 27 +++++++--------------------
 1 file changed, 7 insertions(+), 20 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index c5ab33b..86de9b2 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -62,8 +62,6 @@ struct dma_page {	/* cacheable header for 'allocation' bytes */
 	unsigned int offset;
 };
 
-#define	POOL_TIMEOUT_JIFFIES	((100 /* msec */ * HZ) / 1000)
-
 static DEFINE_MUTEX(pools_lock);
 
 static ssize_t
@@ -227,7 +225,6 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
 		memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
 #endif
 		pool_initialise_page(pool, page);
-		list_add(&page->page_list, &pool->page_list);
 		page->in_use = 0;
 		page->offset = 0;
 	} else {
@@ -315,30 +312,21 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 	might_sleep_if(mem_flags & __GFP_WAIT);
 
 	spin_lock_irqsave(&pool->lock, flags);
- restart:
 	list_for_each_entry(page, &pool->page_list, page_list) {
 		if (page->offset < pool->allocation)
 			goto ready;
 	}
-	page = pool_alloc_page(pool, GFP_ATOMIC);
-	if (!page) {
-		if (mem_flags & __GFP_WAIT) {
-			DECLARE_WAITQUEUE(wait, current);
 
-			__set_current_state(TASK_UNINTERRUPTIBLE);
-			__add_wait_queue(&pool->waitq, &wait);
-			spin_unlock_irqrestore(&pool->lock, flags);
+	/* pool_alloc_page() might sleep, so temporarily drop &pool->lock */
+	spin_unlock_irqrestore(&pool->lock, flags);
 
-			schedule_timeout(POOL_TIMEOUT_JIFFIES);
+	page = pool_alloc_page(pool, mem_flags);
+	if (!page)
+		return NULL;
 
-			spin_lock_irqsave(&pool->lock, flags);
-			__remove_wait_queue(&pool->waitq, &wait);
-			goto restart;
-		}
-		retval = NULL;
-		goto done;
-	}
+	spin_lock_irqsave(&pool->lock, flags);
+	list_add(&page->page_list, &pool->page_list);
  ready:
 	page->in_use++;
 	offset = page->offset;
@@ -348,7 +336,6 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 #ifdef	DMAPOOL_DEBUG
 	memset(retval, POOL_POISON_ALLOCATED, pool->size);
 #endif
- done:
 	spin_unlock_irqrestore(&pool->lock, flags);
 	return retval;
 }