From patchwork Wed Jan 13 01:21:40 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 363113
Sender: Minchan Kim
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, LKML, hyesoo.yu@samsung.com, david@redhat.com,
	mhocko@suse.com, surenb@google.com, pullip.cho@samsung.com,
	joaodias@google.com, hridya@google.com, john.stultz@linaro.org,
	sumit.semwal@linaro.org, linux-media@vger.kernel.org,
	devicetree@vger.kernel.org, hch@infradead.org, robh+dt@kernel.org,
	linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v3 1/4] mm: cma: introduce gfp flag in cma_alloc instead of no_warn
Date: Tue, 12 Jan 2021 17:21:40 -0800
Message-Id: <20210113012143.1201105-2-minchan@kernel.org>
In-Reply-To:
<20210113012143.1201105-1-minchan@kernel.org>
References: <20210113012143.1201105-1-minchan@kernel.org>
MIME-Version: 1.0
X-Mailing-List: devicetree@vger.kernel.org

The upcoming patch will introduce __GFP_NORETRY semantics in
alloc_contig_range, which is a failfast mode of the API. Instead of
adding an additional gfp parameter, replace no_warn with a gfp flag.

To keep the old behaviors, it follows the rules below.

  no_warn                gfp_flags

  false                  GFP_KERNEL
  true                   GFP_KERNEL|__GFP_NOWARN
  gfp & __GFP_NOWARN     GFP_KERNEL | (gfp & __GFP_NOWARN)

Signed-off-by: Minchan Kim
Reviewed-by: Suren Baghdasaryan
---
 drivers/dma-buf/heaps/cma_heap.c |  2 +-
 drivers/s390/char/vmcp.c         |  2 +-
 include/linux/cma.h              |  2 +-
 kernel/dma/contiguous.c          |  3 ++-
 mm/cma.c                         | 12 ++++++------
 mm/cma_debug.c                   |  2 +-
 mm/hugetlb.c                     |  6 ++++--
 mm/secretmem.c                   |  3 ++-
 8 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 364fc2f3e499..0afc1907887a 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -298,7 +298,7 @@ static int cma_heap_allocate(struct dma_heap *heap,
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	cma_pages = cma_alloc(cma_heap->cma, pagecount, align, false);
+	cma_pages = cma_alloc(cma_heap->cma, pagecount, align, GFP_KERNEL);
 	if (!cma_pages)
 		goto free_buffer;
 
diff --git a/drivers/s390/char/vmcp.c b/drivers/s390/char/vmcp.c
index 9e066281e2d0..78f9adf56456 100644
--- a/drivers/s390/char/vmcp.c
+++ b/drivers/s390/char/vmcp.c
@@ -70,7 +70,7 @@ static void vmcp_response_alloc(struct vmcp_session *session)
 	 * anymore the system won't work anyway.
 	 */
 	if (order > 2)
-		page = cma_alloc(vmcp_cma, nr_pages, 0, false);
+		page = cma_alloc(vmcp_cma, nr_pages, 0, GFP_KERNEL);
 	if (page) {
 		session->response = (char *)page_to_phys(page);
 		session->cma_alloc = 1;
diff --git a/include/linux/cma.h b/include/linux/cma.h
index 217999c8a762..d6c02d08ddbc 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -45,7 +45,7 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 					const char *name,
 					struct cma **res_cma);
 extern struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
-			      bool no_warn);
+			      gfp_t gfp_mask);
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned int count);
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index 3d63d91cba5c..552ed531c018 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -260,7 +260,8 @@ struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	return cma_alloc(dev_get_cma_area(dev), count, align, no_warn);
+	return cma_alloc(dev_get_cma_area(dev), count, align, GFP_KERNEL |
+			(no_warn ? __GFP_NOWARN : 0));
 }
 
 /**
diff --git a/mm/cma.c b/mm/cma.c
index 0ba69cd16aeb..35053b82aedc 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -419,13 +419,13 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
  * @cma:   Contiguous memory region for which the allocation is performed.
  * @count: Requested number of pages.
  * @align: Requested alignment of pages (in PAGE_SIZE order).
- * @no_warn: Avoid printing message about failed allocation
+ * @gfp_mask: GFP mask to use during the cma allocation.
  *
  * This function allocates part of contiguous memory on specific
  * contiguous memory area.
 */
struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
-		       bool no_warn)
+		       gfp_t gfp_mask)
{
	unsigned long mask, offset;
	unsigned long pfn = -1;
@@ -438,8 +438,8 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	if (!cma || !cma->count || !cma->bitmap)
 		return NULL;
 
-	pr_debug("%s(cma %p, count %zu, align %d)\n", __func__, (void *)cma,
-		 count, align);
+	pr_debug("%s(cma %p, count %zu, align %d gfp_mask 0x%x)\n", __func__,
+		 (void *)cma, count, align, gfp_mask);
 
 	if (!count)
 		return NULL;
@@ -471,7 +471,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
 		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
-				GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
+				gfp_mask);
 
 		if (ret == 0) {
 			page = pfn_to_page(pfn);
@@ -500,7 +500,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 			page_kasan_tag_reset(page + i);
 	}
 
-	if (ret && !no_warn) {
+	if (ret && !(gfp_mask & __GFP_NOWARN)) {
 		pr_err("%s: alloc failed, req-size: %zu pages, ret: %d\n",
 			__func__, count, ret);
 		cma_debug_show_areas(cma);
diff --git a/mm/cma_debug.c b/mm/cma_debug.c
index d5bf8aa34fdc..00170c41cf81 100644
--- a/mm/cma_debug.c
+++ b/mm/cma_debug.c
@@ -137,7 +137,7 @@ static int cma_alloc_mem(struct cma *cma, int count)
 	if (!mem)
 		return -ENOMEM;
 
-	p = cma_alloc(cma, count, 0, false);
+	p = cma_alloc(cma, count, 0, GFP_KERNEL);
 	if (!p) {
 		kfree(mem);
 		return -ENOMEM;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 737b2dce19e6..695af33aa66c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1266,7 +1266,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 
 		if (hugetlb_cma[nid]) {
 			page = cma_alloc(hugetlb_cma[nid], nr_pages,
-					huge_page_order(h), true);
+					huge_page_order(h),
+					GFP_KERNEL | __GFP_NOWARN);
 			if (page)
 				return page;
 		}
@@ -1277,7 +1278,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 				continue;
 
 			page =
 cma_alloc(hugetlb_cma[node], nr_pages,
-					huge_page_order(h), true);
+					huge_page_order(h),
+					GFP_KERNEL | __GFP_NOWARN);
 			if (page)
 				return page;
 		}
diff --git a/mm/secretmem.c b/mm/secretmem.c
index b8a32954ac68..585d55b9f9d8 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -86,7 +86,8 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 	struct page *page;
 	int err;
 
-	page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE, gfp & __GFP_NOWARN);
+	page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE,
+			GFP_KERNEL | (gfp & __GFP_NOWARN));
 	if (!page)
 		return -ENOMEM;

From patchwork Wed Jan 13 01:21:41 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 362350
Sender: Minchan Kim
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, LKML,
	hyesoo.yu@samsung.com, david@redhat.com, mhocko@suse.com,
	surenb@google.com, pullip.cho@samsung.com, joaodias@google.com,
	hridya@google.com, john.stultz@linaro.org, sumit.semwal@linaro.org,
	linux-media@vger.kernel.org, devicetree@vger.kernel.org,
	hch@infradead.org, robh+dt@kernel.org, linaro-mm-sig@lists.linaro.org,
	Minchan Kim
Subject: [PATCH v3 2/4] mm: failfast mode with __GFP_NORETRY in alloc_contig_range
Date: Tue, 12 Jan 2021 17:21:41 -0800
Message-Id: <20210113012143.1201105-3-minchan@kernel.org>
In-Reply-To: <20210113012143.1201105-1-minchan@kernel.org>
References: <20210113012143.1201105-1-minchan@kernel.org>
X-Mailing-List: devicetree@vger.kernel.org

Contiguous memory allocation can stall while waiting on page writeback
and/or page locks, which causes unpredictable delay. That is an
unavoidable cost for a requestor that needs a *big* contiguous area,
but it is expensive for *small* contiguous memory (e.g., order-4),
because the caller could simply retry the request in a different range
that may contain easily migratable pages, without stalling.

This patch introduces __GFP_NORETRY as a compaction gfp_mask in
alloc_contig_range so the allocation fails fast, without blocking, when
it encounters pages that need waiting.
Signed-off-by: Minchan Kim
---
 mm/page_alloc.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5b3923db9158..ff41ceb4db51 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8489,12 +8489,16 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 	unsigned int nr_reclaimed;
 	unsigned long pfn = start;
 	unsigned int tries = 0;
+	unsigned int max_tries = 5;
 	int ret = 0;
 	struct migration_target_control mtc = {
 		.nid = zone_to_nid(cc->zone),
 		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
 	};
 
+	if (cc->alloc_contig && cc->mode == MIGRATE_ASYNC)
+		max_tries = 1;
+
 	migrate_prep();
 
 	while (pfn < end || !list_empty(&cc->migratepages)) {
@@ -8511,7 +8515,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 				break;
 			}
 			tries = 0;
-		} else if (++tries == 5) {
+		} else if (++tries == max_tries) {
 			ret = ret < 0 ? ret : -EBUSY;
 			break;
 		}
@@ -8562,7 +8566,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 		.nr_migratepages = 0,
 		.order = -1,
 		.zone = page_zone(pfn_to_page(start)),
-		.mode = MIGRATE_SYNC,
+		.mode = gfp_mask & __GFP_NORETRY ?
 MIGRATE_ASYNC : MIGRATE_SYNC,
 		.ignore_skip_hint = true,
 		.no_set_skip_hint = true,
 		.gfp_mask = current_gfp_context(gfp_mask),

From patchwork Wed Jan 13 01:21:42 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 362349
Sender: Minchan Kim
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, LKML, hyesoo.yu@samsung.com, david@redhat.com,
	mhocko@suse.com, surenb@google.com, pullip.cho@samsung.com,
	joaodias@google.com, hridya@google.com, john.stultz@linaro.org,
	sumit.semwal@linaro.org, linux-media@vger.kernel.org,
	devicetree@vger.kernel.org, hch@infradead.org, robh+dt@kernel.org,
	linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v3 3/4] dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable
Date: Tue, 12 Jan 2021 17:21:42 -0800
Message-Id: <20210113012143.1201105-4-minchan@kernel.org>
In-Reply-To: <20210113012143.1201105-1-minchan@kernel.org>
References: <20210113012143.1201105-1-minchan@kernel.org>
X-Mailing-List: devicetree@vger.kernel.org

From: Hyesoo Yu

Document the devicetree binding for the chunk CMA heap on the dma heap
framework. The DMA chunk heap supports bulk allocation of higher-order
pages.

Signed-off-by: Hyesoo Yu
Signed-off-by: Minchan Kim
Signed-off-by: Hridya Valsaraju
Change-Id: I8fb231e5a8360e2d8f65947e155b12aa664dde01
---
 .../reserved-memory/dma_heap_chunk.yaml       | 58 +++++++++++++++++++
 1 file changed, 58 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml

diff --git a/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml b/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
new file mode 100644
index 000000000000..3e7fed5fb006
--- /dev/null
+++ b/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
@@ -0,0 +1,58 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/reserved-memory/dma_heap_chunk.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Device tree binding for chunk heap on DMA HEAP FRAMEWORK
+
+description: |
+  The DMA chunk heap is backed by the Contiguous Memory Allocator (CMA) and
+  supports bulk allocation of fixed size pages.
+
+maintainers:
+  - Hyesoo Yu
+  - John Stultz
+  - Minchan Kim
+  - Hridya Valsaraju
+
+properties:
+  compatible:
+    enum:
+      - dma_heap,chunk
+
+  chunk-order:
+    description: |
+      order of pages that will get allocated from the chunk DMA heap.
+    maxItems: 1
+
+  size:
+    maxItems: 1
+
+  alignment:
+    maxItems: 1
+
+required:
+  - compatible
+  - size
+  - alignment
+  - chunk-order
+
+additionalProperties: false
+
+examples:
+  - |
+    reserved-memory {
+        #address-cells = <2>;
+        #size-cells = <1>;
+
+        chunk_memory: chunk_memory {
+            compatible = "dma_heap,chunk";
+            size = <0x3000000>;
+            alignment = <0x0 0x00010000>;
+            chunk-order = <4>;
+        };
+    };

From patchwork Wed Jan 13 01:21:43 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 363114
Sender: Minchan Kim
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, LKML, hyesoo.yu@samsung.com, david@redhat.com,
	mhocko@suse.com, surenb@google.com, pullip.cho@samsung.com,
	joaodias@google.com, hridya@google.com, john.stultz@linaro.org,
	sumit.semwal@linaro.org, linux-media@vger.kernel.org,
	devicetree@vger.kernel.org, hch@infradead.org, robh+dt@kernel.org,
	linaro-mm-sig@lists.linaro.org, Minchan Kim
Subject: [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
Date: Tue, 12 Jan 2021 17:21:43 -0800
Message-Id: <20210113012143.1201105-5-minchan@kernel.org>
In-Reply-To: <20210113012143.1201105-1-minchan@kernel.org>
References: <20210113012143.1201105-1-minchan@kernel.org>
X-Mailing-List: devicetree@vger.kernel.org

From: Hyesoo Yu

This patch adds a chunk heap that allocates buffers arranged as a list
of fixed-size chunks taken from CMA. The chunk heap driver is bound
directly to a reserved_memory node, following Rob Herring's suggestion
in [1].

[1] https://lore.kernel.org/lkml/20191025225009.50305-2-john.stultz@linaro.org/T/#m3dc63acd33fea269a584f43bb799a876f0b2b45d

Signed-off-by: Hyesoo Yu
Signed-off-by: Hridya Valsaraju
Signed-off-by: Minchan Kim
---
 drivers/dma-buf/heaps/Kconfig      |   8 +
 drivers/dma-buf/heaps/Makefile     |   1 +
 drivers/dma-buf/heaps/chunk_heap.c | 477 +++++++++++++++++++++++++++++
 3 files changed, 486 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/chunk_heap.c

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c4226..6527233f52a8 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,11 @@ config DMABUF_HEAPS_CMA
 	  Choose this option to enable dma-buf CMA heap. This heap is backed
 	  by the Contiguous Memory Allocator (CMA). If your system has these
 	  regions, you should say Y here.
+
+config DMABUF_HEAPS_CHUNK
+	bool "DMA-BUF CHUNK Heap"
+	depends on DMABUF_HEAPS && DMA_CMA
+	help
+	  Choose this option to enable the dma-buf CHUNK heap. This heap is
+	  backed by the Contiguous Memory Allocator (CMA) and allocates
+	  buffers arranged into a list of fixed-size chunks taken from CMA.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032..8faa6cfdc0c5 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_CHUNK)	+= chunk_heap.o
diff --git a/drivers/dma-buf/heaps/chunk_heap.c b/drivers/dma-buf/heaps/chunk_heap.c
new file mode 100644
index 000000000000..64f748c81e1f
--- /dev/null
+++ b/drivers/dma-buf/heaps/chunk_heap.c
@@ -0,0 +1,477 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMA-BUF chunk heap exporter
+ *
+ * Copyright (c) 2020 Samsung Electronics Co., Ltd.
+ * Author: for Samsung Electronics.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+struct chunk_heap {
+	struct dma_heap *heap;
+	uint32_t order;
+	struct cma *cma;
+};
+
+struct chunk_heap_buffer {
+	struct chunk_heap *heap;
+	struct list_head attachments;
+	struct mutex lock;
+	struct sg_table sg_table;
+	unsigned long len;
+	int vmap_cnt;
+	void *vaddr;
+};
+
+struct chunk_heap_attachment {
+	struct device *dev;
+	struct sg_table *table;
+	struct list_head list;
+	bool mapped;
+};
+
+struct chunk_heap chunk_heaps[MAX_CMA_AREAS];
+unsigned int chunk_heap_count;
+
+static struct sg_table *dup_sg_table(struct sg_table *table)
+{
+	struct sg_table *new_table;
+	int ret, i;
+	struct scatterlist *sg, *new_sg;
+
+	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
+	if (!new_table)
+		return ERR_PTR(-ENOMEM);
+
+	ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL);
+	if (ret) {
+		kfree(new_table);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	new_sg = new_table->sgl;
+	for_each_sgtable_sg(table, sg, i) {
+		sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset);
+		new_sg = sg_next(new_sg);
+	}
+
+	return new_table;
+}
+
+static int chunk_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a;
+	struct sg_table *table;
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	table = dup_sg_table(&buffer->sg_table);
+	if (IS_ERR(table)) {
+		kfree(a);
+		return -ENOMEM;
+	}
+
+	a->table = table;
+	a->dev = attachment->dev;
+
+	attachment->priv = a;
+
+	mutex_lock(&buffer->lock);
+	list_add(&a->list, &buffer->attachments);
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void chunk_heap_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a = attachment->priv;
+
+	mutex_lock(&buffer->lock);
+	list_del(&a->list);
+	mutex_unlock(&buffer->lock);
+
+	sg_free_table(a->table);
+	kfree(a->table);
+	kfree(a);
+}
+
+static struct sg_table *chunk_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+					       enum dma_data_direction direction)
+{
+	struct chunk_heap_attachment *a = attachment->priv;
+	struct sg_table *table = a->table;
+	int ret;
+
+	if (a->mapped)
+		return table;
+
+	ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+	if (ret)
+		return ERR_PTR(ret);
+
+	a->mapped = true;
+	return table;
+}
+
+static void chunk_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+				     struct sg_table *table,
+				     enum dma_data_direction direction)
+{
+	struct chunk_heap_attachment *a = attachment->priv;
+
+	a->mapped = false;
+	dma_unmap_sgtable(attachment->dev, table, direction, 0);
+}
+
+static int chunk_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+					       enum dma_data_direction direction)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a;
+
+	mutex_lock(&buffer->lock);
+
+	if (buffer->vmap_cnt)
+		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
+		dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static int chunk_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+					     enum dma_data_direction direction)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a;
+
+	mutex_lock(&buffer->lock);
+
+	if (buffer->vmap_cnt)
+		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
+		dma_sync_sgtable_for_device(a->dev, a->table, direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static int chunk_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct sg_table *table = &buffer->sg_table;
+	unsigned long addr = vma->vm_start;
+	struct sg_page_iter piter;
+	int ret;
+
+	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
+		struct page *page = sg_page_iter_page(&piter);
+
+		ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
+				      vma->vm_page_prot);
+		if (ret)
+			return ret;
+		addr += PAGE_SIZE;
+		if (addr >= vma->vm_end)
+			return 0;
+	}
+	return 0;
+}
+
+static void *chunk_heap_do_vmap(struct chunk_heap_buffer *buffer)
+{
+	struct sg_table *table = &buffer->sg_table;
+	int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
+	struct page **pages = vmalloc(sizeof(struct page *) * npages);
+	struct page **tmp = pages;
+	struct sg_page_iter piter;
+	void *vaddr;
+
+	if (!pages)
+		return ERR_PTR(-ENOMEM);
+
+	for_each_sgtable_page(table, &piter, 0) {
+		WARN_ON(tmp - pages >= npages);
+		*tmp++ = sg_page_iter_page(&piter);
+	}
+
+	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
+	vfree(pages);
+
+	if (!vaddr)
+		return ERR_PTR(-ENOMEM);
+
+	return vaddr;
+}
+
+static int chunk_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	void *vaddr;
+
+	mutex_lock(&buffer->lock);
+	if (buffer->vmap_cnt) {
+		vaddr = buffer->vaddr;
+	} else {
+		vaddr = chunk_heap_do_vmap(buffer);
+		if (IS_ERR(vaddr)) {
+			mutex_unlock(&buffer->lock);
+
+			return PTR_ERR(vaddr);
+		}
+		buffer->vaddr = vaddr;
+	}
+	buffer->vmap_cnt++;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void chunk_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+
+	mutex_lock(&buffer->lock);
+	if (!--buffer->vmap_cnt) {
+		vunmap(buffer->vaddr);
+		buffer->vaddr = NULL;
+	}
+	mutex_unlock(&buffer->lock);
+}
+
+static void chunk_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap *chunk_heap = buffer->heap;
+	struct sg_table *table;
+	struct scatterlist *sg;
+	int i;
+
+	table = &buffer->sg_table;
+	for_each_sgtable_sg(table, sg, i)
+		cma_release(chunk_heap->cma, sg_page(sg), 1 << chunk_heap->order);
+	sg_free_table(table);
+	kfree(buffer);
+}
+
+static const struct dma_buf_ops chunk_heap_buf_ops = {
+	.attach = chunk_heap_attach,
+	.detach = chunk_heap_detach,
+	.map_dma_buf = chunk_heap_map_dma_buf,
+	.unmap_dma_buf = chunk_heap_unmap_dma_buf,
+	.begin_cpu_access = chunk_heap_dma_buf_begin_cpu_access,
+	.end_cpu_access = chunk_heap_dma_buf_end_cpu_access,
+	.mmap = chunk_heap_mmap,
+	.vmap = chunk_heap_vmap,
+	.vunmap = chunk_heap_vunmap,
+	.release = chunk_heap_dma_buf_release,
+};
+
+static int chunk_heap_allocate(struct dma_heap *heap, unsigned long len,
+			       unsigned long fd_flags, unsigned long heap_flags)
+{
+	struct chunk_heap *chunk_heap = dma_heap_get_drvdata(heap);
+	struct chunk_heap_buffer *buffer;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct dma_buf *dmabuf;
+	struct sg_table *table;
+	struct scatterlist *sg;
+	struct page **pages;
+	unsigned int chunk_size = PAGE_SIZE << chunk_heap->order;
+	unsigned int count, alloced = 0;
+	unsigned int alloc_order = max_t(unsigned int, pageblock_order,
+					 chunk_heap->order);
+	unsigned int nr_chunks_per_alloc = 1 << (alloc_order - chunk_heap->order);
+	gfp_t gfp_flags = GFP_KERNEL|__GFP_NORETRY;
+	int ret = -ENOMEM;
+	pgoff_t pg;
+
+	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+	if (!buffer)
+		return ret;
+
+	INIT_LIST_HEAD(&buffer->attachments);
+	mutex_init(&buffer->lock);
+	buffer->heap = chunk_heap;
+	buffer->len = ALIGN(len, chunk_size);
+	count = buffer->len / chunk_size;
+
+	pages = kvmalloc_array(count, sizeof(*pages), GFP_KERNEL);
+	if (!pages)
+		goto err_pages;
+
+	while (alloced < count) {
+		struct page *page;
+		int i;
+
+		while (count - alloced < nr_chunks_per_alloc) {
+			alloc_order--;
+			nr_chunks_per_alloc >>= 1;
+		}
+
+		page = cma_alloc(chunk_heap->cma, 1 << alloc_order,
+				 alloc_order, gfp_flags);
+		if (!page) {
+			if (gfp_flags & __GFP_NORETRY) {
+				gfp_flags &= ~__GFP_NORETRY;
+				continue;
+			}
+			break;
+		}
+
+		for (i = 0; i < nr_chunks_per_alloc; i++, alloced++) {
+			pages[alloced] = page;
+			page += 1 << chunk_heap->order;
+		}
+	}
+
+	if (alloced < count)
+		goto err_alloc;
+
+	table = &buffer->sg_table;
+	if (sg_alloc_table(table, count, GFP_KERNEL))
+		goto err_alloc;
+
+	sg = table->sgl;
+	for (pg = 0; pg < count; pg++) {
+		sg_set_page(sg, pages[pg], chunk_size, 0);
+		sg = sg_next(sg);
+	}
+
+	exp_info.ops = &chunk_heap_buf_ops;
+	exp_info.size = buffer->len;
+	exp_info.flags = fd_flags;
+	exp_info.priv = buffer;
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto err_export;
+	}
+	kvfree(pages);
+
+	ret = dma_buf_fd(dmabuf, fd_flags);
+	if (ret < 0) {
+		dma_buf_put(dmabuf);
+		return ret;
+	}
+
+	return 0;
+err_export:
+	sg_free_table(table);
+err_alloc:
+	for (pg = 0; pg < alloced; pg++)
+		cma_release(chunk_heap->cma, pages[pg], 1 << chunk_heap->order);
+	kvfree(pages);
+err_pages:
+	kfree(buffer);
+
+	return ret;
+}
+
+static const struct dma_heap_ops chunk_heap_ops = {
+	.allocate = chunk_heap_allocate,
+};
+
+static int register_chunk_heap(struct 
chunk_heap *chunk_heap_info) +{ + struct dma_heap_export_info exp_info; + + exp_info.name = cma_get_name(chunk_heap_info->cma); + exp_info.ops = &chunk_heap_ops; + exp_info.priv = chunk_heap_info; + + chunk_heap_info->heap = dma_heap_add(&exp_info); + if (IS_ERR(chunk_heap_info->heap)) + return PTR_ERR(chunk_heap_info->heap); + + return 0; +} + +static int __init chunk_heap_init(void) +{ + unsigned int i; + + for (i = 0; i < chunk_heap_count; i++) + register_chunk_heap(&chunk_heaps[i]); + + return 0; +} +module_init(chunk_heap_init); + +#ifdef CONFIG_OF_EARLY_FLATTREE + +static int __init dmabuf_chunk_heap_area_init(struct reserved_mem *rmem) +{ + int ret; + struct cma *cma; + struct chunk_heap *chunk_heap_info; + const __be32 *chunk_order; + + phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order); + phys_addr_t mask = align - 1; + + if ((rmem->base & mask) || (rmem->size & mask)) { + pr_err("Incorrect alignment for CMA region\n"); + return -EINVAL; + } + + ret = cma_init_reserved_mem(rmem->base, rmem->size, 0, rmem->name, &cma); + if (ret) { + pr_err("Reserved memory: unable to setup CMA region\n"); + return ret; + } + + /* Architecture specific contiguous memory fixup. */ + dma_contiguous_early_fixup(rmem->base, rmem->size); + + chunk_heap_info = &chunk_heaps[chunk_heap_count]; + chunk_heap_info->cma = cma; + + chunk_order = of_get_flat_dt_prop(rmem->fdt_node, "chunk-order", NULL); + + if (chunk_order) + chunk_heap_info->order = be32_to_cpu(*chunk_order); + else + chunk_heap_info->order = 4; + + chunk_heap_count++; + + return 0; +} +RESERVEDMEM_OF_DECLARE(dmabuf_chunk_heap, "dma_heap,chunk", + dmabuf_chunk_heap_area_init); +#endif + +MODULE_DESCRIPTION("DMA-BUF Chunk Heap"); +MODULE_LICENSE("GPL v2");