From patchwork Wed Mar 20 17:33:35 2013
X-Patchwork-Submitter: Benjamin Gaignard
X-Patchwork-Id: 15441
From: Benjamin Gaignard <benjamin.gaignard@linaro.org>
To: linaro-mm-sig@lists.linaro.org, rebecca@android.com
Date: Wed, 20 Mar 2013 18:33:35 +0100
Message-Id: <1363800815-2955-4-git-send-email-benjamin.gaignard@linaro.org>
X-Mailer: git-send-email 1.7.10
In-Reply-To: <1363800815-2955-1-git-send-email-benjamin.gaignard@linaro.org>
References: <1363800815-2955-1-git-send-email-benjamin.gaignard@linaro.org>
Subject: [Linaro-mm-sig] [PATCH v9 3/3] gpu: ion: add CMA heap

New heap type ION_HEAP_TYPE_DMA, where allocation is done with the
dma_alloc_coherent API. The device's coherent_dma_mask must be set to
DMA_BIT_MASK(32). The ion_platform_heap priv field is used to retrieve the
device linked to CMA; if it is NULL, the default CMA area is used.

ion_cma_get_sgtable is a copy of the dma_common_get_sgtable function, which
should be available in kernel 3.5.

Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
---
 drivers/gpu/ion/Makefile       |    1 +
 drivers/gpu/ion/ion_cma_heap.c |  221 ++++++++++++++++++++++++++++++++++++++++
 drivers/gpu/ion/ion_heap.c     |    6 ++
 drivers/gpu/ion/ion_priv.h     |   14 +++
 include/linux/ion.h            |    3 +
 5 files changed, 245 insertions(+)
 create mode 100644 drivers/gpu/ion/ion_cma_heap.c

diff --git a/drivers/gpu/ion/Makefile b/drivers/gpu/ion/Makefile
index 306fff9..1bd9c6a 100644
--- a/drivers/gpu/ion/Makefile
+++ b/drivers/gpu/ion/Makefile
@@ -1,3 +1,4 @@
 obj-$(CONFIG_ION) +=	ion.o ion_heap.o ion_page_pool.o ion_system_heap.o \
 			ion_carveout_heap.o ion_chunk_heap.o
+obj-$(CONFIG_ION_CMA) += ion_cma_heap.o
 obj-$(CONFIG_ION_TEGRA) += tegra/
diff --git a/drivers/gpu/ion/ion_cma_heap.c b/drivers/gpu/ion/ion_cma_heap.c
new file mode 100644
index 0000000..281608c
--- /dev/null
+++ b/drivers/gpu/ion/ion_cma_heap.c
@@ -0,0 +1,221 @@
+/*
+ * drivers/gpu/ion/ion_cma_heap.c
+ *
+ * Copyright (C) Linaro 2012
+ * Author: Benjamin Gaignard <benjamin.gaignard@linaro.org> for ST-Ericsson.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/device.h>
+#include <linux/ion.h>
+#include <linux/slab.h>
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/dma-mapping.h>
+
+/* for ion_heap_ops structure */
+#include "ion_priv.h"
+
+#define ION_CMA_ALLOCATE_FAILED -1
+
+struct ion_cma_buffer_info {
+	void *cpu_addr;
+	dma_addr_t handle;
+	struct sg_table *table;
+};
+
+/*
+ * Create scatter-list for the already allocated DMA buffer.
+ * This function could be replaced by dma_common_get_sgtable
+ * as soon as it is available.
+ */
+int ion_cma_get_sgtable(struct device *dev, struct sg_table *sgt,
+			void *cpu_addr, dma_addr_t handle, size_t size)
+{
+	struct page *page = virt_to_page(cpu_addr);
+	int ret;
+
+	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+	if (unlikely(ret))
+		return ret;
+
+	sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
+	return 0;
+}
+
+/*
+ * Create scatter-list for each page of the already allocated DMA buffer.
+ */
+int ion_cma_get_sgtable_per_page(struct device *dev, struct sg_table *sgt,
+				 void *cpu_addr, dma_addr_t handle, size_t size)
+{
+	struct page *page = virt_to_page(cpu_addr);
+	int ret, i;
+	struct scatterlist *sg;
+
+	ret = sg_alloc_table(sgt, PAGE_ALIGN(size) / PAGE_SIZE, GFP_KERNEL);
+	if (unlikely(ret))
+		return ret;
+
+	sg = sgt->sgl;
+	for (i = 0; i < (PAGE_ALIGN(size) / PAGE_SIZE); i++) {
+		page = virt_to_page(cpu_addr + (i * PAGE_SIZE));
+		sg_set_page(sg, page, PAGE_SIZE, 0);
+		sg = sg_next(sg);
+	}
+	return 0;
+}
+
+/* ION CMA heap operations functions */
+static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
+			    unsigned long len, unsigned long align,
+			    unsigned long flags)
+{
+	struct device *dev = heap->priv;
+	struct ion_cma_buffer_info *info;
+
+	dev_dbg(dev, "Request buffer allocation len %ld\n", len);
+
+	info = kzalloc(sizeof(struct ion_cma_buffer_info), GFP_KERNEL);
+	if (!info) {
+		dev_err(dev, "Can't allocate buffer info\n");
+		return ION_CMA_ALLOCATE_FAILED;
+	}
+
+	info->cpu_addr = dma_alloc_coherent(dev, len, &(info->handle), 0);
+
+	if (!info->cpu_addr) {
+		dev_err(dev, "Fail to allocate buffer\n");
+		goto err;
+	}
+
+	info->table = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
+	if (!info->table) {
+		dev_err(dev, "Fail to allocate sg table\n");
+		goto free_mem;
+	}
+
+	if (ion_buffer_fault_user_mappings(buffer)) {
+		if (ion_cma_get_sgtable_per_page
+			(dev, info->table, info->cpu_addr, info->handle, len))
+			goto free_table;
+	} else {
+		if (ion_cma_get_sgtable
+			(dev, info->table, info->cpu_addr, info->handle, len))
+			goto free_table;
+	}
+	/* keep this for memory release */
+	buffer->priv_virt = info;
+	dev_dbg(dev, "Allocate buffer %p\n", buffer);
+	return 0;
+
+free_table:
+	kfree(info->table);
+free_mem:
+	dma_free_coherent(dev, len, info->cpu_addr, info->handle);
+err:
+	kfree(info);
+	return ION_CMA_ALLOCATE_FAILED;
+}
+
+static void ion_cma_free(struct ion_buffer *buffer)
+{
+	struct device *dev = buffer->heap->priv;
+	struct ion_cma_buffer_info *info = buffer->priv_virt;
+
+	dev_dbg(dev, "Release buffer %p\n", buffer);
+	/* release memory */
+	dma_free_coherent(dev, buffer->size, info->cpu_addr, info->handle);
+	/* release sg table */
+	sg_free_table(info->table);
+	kfree(info->table);
+	kfree(info);
+}
+
+/* return physical address in addr */
+static int ion_cma_phys(struct ion_heap *heap, struct ion_buffer *buffer,
+			ion_phys_addr_t *addr, size_t *len)
+{
+	struct device *dev = heap->priv;
+	struct ion_cma_buffer_info *info = buffer->priv_virt;
+
+	dev_dbg(dev, "Return buffer %p physical address 0x%x\n", buffer,
+		info->handle);
+
+	*addr = info->handle;
+	*len = buffer->size;
+
+	return 0;
+}
+
+struct sg_table *ion_cma_heap_map_dma(struct ion_heap *heap,
+				      struct ion_buffer *buffer)
+{
+	struct ion_cma_buffer_info *info = buffer->priv_virt;
+
+	return info->table;
+}
+
+void ion_cma_heap_unmap_dma(struct ion_heap *heap,
+			    struct ion_buffer *buffer)
+{
+	return;
+}
+
+static int ion_cma_mmap(struct ion_heap *mapper, struct ion_buffer *buffer,
+			struct vm_area_struct *vma)
+{
+	struct device *dev = buffer->heap->priv;
+	struct ion_cma_buffer_info *info = buffer->priv_virt;
+
+	return dma_mmap_coherent(dev, vma, info->cpu_addr, info->handle,
+				 buffer->size);
+}
+
+void *ion_cma_map_kernel(struct ion_heap *heap, struct ion_buffer *buffer)
+{
+	struct ion_cma_buffer_info *info = buffer->priv_virt;
+	/* kernel memory mapping has been done at allocation time */
+	return info->cpu_addr;
+}
+
+static struct ion_heap_ops ion_cma_ops = {
+	.allocate = ion_cma_allocate,
+	.free = ion_cma_free,
+	.map_dma = ion_cma_heap_map_dma,
+	.unmap_dma = ion_cma_heap_unmap_dma,
+	.phys = ion_cma_phys,
+	.map_user = ion_cma_mmap,
+	.map_kernel = ion_cma_map_kernel,
+};
+
+struct ion_heap *ion_cma_heap_create(struct ion_platform_heap *data)
+{
+	struct ion_heap *heap;
+
+	heap = kzalloc(sizeof(struct ion_heap), GFP_KERNEL);
+
+	if (!heap)
+		return ERR_PTR(-ENOMEM);
+
+	heap->ops = &ion_cma_ops;
+	/* set device as private heap data, later it will be
+	 * used to make the link with reserved CMA memory */
+	heap->priv = data->priv;
+	heap->type = ION_HEAP_TYPE_DMA;
+	return heap;
+}
+
+void ion_cma_heap_destroy(struct ion_heap *heap)
+{
+	kfree(heap);
+}
diff --git a/drivers/gpu/ion/ion_heap.c b/drivers/gpu/ion/ion_heap.c
index 3ec6357..6eb4c32 100644
--- a/drivers/gpu/ion/ion_heap.c
+++ b/drivers/gpu/ion/ion_heap.c
@@ -147,6 +147,9 @@ struct ion_heap *ion_heap_create(struct ion_platform_heap *heap_data)
 	case ION_HEAP_TYPE_CHUNK:
 		heap = ion_chunk_heap_create(heap_data);
 		break;
+	case ION_HEAP_TYPE_DMA:
+		heap = ion_cma_heap_create(heap_data);
+		break;
 	default:
 		pr_err("%s: Invalid heap type %d\n", __func__,
 		       heap_data->type);
@@ -184,6 +187,9 @@ void ion_heap_destroy(struct ion_heap *heap)
 	case ION_HEAP_TYPE_CHUNK:
 		ion_chunk_heap_destroy(heap);
 		break;
+	case ION_HEAP_TYPE_DMA:
+		ion_cma_heap_destroy(heap);
+		break;
 	default:
 		pr_err("%s: Invalid heap type %d\n", __func__,
 		       heap->type);
diff --git a/drivers/gpu/ion/ion_priv.h b/drivers/gpu/ion/ion_priv.h
index c79a942..bbf3cad 100644
--- a/drivers/gpu/ion/ion_priv.h
+++ b/drivers/gpu/ion/ion_priv.h
@@ -234,6 +234,20 @@ ion_phys_addr_t ion_carveout_allocate(struct ion_heap *heap,
 				      unsigned long size,
 				      unsigned long align);
 void ion_carveout_free(struct ion_heap *heap, ion_phys_addr_t addr,
 		       unsigned long size);
+
+#ifdef CONFIG_CMA
+struct ion_heap *ion_cma_heap_create(struct ion_platform_heap *heap);
+void ion_cma_heap_destroy(struct ion_heap *heap);
+#else
+static inline struct ion_heap *ion_cma_heap_create(struct ion_platform_heap
+						   *heap)
+{
+	return NULL;
+}
+
+static inline void ion_cma_heap_destroy(struct ion_heap *heap) {};
+#endif
+
 /**
  * The carveout heap returns physical addresses, since 0 may be a valid
  * physical address, this is used to indicate allocation failed
diff --git a/include/linux/ion.h b/include/linux/ion.h
index 2ce49d0..440f4b3 100644
--- a/include/linux/ion.h
+++ b/include/linux/ion.h
@@ -27,6 +27,7 @@ struct ion_handle;
  * @ION_HEAP_TYPE_CARVEOUT:	 memory allocated from a prereserved
  *				 carveout heap, allocations are physically
  *				 contiguous
+ * @ION_HEAP_TYPE_DMA:		 memory allocated via DMA API
  * @ION_NUM_HEAPS:		 helper for iterating over heaps, a bit mask
  *				 is used to identify the heaps, so only 32
  *				 total heap types are supported
@@ -36,6 +37,7 @@ enum ion_heap_type {
 	ION_HEAP_TYPE_SYSTEM_CONTIG,
 	ION_HEAP_TYPE_CARVEOUT,
 	ION_HEAP_TYPE_CHUNK,
+	ION_HEAP_TYPE_DMA,
 	ION_HEAP_TYPE_CUSTOM, /* must be last so device specific heaps always
 				 are at the end of this enum */
 	ION_NUM_HEAPS = 16,
@@ -44,6 +46,7 @@ enum ion_heap_type {
 #define ION_HEAP_SYSTEM_MASK		(1 << ION_HEAP_TYPE_SYSTEM)
 #define ION_HEAP_SYSTEM_CONTIG_MASK	(1 << ION_HEAP_TYPE_SYSTEM_CONTIG)
 #define ION_HEAP_CARVEOUT_MASK		(1 << ION_HEAP_TYPE_CARVEOUT)
+#define ION_HEAP_TYPE_DMA_MASK		(1 << ION_HEAP_TYPE_DMA)
 #define ION_NUM_HEAP_IDS		sizeof(unsigned int) * 8
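
For reviewers who want to exercise the heap, a minimal board-file sketch of how
a device can be wired to an ION_HEAP_TYPE_DMA entry is given below. It only
illustrates the constraints stated in the commit message (a 32-bit
coherent_dma_mask and the device pointer carried in the ion_platform_heap priv
field, which ion_cma_heap_create() copies into heap->priv). The device, heap
id, heap name and function names are hypothetical and not part of this patch.

	/* usage sketch only, not part of the patch; all names are examples */
	#include <linux/dma-mapping.h>
	#include <linux/platform_device.h>
	#include <linux/ion.h>

	static struct platform_device my_cma_pdev = {
		.name = "my-cma-dev",	/* hypothetical device owning a CMA region */
		.id   = -1,
	};

	static struct ion_platform_heap my_ion_heaps[] = {
		{
			.type = ION_HEAP_TYPE_DMA,
			.id   = 1,
			.name = "cma",
			.priv = &my_cma_pdev.dev, /* NULL selects the default CMA area */
		},
	};

	static void __init my_board_ion_setup(void)
	{
		/* required for dma_alloc_coherent() in ion_cma_allocate() */
		my_cma_pdev.dev.coherent_dma_mask = DMA_BIT_MASK(32);
		/* my_ion_heaps[] is then handed to the platform ion driver,
		 * which calls ion_heap_create() for each entry */
	}

Associating an actual reserved CMA region with that device (for example via
dma_declare_contiguous() at early boot) follows whatever scheme the board
already uses; the only hard requirements added by this patch are the heap
type, the coherent_dma_mask and the priv pointer.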