From patchwork Wed Jun 17 12:10:52 2015
X-Patchwork-Submitter: Benjamin Gaignard
X-Patchwork-Id: 49976
Received: from gabe.freedesktop.org (gabe.freedesktop.org.
From: Benjamin Gaignard <benjamin.gaignard@linaro.org>
To: linux-media@vger.kernel.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, hverkuil@xs4all.nl, laurent.pinchart@ideasonboard.com, daniel.vetter@ffwll.ch, robdclark@gmail.com, treding@nvidia.com, airlied@redhat.com, sumit.semwal@linaro.org, gnomes@lxorguk.ukuu.org.uk, weigelt@melag.de
Subject: [PATCH 2/2] SMAF: add CMA allocator
Date: Wed, 17 Jun 2015 14:10:52 +0200
Message-Id: <1434543052-18795-3-git-send-email-benjamin.gaignard@linaro.org>
In-Reply-To: <1434543052-18795-1-git-send-email-benjamin.gaignard@linaro.org>
References: <1434543052-18795-1-git-send-email-benjamin.gaignard@linaro.org>
Cc: linaro-mm-sig@lists.linaro.org, Benjamin Gaignard, tom.gall@linaro.org

The SMAF CMA allocator implements the helper functions that allow SMAF to
allocate contiguous memory. Its match() succeeds if at least one of the
attached devices has its coherent_dma_mask set to DMA_BIT_MASK(32).

For allocation it uses dma_alloc_attrs() with DMA_ATTR_WRITE_COMBINE rather
than dma_alloc_writecombine(), so that it remains compatible with the 64-bit
ARM architecture.
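For reference, the allocation pattern described above boils down to the
sketch below. This is only an illustration (the helper name and parameters
are invented for this mail); the real code is in smaf_cma_allocate() in the
patch, and dma_alloc_writecombine() is the 32-bit-ARM-specific helper that
this pattern avoids:

	#include <linux/dma-mapping.h>	/* dma_alloc_attrs(), DMA_ATTR_WRITE_COMBINE */

	/* Allocate 'size' bytes of write-combined contiguous memory through
	 * the generic DMA attributes API, which is also available on arm64. */
	static void *alloc_wc_example(struct device *dev, size_t size,
				      dma_addr_t *paddr)
	{
		DEFINE_DMA_ATTRS(attrs);

		dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
		return dma_alloc_attrs(dev, size, paddr, GFP_KERNEL, &attrs);
	}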
Signed-off-by: Benjamin Gaignard
---
 drivers/smaf/Kconfig    |   6 ++
 drivers/smaf/Makefile   |   1 +
 drivers/smaf/smaf-cma.c | 198 ++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 205 insertions(+)
 create mode 100644 drivers/smaf/smaf-cma.c

diff --git a/drivers/smaf/Kconfig b/drivers/smaf/Kconfig
index d36651a..058ec4c 100644
--- a/drivers/smaf/Kconfig
+++ b/drivers/smaf/Kconfig
@@ -3,3 +3,9 @@ config SMAF
 	depends on DMA_SHARED_BUFFER
 	help
 	  Choose this option to enable Secure Memory Allocation Framework
+
+config SMAF_CMA
+	tristate "SMAF CMA allocator"
+	depends on SMAF && HAVE_DMA_ATTRS
+	help
+	  Choose this option to enable CMA allocation within SMAF
diff --git a/drivers/smaf/Makefile b/drivers/smaf/Makefile
index 40cd882..05bab01 100644
--- a/drivers/smaf/Makefile
+++ b/drivers/smaf/Makefile
@@ -1 +1,2 @@
 obj-$(CONFIG_SMAF) += smaf-core.o
+obj-$(CONFIG_SMAF_CMA) += smaf-cma.o
diff --git a/drivers/smaf/smaf-cma.c b/drivers/smaf/smaf-cma.c
new file mode 100644
index 0000000..b3ebd57
--- /dev/null
+++ b/drivers/smaf/smaf-cma.c
@@ -0,0 +1,198 @@
+/*
+ * smaf-cma.c
+ *
+ * Copyright (C) Linaro SA 2015
+ * Author: Benjamin Gaignard for Linaro.
+ * License terms: GNU General Public License (GPL), version 2
+ */
+
+#include
+#include
+#include
+#include
+
+struct smaf_cma_buffer_info {
+	struct device *dev;
+	size_t size;
+	void *vaddr;
+	dma_addr_t paddr;
+};
+
+/**
+ * find_matching_device - iterate over the attached devices to find one
+ * with coherent_dma_mask correctly set to DMA_BIT_MASK(32).
+ * Matching device (if any) will be used to aim CMA area.
+ */
+static struct device *find_matching_device(struct dma_buf *dmabuf)
+{
+	struct dma_buf_attachment *attach_obj;
+
+	list_for_each_entry(attach_obj, &dmabuf->attachments, node) {
+		if (attach_obj->dev->coherent_dma_mask == DMA_BIT_MASK(32))
+			return attach_obj->dev;
+	}
+
+	return NULL;
+}
+
+/**
+ * smaf_cma_match - return true if at least one device has been found
+ */
+static bool smaf_cma_match(struct dma_buf *dmabuf)
+{
+	return !!find_matching_device(dmabuf);
+}
+
+static void smaf_cma_release(struct dma_buf *dmabuf)
+{
+	struct smaf_cma_buffer_info *info = dmabuf->priv;
+	DEFINE_DMA_ATTRS(attrs);
+
+	dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
+
+	dma_free_attrs(info->dev, info->size, info->vaddr, info->paddr, &attrs);
+
+	kfree(info);
+}
+
+static struct sg_table *smaf_cma_map(struct dma_buf_attachment *attachment,
+				     enum dma_data_direction direction)
+{
+	struct smaf_cma_buffer_info *info = attachment->dmabuf->priv;
+	struct sg_table *sgt;
+	int ret;
+
+	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
+	if (!sgt)
+		return NULL;
+
+	ret = dma_get_sgtable(info->dev, sgt, info->vaddr,
+			      info->paddr, info->size);
+	if (ret < 0)
+		goto out;
+
+	sg_dma_address(sgt->sgl) = info->paddr;
+	return sgt;
+
+out:
+	kfree(sgt);
+	return NULL;
+}
+
+static void smaf_cma_unmap(struct dma_buf_attachment *attachment,
+			   struct sg_table *sgt,
+			   enum dma_data_direction direction)
+{
+	/* do nothing */
+}
+
+static int smaf_cma_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct smaf_cma_buffer_info *info = dmabuf->priv;
+	int ret;
+	DEFINE_DMA_ATTRS(attrs);
+
+	dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
+
+	if (info->size < vma->vm_end - vma->vm_start)
+		return -EINVAL;
+
+	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+	ret = dma_mmap_attrs(info->dev, vma, info->vaddr, info->paddr,
+			     info->size, &attrs);
+
+	return ret;
+}
+
+static void *smaf_cma_vmap(struct dma_buf *dmabuf)
+{
+	struct smaf_cma_buffer_info *info = dmabuf->priv;
+
+	return info->vaddr;
+}
+
+static void *smaf_kmap_atomic(struct dma_buf *dmabuf, unsigned long offset)
+{
+	struct smaf_cma_buffer_info *info = dmabuf->priv;
+
+	return (void *)info->paddr + offset;
+}
+
+static struct dma_buf_ops smaf_cma_ops = {
+	.map_dma_buf = smaf_cma_map,
+	.unmap_dma_buf = smaf_cma_unmap,
+	.mmap = smaf_cma_mmap,
+	.release = smaf_cma_release,
+	.kmap_atomic = smaf_kmap_atomic,
+	.kmap = smaf_kmap_atomic,
+	.vmap = smaf_cma_vmap,
+};
+
+static struct dma_buf *smaf_cma_allocate(struct dma_buf *dmabuf,
+					 size_t length, unsigned int flags)
+{
+	struct dma_buf_attachment *attach_obj;
+	struct smaf_cma_buffer_info *info;
+	struct dma_buf *cma_dmabuf;
+	int ret;
+
+	DEFINE_DMA_BUF_EXPORT_INFO(export);
+	DEFINE_DMA_ATTRS(attrs);
+
+	dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (!info)
+		return NULL;
+
+	info->size = round_up(length, PAGE_SIZE);
+	info->dev = find_matching_device(dmabuf);
+
+	info->vaddr = dma_alloc_attrs(info->dev, info->size, &info->paddr,
+				      GFP_KERNEL | __GFP_NOWARN, &attrs);
+	if (!info->vaddr) {
+		ret = -ENOMEM;
+		goto error;
+	}
+
+	export.ops = &smaf_cma_ops;
+	export.size = info->size;
+	export.flags = flags;
+	export.priv = info;
+
+	cma_dmabuf = dma_buf_export(&export);
+	if (IS_ERR(cma_dmabuf))
+		goto error;
+
+	list_for_each_entry(attach_obj, &dmabuf->attachments, node) {
+		dma_buf_attach(cma_dmabuf, attach_obj->dev);
+	}
+
+	return cma_dmabuf;
+
+error:
+	kfree(info);
+	return NULL;
+}
+
+struct smaf_allocator smaf_cma = {
+	.match = smaf_cma_match,
+	.allocate = smaf_cma_allocate,
+};
+
+static int __init smaf_cma_init(void)
+{
+	INIT_LIST_HEAD(&smaf_cma.list_node);
+	return smaf_register_allocator(&smaf_cma);
+}
+module_init(smaf_cma_init);
+
+static void __exit smaf_cma_deinit(void)
+{
+	smaf_unregister_allocator(&smaf_cma);
+}
+module_exit(smaf_cma_deinit);
+
+MODULE_DESCRIPTION("SMAF CMA module");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Benjamin Gaignard ");
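For completeness, any additional backend would plug into SMAF through the
same two hooks used above. The sketch below only restates that interface
with made-up smaf_foo_* names (it is not part of this series, and it assumes
the smaf_allocator definitions introduced by patch 1/2):

	/* Hypothetical second backend, for illustration only. */
	static bool smaf_foo_match(struct dma_buf *dmabuf)
	{
		/* Look at dmabuf->attachments and decide if this backend applies. */
		return false;
	}

	static struct dma_buf *smaf_foo_allocate(struct dma_buf *dmabuf,
						 size_t length, unsigned int flags)
	{
		/* Allocate 'length' bytes and export them as a new dma_buf. */
		return NULL;
	}

	static struct smaf_allocator smaf_foo = {
		.match = smaf_foo_match,
		.allocate = smaf_foo_allocate,
	};

	static int __init smaf_foo_init(void)
	{
		INIT_LIST_HEAD(&smaf_foo.list_node);
		return smaf_register_allocator(&smaf_foo);
	}
	module_init(smaf_foo_init);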