From patchwork Wed Jan 20 21:09:37 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 367098
Delivered-To: patches@linaro.org
From: John Stultz
To: lkml
Cc: Bing Song, Daniel Vetter, Sumit Semwal, Liam Mark, Laura Abbott,
 Brian Starkey, Hridya Valsaraju, Suren Baghdasaryan, Sandeep Patil,
 Daniel Mentz, Chris Goldsworthy, Ørjan Eide, Robin Murphy,
 Ezequiel Garcia, Simon Ser, James Jones, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org, John Stultz
Subject: [PATCH 3/3] dma-buf: cma_heap: Add a cma-uncached heap re-using the cma heap
Date: Wed, 20 Jan 2021 21:09:37 +0000
Message-Id: <20210120210937.15069-4-john.stultz@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210120210937.15069-1-john.stultz@linaro.org>
References: <20210120210937.15069-1-john.stultz@linaro.org>
MIME-Version: 1.0

From: Bing Song

This adds a heap that allocates CMA buffers that are marked as
writecombined, so they are not cached by the CPU.
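For reference, a minimal userspace sketch (not part of this patch) of allocating
from the new heap once it shows up under /dev/dma_heap/. It uses the existing
dma-heap uAPI (struct dma_heap_allocation_data, DMA_HEAP_IOCTL_ALLOC); the
"linux,cma" region name is only an assumption, the real node is named
"<cma region name>-uncached":

	/*
	 * Illustrative only: the node name below assumes a CMA region
	 * called "linux,cma"; substitute the region name on the target.
	 */
	#include <fcntl.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <unistd.h>
	#include <linux/dma-heap.h>

	int alloc_uncached_cma_buf(size_t len, void **map)
	{
		struct dma_heap_allocation_data data = {
			.len = len,
			.fd_flags = O_RDWR | O_CLOEXEC,
		};
		int heap_fd = open("/dev/dma_heap/linux,cma-uncached",
				   O_RDWR | O_CLOEXEC);

		if (heap_fd < 0)
			return -1;
		if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data) < 0) {
			close(heap_fd);
			return -1;
		}
		close(heap_fd);

		/* cma_heap_mmap() gives this mapping write-combined protections. */
		*map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			    data.fd, 0);
		if (*map == MAP_FAILED) {
			close(data.fd);
			return -1;
		}
		memset(*map, 0, len);
		return data.fd;	/* dma-buf fd to pass on to a driver */
	}

The intent is that CPU access through such a mapping needs no cache
maintenance, which is why begin/end_cpu_access below skip the sync for
uncached buffers.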
Cc: Daniel Vetter
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Chris Goldsworthy
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Bing Song
Signed-off-by: John Stultz
---
 drivers/dma-buf/heaps/cma_heap.c | 119 +++++++++++++++++++++++++++----
 1 file changed, 107 insertions(+), 12 deletions(-)

-- 
2.17.1

diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 364fc2f3e499..1b8c6eb0a8ea 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -38,6 +38,7 @@ struct cma_heap_buffer {
 	pgoff_t pagecount;
 	int vmap_cnt;
 	void *vaddr;
+	bool uncached;
 };
 
 struct dma_heap_attachment {
@@ -45,6 +46,7 @@ struct dma_heap_attachment {
 	struct sg_table table;
 	struct list_head list;
 	bool mapped;
+	bool uncached;
 };
 
 static int cma_heap_attach(struct dma_buf *dmabuf,
@@ -70,6 +72,7 @@ static int cma_heap_attach(struct dma_buf *dmabuf,
 	a->dev = attachment->dev;
 	INIT_LIST_HEAD(&a->list);
 	a->mapped = false;
+	a->uncached = buffer->uncached;
 
 	attachment->priv = a;
 
@@ -99,8 +102,12 @@ static struct sg_table *cma_heap_map_dma_buf(struct dma_buf_attachment *attachment,
 {
 	struct dma_heap_attachment *a = attachment->priv;
 	struct sg_table *table = &a->table;
+	int attr = 0;
 	int ret;
 
+	if (a->uncached)
+		attr = DMA_ATTR_SKIP_CPU_SYNC;
+
 	ret = dma_map_sgtable(attachment->dev, table, direction, 0);
 	if (ret)
 		return ERR_PTR(-ENOMEM);
@@ -113,7 +120,10 @@ static void cma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
 				   enum dma_data_direction direction)
 {
 	struct dma_heap_attachment *a = attachment->priv;
+	int attr = 0;
 
+	if (a->uncached)
+		attr = DMA_ATTR_SKIP_CPU_SYNC;
 	a->mapped = false;
 	dma_unmap_sgtable(attachment->dev, table, direction, 0);
 }
@@ -128,10 +138,12 @@ static int cma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
 
 	mutex_lock(&buffer->lock);
-	list_for_each_entry(a, &buffer->attachments, list) {
-		if (!a->mapped)
-			continue;
-		dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
+	if (!buffer->uncached) {
+		list_for_each_entry(a, &buffer->attachments, list) {
+			if (!a->mapped)
+				continue;
+			dma_sync_sgtable_for_cpu(a->dev, &a->table, direction);
+		}
 	}
 	mutex_unlock(&buffer->lock);
 
@@ -148,10 +160,12 @@ static int cma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
 
 	mutex_lock(&buffer->lock);
-	list_for_each_entry(a, &buffer->attachments, list) {
-		if (!a->mapped)
-			continue;
-		dma_sync_sgtable_for_device(a->dev, &a->table, direction);
+	if (!buffer->uncached) {
+		list_for_each_entry(a, &buffer->attachments, list) {
+			if (!a->mapped)
+				continue;
+			dma_sync_sgtable_for_device(a->dev, &a->table, direction);
+		}
 	}
 	mutex_unlock(&buffer->lock);
 
@@ -183,6 +197,9 @@ static int cma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
 		return -EINVAL;
 
+	if (buffer->uncached)
+		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+
 	vma->vm_ops = &dma_heap_vm_ops;
 	vma->vm_private_data = buffer;
 
@@ -191,9 +208,13 @@ static int cma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 
 static void *cma_heap_do_vmap(struct cma_heap_buffer *buffer)
 {
+	pgprot_t pgprot = PAGE_KERNEL;
 	void *vaddr;
 
-	vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
+	if (buffer->uncached)
+		pgprot = pgprot_writecombine(PAGE_KERNEL);
+
+	vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, pgprot);
 	if (!vaddr)
 		return ERR_PTR(-ENOMEM);
 
@@ -271,10 +292,11 @@ static const struct dma_buf_ops cma_heap_buf_ops = {
 	.release = cma_heap_dma_buf_release,
 };
 
-static int cma_heap_allocate(struct dma_heap *heap,
+static int cma_heap_do_allocate(struct dma_heap *heap,
 			     unsigned long len,
 			     unsigned long fd_flags,
-			     unsigned long heap_flags)
+			     unsigned long heap_flags,
+			     bool uncached)
 {
 	struct cma_heap *cma_heap = dma_heap_get_drvdata(heap);
 	struct cma_heap_buffer *buffer;
@@ -283,8 +305,9 @@ static int cma_heap_allocate(struct dma_heap *heap,
 	pgoff_t pagecount = size >> PAGE_SHIFT;
 	unsigned long align = get_order(size);
 	struct page *cma_pages;
+	struct sg_table table;
 	struct dma_buf *dmabuf;
-	int ret = -ENOMEM;
+	int ret = -ENOMEM, ret_sg_table;
 	pgoff_t pg;
 
 	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
@@ -294,6 +317,7 @@ static int cma_heap_allocate(struct dma_heap *heap,
 	INIT_LIST_HEAD(&buffer->attachments);
 	mutex_init(&buffer->lock);
 	buffer->len = size;
+	buffer->uncached = uncached;
 
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
@@ -356,6 +380,18 @@ static int cma_heap_allocate(struct dma_heap *heap,
 		return ret;
 	}
 
+	if (buffer->uncached) {
+		ret_sg_table = sg_alloc_table(&table, 1, GFP_KERNEL);
+		if (ret_sg_table)
+			return ret_sg_table;
+
+		sg_set_page(table.sgl, cma_pages, size, 0);
+
+		dma_map_sgtable(dma_heap_get_dev(heap), &table, DMA_BIDIRECTIONAL, 0);
+		dma_unmap_sgtable(dma_heap_get_dev(heap), &table, DMA_BIDIRECTIONAL, 0);
+		sg_free_table(&table);
+	}
+
 	return ret;
 
 free_pages:
@@ -368,14 +404,45 @@ static int cma_heap_allocate(struct dma_heap *heap,
 	return ret;
 }
 
+static int cma_heap_allocate(struct dma_heap *heap,
+			     unsigned long len,
+			     unsigned long fd_flags,
+			     unsigned long heap_flags)
+{
+	return cma_heap_do_allocate(heap, len, fd_flags, heap_flags, false);
+}
+
+static int cma_uncached_heap_allocate(struct dma_heap *heap,
+				      unsigned long len,
+				      unsigned long fd_flags,
+				      unsigned long heap_flags)
+{
+	return cma_heap_do_allocate(heap, len, fd_flags, heap_flags, true);
+}
+
+/* Dummy function to be used until we can call coerce_mask_and_coherent */
+static int cma_uncached_heap_not_initialized(struct dma_heap *heap,
+					     unsigned long len,
+					     unsigned long fd_flags,
+					     unsigned long heap_flags)
+{
+	return -EBUSY;
+}
+
 static const struct dma_heap_ops cma_heap_ops = {
 	.allocate = cma_heap_allocate,
 };
 
+static struct dma_heap_ops cma_uncached_heap_ops = {
+	.allocate = cma_uncached_heap_not_initialized,
+};
+
 static int __add_cma_heap(struct cma *cma, void *data)
 {
 	struct cma_heap *cma_heap;
 	struct dma_heap_export_info exp_info;
+	const char *postfixed = "-uncached";
+	char *cma_name;
 
 	cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
 	if (!cma_heap)
@@ -394,6 +461,34 @@ static int __add_cma_heap(struct cma *cma, void *data)
 		return ret;
 	}
 
+	cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
+	if (!cma_heap)
+		return -ENOMEM;
+	cma_heap->cma = cma;
+
+	cma_name = kzalloc(strlen(cma_get_name(cma)) + strlen(postfixed) + 1, GFP_KERNEL);
+	if (!cma_name) {
+		kfree(cma_heap);
+		return -ENOMEM;
+	}
+
+	exp_info.name = strcat(strcpy(cma_name, cma_get_name(cma)), postfixed);
+	exp_info.ops = &cma_uncached_heap_ops;
+	exp_info.priv = cma_heap;
+
+	cma_heap->heap = dma_heap_add(&exp_info);
+	if (IS_ERR(cma_heap->heap)) {
+		int ret = PTR_ERR(cma_heap->heap);
+
+		kfree(cma_heap);
+		kfree(cma_name);
+		return ret;
+	}
+
+	dma_coerce_mask_and_coherent(dma_heap_get_dev(cma_heap->heap), DMA_BIT_MASK(64));
+	mb(); /* make sure we only set allocate after dma_mask is set */
+	cma_uncached_heap_ops.allocate = cma_uncached_heap_allocate;
+
 	return 0;
 }