From patchwork Fri Dec 13 22:23:43 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 22374
From: John Stultz
To: LKML
Cc: Greg KH, Android Kernel Team, Sumit Semwal, Jesse Barker,
 Colin Cross, Rebecca Schultz Zavin, John Stultz
Subject: [PATCH 009/115] gpu: ion: Allocate the sg_table at creation time rather than dynamically
Date: Fri, 13 Dec 2013 14:23:43 -0800
Message-Id: <1386973529-4884-10-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>
References: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>

From: Rebecca Schultz Zavin

Rather than calling map_dma on the allocations dynamically, this patch
switches to creating the sg_table at the time the buffer is created.
This is necessary because in future updates the sg_table will be used
for cache maintenance.
Signed-off-by: Rebecca Schultz Zavin
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz
---
 drivers/staging/android/ion/ion.c | 79 ++++++++++++++++++++-------------------
 1 file changed, 41 insertions(+), 38 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 8937618..b95202b 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -135,6 +135,7 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
 					     unsigned long flags)
 {
 	struct ion_buffer *buffer;
+	struct sg_table *table;
 	int ret;
 
 	buffer = kzalloc(sizeof(struct ion_buffer), GFP_KERNEL);
@@ -149,6 +150,15 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
 		kfree(buffer);
 		return ERR_PTR(ret);
 	}
+
+	table = buffer->heap->ops->map_dma(buffer->heap, buffer);
+	if (IS_ERR_OR_NULL(table)) {
+		heap->ops->free(buffer);
+		kfree(buffer);
+		return ERR_PTR(PTR_ERR(table));
+	}
+	buffer->sg_table = table;
+
 	buffer->dev = dev;
 	buffer->size = len;
 	mutex_init(&buffer->lock);
@@ -164,9 +174,7 @@ static void ion_buffer_destroy(struct kref *kref)
 	if (WARN_ON(buffer->kmap_cnt > 0))
 		buffer->heap->ops->unmap_kernel(buffer->heap, buffer);
-	if (WARN_ON(buffer->dmap_cnt > 0))
-		buffer->heap->ops->unmap_dma(buffer->heap, buffer);
-
+	buffer->heap->ops->unmap_dma(buffer->heap, buffer);
 	buffer->heap->ops->free(buffer);
 	mutex_lock(&dev->lock);
 	rb_erase(&buffer->node, &dev->buffers);
@@ -346,6 +354,7 @@ struct ion_handle *ion_alloc(struct ion_client *client, size_t len,
 		mutex_unlock(&client->lock);
 	}
 
+
 	return handle;
 }
 
@@ -607,53 +616,42 @@ void ion_client_destroy(struct ion_client *client)
 	kfree(client);
 }
 
-static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
-					enum dma_data_direction direction)
+struct sg_table *ion_map_dma(struct ion_client *client,
+			     struct ion_handle *handle)
 {
-	struct dma_buf *dmabuf = attachment->dmabuf;
-	struct ion_buffer *buffer = dmabuf->priv;
+	struct ion_buffer *buffer;
 	struct sg_table *table;
 
-	mutex_lock(&buffer->lock);
-
-	if (!buffer->heap->ops->map_dma) {
-		pr_err("%s: map_dma is not implemented by this heap.\n",
+	mutex_lock(&client->lock);
+	if (!ion_handle_validate(client, handle)) {
+		pr_err("%s: invalid handle passed to map_dma.\n",
 		       __func__);
-		mutex_unlock(&buffer->lock);
-		return ERR_PTR(-ENODEV);
-	}
-	/* if an sg list already exists for this buffer just return it */
-	if (buffer->dmap_cnt) {
-		table = buffer->sg_table;
-		goto end;
+		mutex_unlock(&client->lock);
+		return ERR_PTR(-EINVAL);
 	}
-
-	/* otherwise call into the heap to create one */
-	table = buffer->heap->ops->map_dma(buffer->heap, buffer);
-	if (IS_ERR_OR_NULL(table))
-		goto err;
-	buffer->sg_table = table;
-end:
-	buffer->dmap_cnt++;
-err:
-	mutex_unlock(&buffer->lock);
+	buffer = handle->buffer;
+	table = buffer->sg_table;
+	mutex_unlock(&client->lock);
 	return table;
 }
 
-static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
-			      struct sg_table *table,
-			      enum dma_data_direction direction)
+void ion_unmap_dma(struct ion_client *client, struct ion_handle *handle)
+{
+}
+
+static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
+					enum dma_data_direction direction)
 {
 	struct dma_buf *dmabuf = attachment->dmabuf;
 	struct ion_buffer *buffer = dmabuf->priv;
 
-	mutex_lock(&buffer->lock);
-	buffer->dmap_cnt--;
-	if (!buffer->dmap_cnt) {
-		buffer->heap->ops->unmap_dma(buffer->heap, buffer);
-		buffer->sg_table = NULL;
-	}
-	mutex_unlock(&buffer->lock);
+	return buffer->sg_table;
+}
+
+static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
+			      struct sg_table *table,
+			      enum dma_data_direction direction)
+{
 }
 
 static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
@@ -987,6 +985,11 @@ void ion_device_add_heap(struct ion_device *dev, struct ion_heap *heap)
 	struct rb_node *parent = NULL;
 	struct ion_heap *entry;
 
+	if (!heap->ops->allocate || !heap->ops->free || !heap->ops->map_dma ||
+	    !heap->ops->unmap_dma)
+		pr_err("%s: can not add heap with invalid ops struct.\n",
+		       __func__);
+
 	heap->dev = dev;
 	mutex_lock(&dev->lock);
 	while (*p) {