From patchwork Fri Dec 13 22:23:41 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 22372
From: John Stultz
To: LKML
Cc: Greg KH, Android Kernel Team, Sumit Semwal, Jesse Barker, Colin Cross,
    Rebecca Schultz Zavin, John Stultz
Subject: [PATCH 007/115] gpu: ion: Use alloc_pages instead of vmalloc from the system heap
Date: Fri, 13 Dec 2013 14:23:41 -0800
Message-Id: <1386973529-4884-8-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>
References: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>

From: Rebecca Schultz Zavin

With this change the ion_system_heap will only use kernel address space
when the memory is mapped into the kernel (a rare case).
Signed-off-by: Rebecca Schultz Zavin
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz
---
 drivers/staging/android/ion/ion_system_heap.c | 94 +++++++++++++++++----------
 1 file changed, 61 insertions(+), 33 deletions(-)

diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
index d7e0fa0..3383a88 100644
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -27,74 +27,102 @@ static int ion_system_heap_allocate(struct ion_heap *heap,
 				    unsigned long size, unsigned long align,
 				    unsigned long flags)
 {
-	buffer->priv_virt = vmalloc_user(size);
-	if (!buffer->priv_virt)
-		return -ENOMEM;
-	return 0;
-}
-
-void ion_system_heap_free(struct ion_buffer *buffer)
-{
-	vfree(buffer->priv_virt);
-}
-
-struct sg_table *ion_system_heap_map_dma(struct ion_heap *heap,
-					 struct ion_buffer *buffer)
-{
 	struct sg_table *table;
 	struct scatterlist *sg;
-	int i;
-	int npages = PAGE_ALIGN(buffer->size) / PAGE_SIZE;
-	void *vaddr = buffer->priv_virt;
-	int ret;
+	int i, j;
+	int npages = PAGE_ALIGN(size) / PAGE_SIZE;
 
-	table = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
+	table = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
 	if (!table)
-		return ERR_PTR(-ENOMEM);
-	ret = sg_alloc_table(table, npages, GFP_KERNEL);
-	if (ret)
+		return -ENOMEM;
+	i = sg_alloc_table(table, npages, GFP_KERNEL);
+	if (i)
 		goto err0;
 	for_each_sg(table->sgl, sg, table->nents, i) {
 		struct page *page;
-		page = vmalloc_to_page(vaddr);
-		if (!page) {
-			ret = -ENOMEM;
+		page = alloc_page(GFP_KERNEL);
+		if (!page)
 			goto err1;
-		}
 		sg_set_page(sg, page, PAGE_SIZE, 0);
-		vaddr += PAGE_SIZE;
 	}
-	return table;
+	buffer->priv_virt = table;
+	return 0;
 err1:
+	for_each_sg(table->sgl, sg, i, j)
+		__free_page(sg_page(sg));
 	sg_free_table(table);
 err0:
 	kfree(table);
-	return ERR_PTR(ret);
+	return -ENOMEM;
 }
 
-void ion_system_heap_unmap_dma(struct ion_heap *heap,
-			       struct ion_buffer *buffer)
+void ion_system_heap_free(struct ion_buffer *buffer)
 {
+	int i;
+	struct scatterlist *sg;
+	struct sg_table *table = buffer->priv_virt;
+
+	for_each_sg(table->sgl, sg, table->nents, i)
+		__free_page(sg_page(sg));
 	if (buffer->sg_table)
 		sg_free_table(buffer->sg_table);
 	kfree(buffer->sg_table);
 }
 
+struct sg_table *ion_system_heap_map_dma(struct ion_heap *heap,
+					 struct ion_buffer *buffer)
+{
+	return buffer->priv_virt;
+}
+
+void ion_system_heap_unmap_dma(struct ion_heap *heap,
+			       struct ion_buffer *buffer)
+{
+	return;
+}
+
 void *ion_system_heap_map_kernel(struct ion_heap *heap,
 				 struct ion_buffer *buffer)
 {
-	return buffer->priv_virt;
+	struct scatterlist *sg;
+	int i;
+	void *vaddr;
+	struct sg_table *table = buffer->priv_virt;
+	struct page **pages = kmalloc(sizeof(struct page *) * table->nents,
+				      GFP_KERNEL);
+
+	for_each_sg(table->sgl, sg, table->nents, i)
+		pages[i] = sg_page(sg);
+	vaddr = vmap(pages, table->nents, VM_MAP, PAGE_KERNEL);
+	kfree(pages);
+
+	return vaddr;
 }
 
 void ion_system_heap_unmap_kernel(struct ion_heap *heap,
 				  struct ion_buffer *buffer)
 {
+	vunmap(buffer->vaddr);
 }
 
 int ion_system_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer,
 			     struct vm_area_struct *vma)
 {
-	return remap_vmalloc_range(vma, buffer->priv_virt, vma->vm_pgoff);
+	struct sg_table *table = buffer->priv_virt;
+	unsigned long addr = vma->vm_start;
+	unsigned long offset = vma->vm_pgoff;
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		if (offset) {
+			offset--;
+			continue;
+		}
+		vm_insert_page(vma, addr, sg_page(sg));
+		addr += PAGE_SIZE;
+	}
+	return 0;
 }
 
 static struct ion_heap_ops vmalloc_ops = {