From patchwork Fri Dec 13 22:24:31 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 22422
From: John Stultz <john.stultz@linaro.org>
To: LKML
Cc: Greg KH, Android Kernel Team, Sumit Semwal, Jesse Barker,
 Colin Cross, Rebecca Schultz Zavin, John Stultz
Subject: [PATCH 057/115] gpu: ion: Modify zeroing code so it only allocates address space once
Date: Fri, 13 Dec 2013 14:24:31 -0800
Message-Id: <1386973529-4884-58-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>
References: <1386973529-4884-1-git-send-email-john.stultz@linaro.org>

From: Rebecca Schultz Zavin

vmap/vunmap spend a significant amount of time allocating the address
space to map into. Rather than allocating address space for each page,
allocate it once for the entire allocation and then just map and unmap
each page into that address space.

Signed-off-by: Rebecca Schultz Zavin
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz
---
(A short standalone sketch contrasting the two mapping patterns follows
the diff.)

 drivers/staging/android/ion/ion_system_heap.c | 28 ++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
index 89247cf..e54307f 100644
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -91,7 +91,7 @@ static struct page *alloc_buffer_page(struct ion_system_heap *heap,
 
 static void free_buffer_page(struct ion_system_heap *heap,
 			     struct ion_buffer *buffer, struct page *page,
-			     unsigned int order)
+			     unsigned int order, struct vm_struct *vm_struct)
 {
 	bool cached = ion_buffer_cached(buffer);
 	bool split_pages = ion_buffer_fault_user_mappings(buffer);
@@ -105,10 +105,13 @@ static void free_buffer_page(struct ion_system_heap *heap,
 		   purpose is to keep the pages out of the cache */
 		for (i = 0; i < (1 << order); i++) {
 			struct page *sub_page = page + i;
-			void *addr = vmap(&sub_page, 1, VM_MAP,
-					  pgprot_writecombine(PAGE_KERNEL));
-			memset(addr, 0, PAGE_SIZE);
-			vunmap(addr);
+			struct page **pages = &sub_page;
+			map_vm_area(vm_struct,
+				    pgprot_writecombine(PAGE_KERNEL),
+				    &pages);
+			memset(vm_struct->addr, 0, PAGE_SIZE);
+			unmap_kernel_range((unsigned long)vm_struct->addr,
+					   PAGE_SIZE);
 		}
 		ion_page_pool_free(pool, page);
 	} else if (split_pages) {
@@ -164,6 +167,8 @@ static int ion_system_heap_allocate(struct ion_heap *heap,
 	long size_remaining = PAGE_ALIGN(size);
 	unsigned int max_order = orders[0];
 	bool split_pages = ion_buffer_fault_user_mappings(buffer);
+	struct vm_struct *vm_struct;
+	pte_t *ptes;
 
 	INIT_LIST_HEAD(&pages);
 	while (size_remaining > 0) {
@@ -211,10 +216,13 @@ static int ion_system_heap_allocate(struct ion_heap *heap,
 err1:
 	kfree(table);
 err:
+	vm_struct = get_vm_area(PAGE_SIZE, &ptes);
 	list_for_each_entry(info, &pages, list) {
-		free_buffer_page(sys_heap, buffer, info->page, info->order);
+		free_buffer_page(sys_heap, buffer, info->page, info->order,
+				 vm_struct);
 		kfree(info);
 	}
+	free_vm_area(vm_struct);
 	return -ENOMEM;
 }
 
@@ -227,10 +235,16 @@ void ion_system_heap_free(struct ion_buffer *buffer)
 	struct sg_table *table = buffer->sg_table;
 	struct scatterlist *sg;
 	LIST_HEAD(pages);
+	struct vm_struct *vm_struct;
+	pte_t *ptes;
 	int i;
 
+	vm_struct = get_vm_area(PAGE_SIZE, &ptes);
+
 	for_each_sg(table->sgl, sg, table->nents, i)
-		free_buffer_page(sys_heap, buffer, sg_page(sg), get_order(sg_dma_len(sg)));
+		free_buffer_page(sys_heap, buffer, sg_page(sg),
+				 get_order(sg_dma_len(sg)), vm_struct);
+	free_vm_area(vm_struct);
 	sg_free_table(table);
 	kfree(table);
 }
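
Not part of the patch, a minimal standalone sketch of the before/after
zeroing patterns the commit message describes. The function names
(ion_zero_pages_percall, ion_zero_pages_once) and the npages parameter
are invented for illustration, and error handling is reduced to the
essentials. The calls follow the 3.13-era vmalloc API this series
targets (map_vm_area() still took a struct page ***); the sketch uses
alloc_vm_area(), the mainline helper of that era with a (size, pte_t **)
shape, whereas the patch itself writes get_vm_area(PAGE_SIZE, &ptes),
which does not match the mainline get_vm_area(size, flags) prototype.

#include <linux/mm.h>
#include <linux/string.h>
#include <linux/vmalloc.h>

/* Before: one vmap()/vunmap() pair per page.  Every iteration has to
 * find a free range in the vmalloc area and later give it back, and
 * that address-space bookkeeping is the expensive part. */
static void ion_zero_pages_percall(struct page **pages, int npages)
{
	int i;

	for (i = 0; i < npages; i++) {
		void *addr = vmap(&pages[i], 1, VM_MAP,
				  pgprot_writecombine(PAGE_KERNEL));

		if (!addr)
			continue;	/* sketch only: skip on failure */
		memset(addr, 0, PAGE_SIZE);
		vunmap(addr);
	}
}

/* After: reserve one page of kernel address space up front, then only
 * rewrite its page-table entry for each page.  The range itself is
 * allocated and freed exactly once for the whole buffer. */
static void ion_zero_pages_once(struct page **pages, int npages)
{
	struct vm_struct *vm_struct;
	pte_t *ptes;
	int i;

	vm_struct = alloc_vm_area(PAGE_SIZE, &ptes);
	if (!vm_struct)
		return;

	for (i = 0; i < npages; i++) {
		/* map_vm_area() advances the pointer it is handed, so
		 * use a scratch cursor each time around the loop. */
		struct page **cursor = &pages[i];

		if (map_vm_area(vm_struct, pgprot_writecombine(PAGE_KERNEL),
				&cursor))
			continue;
		memset(vm_struct->addr, 0, PAGE_SIZE);
		unmap_kernel_range((unsigned long)vm_struct->addr, PAGE_SIZE);
	}
	free_vm_area(vm_struct);
}

The win is that unmap_kernel_range() only clears the PTE (and flushes
the TLB) for a range that stays reserved, instead of returning the range
to the vmalloc allocator on every page. The patch applies the same idea
to free_buffer_page(), with the caller owning the vm_struct so a single
reservation covers every page of the buffer being freed.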