From patchwork Tue Dec 17 05:07:53 2013
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 22552
From: John Stultz <john.stultz@linaro.org>
To: LKML
Cc: John Stultz, Colin Cross, Android Kernel Team, Greg KH
Subject: [PATCH 3/3] staging: ion: Avoid using rt_mutexes directly.
Date: Mon, 16 Dec 2013 21:07:53 -0800
Message-Id: <1387256873-21350-4-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1387256873-21350-1-git-send-email-john.stultz@linaro.org>
References: <1387256873-21350-1-git-send-email-john.stultz@linaro.org>

RT_MUTEXES can be configured out of the kernel, causing compile
problems with ION.

To quote Colin:
"rt_mutexes were added with the deferred freeing feature. Heaps need
to return zeroed memory to userspace, but zeroing the memory on every
allocation was causing performance issues. We added a SCHED_IDLE thread
to zero memory in the background after freeing, but locking the heap
from the SCHED_IDLE thread might block a high priority allocation
thread for a long time.

The lock is only used to protect the heap's free_list and
free_list_size members, and is not held for any long or sleeping
operations. Converting to a spinlock should prevent priority inversion
without using the rt_mutex. I'd also rename it to free_lock so it
doesn't get used as a general heap lock."

Thus this patch converts the rt_mutex usage to a spinlock and renames
the lock to free_lock, to make its purpose clearer.

I also had to change a bit of logic in ion_heap_freelist_drain():
despite the loop being a list_for_each_entry_safe(), I was still seeing
list corruption or buffer sg table corruption if I dropped the lock
before calling ion_buffer_destroy(). Since I haven't been able to sort
out exactly why, I borrowed the loop structure from
ion_heap_deferred_free(), and that works in my testing without issue.
I'm not sure if it's the mixing of list traversal methods that causes
the issue? Thoughts would be appreciated.
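For anyone who wants to poke at the locking pattern outside the kernel,
below is a minimal, self-contained userspace sketch of what the patch
converts to: the lock covers only the free list and its size, and is
dropped around the expensive destroy step. Everything in it (the pthread
spinlock, the hand-rolled list, the buffer_destroy() helper) is a
simplified stand-in for the kernel primitives and the ION structures,
not the ION code itself; it builds with gcc -pthread.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct buffer {
	size_t size;
	struct buffer *next;		/* simplified free-list link */
};

struct heap {
	pthread_spinlock_t free_lock;	/* protects free_list and free_list_size only */
	struct buffer *free_list;
	size_t free_list_size;
};

/* Stand-in for ion_buffer_destroy(): potentially slow, never called with the lock held. */
static void buffer_destroy(struct buffer *buf)
{
	free(buf);
}

static size_t freelist_drain(struct heap *heap, size_t size)
{
	size_t total_drained = 0;

	pthread_spin_lock(&heap->free_lock);
	if (size == 0)
		size = heap->free_list_size;

	while (total_drained < size && heap->free_list != NULL) {
		/* Pop the first entry while the lock is held... */
		struct buffer *buf = heap->free_list;

		heap->free_list = buf->next;
		heap->free_list_size -= buf->size;
		total_drained += buf->size;

		/* ...and drop the lock around the expensive destroy step. */
		pthread_spin_unlock(&heap->free_lock);
		buffer_destroy(buf);
		pthread_spin_lock(&heap->free_lock);
	}
	pthread_spin_unlock(&heap->free_lock);

	return total_drained;
}

int main(void)
{
	struct heap heap = { .free_list = NULL, .free_list_size = 0 };

	pthread_spin_init(&heap.free_lock, PTHREAD_PROCESS_PRIVATE);

	/* Fake a few freed buffers so the drain has something to walk. */
	for (int i = 0; i < 4; i++) {
		struct buffer *buf = malloc(sizeof(*buf));

		buf->size = 4096;
		buf->next = heap.free_list;
		heap.free_list = buf;
		heap.free_list_size += buf->size;
	}

	printf("drained %zu bytes\n", freelist_drain(&heap, 0));
	pthread_spin_destroy(&heap.free_lock);
	return 0;
}

The only point it is meant to show is where the lock is held and where it
is not; the kernel version additionally has to cope with the deferred-free
kthread running concurrently.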
Cc: Colin Cross
Cc: Android Kernel Team
Cc: Greg KH
Reported-by: Jim Davis
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 drivers/staging/android/ion/ion_heap.c | 31 +++++++++++++++++++------------
 drivers/staging/android/ion/ion_priv.h |  2 +-
 2 files changed, 20 insertions(+), 13 deletions(-)

diff --git a/drivers/staging/android/ion/ion_heap.c b/drivers/staging/android/ion/ion_heap.c
index 9cf5622..72fe74b 100644
--- a/drivers/staging/android/ion/ion_heap.c
+++ b/drivers/staging/android/ion/ion_heap.c
@@ -160,10 +160,10 @@ int ion_heap_pages_zero(struct page *page, size_t size, pgprot_t pgprot)
 
 void ion_heap_freelist_add(struct ion_heap *heap, struct ion_buffer *buffer)
 {
-	rt_mutex_lock(&heap->lock);
+	spin_lock(&heap->free_lock);
 	list_add(&buffer->list, &heap->free_list);
 	heap->free_list_size += buffer->size;
-	rt_mutex_unlock(&heap->lock);
+	spin_unlock(&heap->free_lock);
 	wake_up(&heap->waitqueue);
 }
 
@@ -171,34 +171,41 @@ size_t ion_heap_freelist_size(struct ion_heap *heap)
 {
 	size_t size;
 
-	rt_mutex_lock(&heap->lock);
+	spin_lock(&heap->free_lock);
 	size = heap->free_list_size;
-	rt_mutex_unlock(&heap->lock);
+	spin_unlock(&heap->free_lock);
 
 	return size;
 }
 
 size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size)
 {
-	struct ion_buffer *buffer, *tmp;
+	struct ion_buffer *buffer;
 	size_t total_drained = 0;
 
 	if (ion_heap_freelist_size(heap) == 0)
 		return 0;
 
-	rt_mutex_lock(&heap->lock);
+	spin_lock(&heap->free_lock);
 	if (size == 0)
 		size = heap->free_list_size;
 
-	list_for_each_entry_safe(buffer, tmp, &heap->free_list, list) {
+	while (true) {
 		if (total_drained >= size)
 			break;
+		if (list_empty(&heap->free_list))
+			break;
+
+		buffer = list_first_entry(&heap->free_list, struct ion_buffer,
+					  list);
 		list_del(&buffer->list);
 		heap->free_list_size -= buffer->size;
 		total_drained += buffer->size;
+		spin_unlock(&heap->free_lock);
 		ion_buffer_destroy(buffer);
+		spin_lock(&heap->free_lock);
 	}
-	rt_mutex_unlock(&heap->lock);
+	spin_unlock(&heap->free_lock);
 
 	return total_drained;
 }
@@ -213,16 +220,16 @@ static int ion_heap_deferred_free(void *data)
 		wait_event_freezable(heap->waitqueue,
 				     ion_heap_freelist_size(heap) > 0);
 
-		rt_mutex_lock(&heap->lock);
+		spin_lock(&heap->free_lock);
 		if (list_empty(&heap->free_list)) {
-			rt_mutex_unlock(&heap->lock);
+			spin_unlock(&heap->free_lock);
 			continue;
 		}
 		buffer = list_first_entry(&heap->free_list, struct ion_buffer,
 					  list);
 		list_del(&buffer->list);
 		heap->free_list_size -= buffer->size;
-		rt_mutex_unlock(&heap->lock);
+		spin_unlock(&heap->free_lock);
 		ion_buffer_destroy(buffer);
 	}
 
@@ -235,7 +242,7 @@ int ion_heap_init_deferred_free(struct ion_heap *heap)
 
 	INIT_LIST_HEAD(&heap->free_list);
 	heap->free_list_size = 0;
-	rt_mutex_init(&heap->lock);
+	spin_lock_init(&heap->free_lock);
 	init_waitqueue_head(&heap->waitqueue);
 	heap->task = kthread_run(ion_heap_deferred_free, heap, "%s",
 				 heap->name);
diff --git a/drivers/staging/android/ion/ion_priv.h b/drivers/staging/android/ion/ion_priv.h
index fc8a4c3..d986739 100644
--- a/drivers/staging/android/ion/ion_priv.h
+++ b/drivers/staging/android/ion/ion_priv.h
@@ -159,7 +159,7 @@ struct ion_heap {
 	struct shrinker shrinker;
 	struct list_head free_list;
 	size_t free_list_size;
-	struct rt_mutex lock;
+	spinlock_t free_lock;
 	wait_queue_head_t waitqueue;
 	struct task_struct *task;
 	int (*debug_show)(struct ion_heap *heap, struct seq_file *, void *);