From patchwork Thu May 30 14:08:54 2013
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 17303
Message-ID: <51A75D76.5090700@canonical.com>
Date: Thu, 30 May 2013 16:08:54 +0200
From: Maarten Lankhorst <maarten.lankhorst@canonical.com>
To: Inki Dae
Cc: linux-arch@vger.kernel.org, peterz@infradead.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, DRI mailing list,
    linaro-mm-sig@lists.linaro.org, robclark@gmail.com, rostedt@goodmis.org,
    tglx@linutronix.de, mingo@elte.hu, linux-media@vger.kernel.org
References: <20130528144420.4538.70725.stgit@patser>
    <20130528144839.4538.39821.stgit@patser>
Subject: Re: [Linaro-mm-sig] [PATCH v4 2/4] mutex: add support for
    wound/wait style locks, v5

On 29-05-13 12:33, Inki Dae wrote:
> Hi,
>
> Just minor comments
>
>> +Usage
>> +-----
>> +
>> +Three different ways to acquire locks within the same w/w class.
>> +Common definitions for methods #1 and #2:
>> +
>> +static DEFINE_WW_CLASS(ww_class);
>> +
>> +struct obj {
>> +	struct ww_mutex lock;
>> +	/* obj data */
>> +};
>> +
>> +struct obj_entry {
>> +	struct list_head *list;
>> +	struct obj *obj;
>> +};
>> +
>> +Method 1, using a list in execbuf->buffers that's not allowed to be
>> +reordered. This is useful if a list of required objects is already
>> +tracked somewhere. Furthermore the lock helper can propagate the
>> +-EALREADY return code back to the caller as a signal that an object
>> +is on the list twice. This is useful if the list is constructed from
>> +userspace input and the ABI requires userspace to not have duplicate
>> +entries (e.g. for a gpu commandbuffer submission ioctl).
>> +
>> +int lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
>> +{
>> +	struct obj *res_obj = NULL;
>> +	struct obj_entry *contended_entry = NULL;
>> +	struct obj_entry *entry;
>> +
>> +	ww_acquire_init(ctx, &ww_class);
>> +
>> +retry:
>> +	list_for_each_entry (list, entry) {
>> +		if (entry == res_obj) {

Indeed, the documentation was wrong. With the diff below it should almost
compile now. I'd rather not find out whether it actually does; it's meant
to be documentation!
diff --git a/Documentation/ww-mutex-design.txt b/Documentation/ww-mutex-design.txt
index 8bd1761..379739c 100644
--- a/Documentation/ww-mutex-design.txt
+++ b/Documentation/ww-mutex-design.txt
@@ -100,7 +100,7 @@ struct obj {
 };
 
 struct obj_entry {
-	struct list_head *list;
+	struct list_head head;
 	struct obj *obj;
 };
 
@@ -120,14 +120,14 @@ int lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
 	ww_acquire_init(ctx, &ww_class);
 
 retry:
-	list_for_each_entry (list, entry) {
-		if (entry == res_obj) {
+	list_for_each_entry (entry, list, head) {
+		if (entry->obj == res_obj) {
 			res_obj = NULL;
 			continue;
 		}
 		ret = ww_mutex_lock(&entry->obj->lock, ctx);
 		if (ret < 0) {
-			contended_obj = entry;
+			contended_entry = entry;
 			goto err;
 		}
 	}
@@ -136,7 +136,7 @@ retry:
 	return 0;
 
 err:
-	list_for_each_entry_continue_reverse (list, contended_entry, entry)
+	list_for_each_entry_continue_reverse (entry, list, head)
 		ww_mutex_unlock(&entry->obj->lock);
 
 	if (res_obj)
@@ -163,13 +163,13 @@ int lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
 
 	ww_acquire_init(ctx, &ww_class);
 
-	list_for_each_entry (list, entry) {
+	list_for_each_entry (entry, list, head) {
 		ret = ww_mutex_lock(&entry->obj->lock, ctx);
 		if (ret < 0) {
 			entry2 = entry;
-			list_for_each_entry_continue_reverse (list, entry2)
-				ww_mutex_unlock(&entry->obj->lock);
+			list_for_each_entry_continue_reverse (entry2, list, head)
+				ww_mutex_unlock(&entry2->obj->lock);
 
 			if (ret != -EDEADLK) {
 				ww_acquire_fini(ctx);
@@ -184,8 +184,8 @@ int lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
 			 * buf->next to the first unlocked entry,
 			 * restarting the for loop.
 			 */
-			list_del(&entry->list);
-			list_add(&entry->list, list);
+			list_del(&entry->head);
+			list_add(&entry->head, list);
 		}
 	}
 
@@ -199,7 +199,7 @@ void unlock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
 {
 	struct obj_entry *entry;
 
-	list_for_each_entry (list, entry)
+	list_for_each_entry (entry, list, head)
 		ww_mutex_unlock(&entry->obj->lock);
 
 	ww_acquire_fini(ctx);
@@ -244,22 +244,21 @@ struct obj {
 
 static DEFINE_WW_CLASS(ww_class);
 
-void __unlock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
+void __unlock_objs(struct list_head *list)
 {
-	struct obj entry;
+	struct obj *entry, *temp;
 
-	for_each_safe (list, entry) {
+	list_for_each_entry_safe (entry, temp, list, locked_list) {
 		/* need to do that before unlocking, since only the current
 		 * lock holder is allowed to use object */
-		list_del(entry->locked_list);
+		list_del(&entry->locked_list);
 		ww_mutex_unlock(entry->ww_mutex)
 	}
 }
 
 void lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
 {
-	struct list_head locked_buffers;
-	struct obj obj = NULL, entry;
+	struct obj *obj;
 
 	ww_acquire_init(ctx, &ww_class);
 
@@ -275,15 +274,15 @@ retry:
 			continue;
 		}
 		if (ret == -EDEADLK) {
-			__unlock_objs(list, ctx);
+			__unlock_objs(list);
 
 			ww_mutex_lock_slow(obj, ctx);
-			list_add(locked_buffers, entry->locked_list);
+			list_add(&entry->locked_list, list);
 			goto retry;
 		}
 
 		/* locked a new object, add it to the list */
-		list_add(locked_buffers, entry->locked_list);
+		list_add_tail(&entry->locked_list, list);
 	}
 
 	ww_acquire_done(ctx);
@@ -292,7 +291,7 @@ retry:
 
 void unlock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
 {
-	__unlock_objs(list, ctx);
+	__unlock_objs(list);
 	ww_acquire_fini(ctx);
 }