From patchwork Mon Jul 2 16:07:27 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Maarten Lankhorst
X-Patchwork-Id: 9748
Message-ID: <4FF1C73F.6040108@gmail.com>
Date: Mon, 02 Jul 2012 18:07:27 +0200
From: Maarten Lankhorst
To: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org, linux-media@vger.kernel.org
Subject: [Linaro-mm-sig] [RFC v2] implicit drm synchronization wip with dma-buf
List-Id: "Unified memory management interest group."

Well, V2 time! I was hinted to look at ttm_execbuf_util, and it does indeed
contain some nice code.

My goal this time was to extend dma-buf in a generic way. Some elements from
v1 remain, most notably a dma-buf used for syncing, but dma-buf syncing is now
expected to work in a more specific, slightly stricter way than in v1: instead
of each buffer having its own sync dma-buf, there is one per command
submission.

This submission hasn't been run-time tested yet, but I expect the API to be
used something like this.
Intended to be used like this:

	INIT_LIST_HEAD(&head);
	list_add_tail(&validate1->head, &head);
	list_add_tail(&validate2->head, &head);
	list_add_tail(&validate3->head, &head);

	r = dmabufmgr_eu_reserve_buffers(&head);
	if (r)
		return r;

	// add waits on cpu or gpu
	list_for_each_entry(validate, ...) {
		if (!validate->sync_buf)
			continue;

		// Check attachments to see if we already imported sync_buf
		// somewhere; if not, attach to it.
		// Wait until cur_seq - dmabuf->sync_val >= 0, either on the cpu
		// or as a command submitted to the gpu.
		// sync_buf itself is a dma-buf, so this should be trivial.
		// TODO: sync_buf should NEVER be validated; add is_sync_buf to dma_buf?

		// If this step fails: dmabufmgr_eu_backoff_reservation()
		// else:
		//	dmabufmgr_eu_fence_buffer_objects(our_own_sync_buf,
		//		hwchannel * max(minhwalign, 4), ++counter[hwchannel]);

		// XXX: Do we still require a minimum alignment? I set it to 16 for
		// nouveau, but it is no longer needed in this design, since it only
		// matters for writes, for which nouveau would already control the offset.
	}

	// Some time after the execbuffer was executed -- doesn't have to be right
	// away, but before our own counter is in danger of wrapping around --
	// grab dmabufmgr.lru_lock and clean up by unreffing sync_buf when
	// sync_buf == ownbuf, sync_ofs == ownofs, and sync_val == saved_counter.
	// In the meantime someone else, or even we ourselves, might have reserved
	// this dma_buf again, which is why all those checks are needed before
	// unreffing.

diff --git a/drivers/base/Makefile b/drivers/base/Makefile
index 5aa2d70..86e7598 100644
--- a/drivers/base/Makefile
+++ b/drivers/base/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o
 obj-y			+= power/
 obj-$(CONFIG_HAS_DMA)	+= dma-mapping.o
 obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
-obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o
+obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o dma-buf-mgr.o dma-buf-mgr-eu.o
 obj-$(CONFIG_ISA)	+= isa.o
 obj-$(CONFIG_FW_LOADER)	+= firmware_class.o
 obj-$(CONFIG_NUMA)	+= node.o
diff --git a/drivers/base/dma-buf-mgr-eu.c b/drivers/base/dma-buf-mgr-eu.c
new file mode 100644
index 0000000..27ebc68
--- /dev/null
+++ b/drivers/base/dma-buf-mgr-eu.c
@@ -0,0 +1,170 @@
+/*
+ * Copyright (C) 2012 Canonical Ltd
+ *
+ * Based on ttm_bo.c which bears the following copyright notice,
+ * but is dual licensed:
+ *
+ * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ **************************************************************************/
+
+#include
+#include
+#include
+
+static void dmabufmgr_eu_backoff_reservation_locked(struct list_head *list)
+{
+	struct dmabufmgr_validate *entry;
+
+	list_for_each_entry(entry, list, head) {
+		struct dma_buf *bo = entry->bo;
+		if (!entry->reserved)
+			continue;
+
+		entry->reserved = false;
+		atomic_set(&bo->reserved, 0);
+		wake_up_all(&bo->event_queue);
+		if (entry->sync_buf)
+			dma_buf_put(entry->sync_buf);
+		entry->sync_buf = NULL;
+	}
+}
+
+static int
+dmabufmgr_eu_wait_unreserved_locked(struct list_head *list,
+				    struct dma_buf *bo)
+{
+	int ret;
+
+	spin_unlock(&dmabufmgr.lru_lock);
+	ret = dmabufmgr_bo_wait_unreserved(bo, true);
+	spin_lock(&dmabufmgr.lru_lock);
+	if (unlikely(ret != 0))
+		dmabufmgr_eu_backoff_reservation_locked(list);
+	return ret;
+}
+
+void
+dmabufmgr_eu_backoff_reservation(struct list_head *list)
+{
+	struct dmabufmgr_validate *entry;
+
+	if (list_empty(list))
+		return;
+
+	entry = list_first_entry(list, struct dmabufmgr_validate, head);
+	spin_lock(&dmabufmgr.lru_lock);
+	dmabufmgr_eu_backoff_reservation_locked(list);
+	spin_unlock(&dmabufmgr.lru_lock);
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_eu_backoff_reservation);
+
+int
+dmabufmgr_eu_reserve_buffers(struct list_head *list)
+{
+	struct dmabufmgr_validate *entry;
+	int ret;
+	u32 val_seq;
+
+	if (list_empty(list))
+		return 0;
+
+	list_for_each_entry(entry, list, head) {
+		entry->reserved = false;
+		entry->sync_buf = NULL;
+	}
+
+retry:
+	spin_lock(&dmabufmgr.lru_lock);
+	val_seq = dmabufmgr.counter++;
+
+	list_for_each_entry(entry, list, head) {
+		struct dma_buf *bo = entry->bo;
+
+retry_this_bo:
+		ret = dmabufmgr_bo_reserve_locked(bo, true, true, true, val_seq);
+		switch (ret) {
+		case 0:
+			break;
+		case -EBUSY:
+			ret = dmabufmgr_eu_wait_unreserved_locked(list, bo);
+			if (unlikely(ret != 0)) {
+				spin_unlock(&dmabufmgr.lru_lock);
+				return ret;
+			}
+			goto retry_this_bo;
+		case -EAGAIN:
+			dmabufmgr_eu_backoff_reservation_locked(list);
+			spin_unlock(&dmabufmgr.lru_lock);
+			ret = dmabufmgr_bo_wait_unreserved(bo, true);
+			if (unlikely(ret != 0))
+				return ret;
+			goto retry;
+		default:
+			dmabufmgr_eu_backoff_reservation_locked(list);
+			spin_unlock(&dmabufmgr.lru_lock);
+			return ret;
+		}
+
+		entry->reserved = true;
+		if (bo->sync_buf)
+			get_dma_buf(bo->sync_buf);
+		entry->sync_buf = bo->sync_buf;
+		entry->sync_ofs = bo->sync_ofs;
+		entry->sync_val = bo->sync_val;
+	}
+	spin_unlock(&dmabufmgr.lru_lock);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_eu_reserve_buffers);
+
+void
+dmabufmgr_eu_fence_buffer_objects(struct dma_buf *sync_buf, u32 ofs, u32 seq, struct list_head *list)
+{
+	struct dmabufmgr_validate *entry;
+	struct dma_buf *bo;
+
+	if (list_empty(list) || WARN_ON(!sync_buf))
+		return;
+
+	spin_lock(&dmabufmgr.lru_lock);
+
+	list_for_each_entry(entry, list, head) {
+		bo = entry->bo;
+		dmabufmgr_bo_unreserve_locked(bo);
+		entry->reserved = false;
+		if (entry->sync_buf)
+			dma_buf_put(entry->sync_buf);
+		entry->sync_buf = NULL;
+
+		get_dma_buf(sync_buf);
+		bo->sync_buf = sync_buf;
+		bo->sync_ofs = ofs;
+		bo->sync_val = seq;
+	}
+
+	spin_unlock(&dmabufmgr.lru_lock);
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_eu_fence_buffer_objects);
diff --git a/drivers/base/dma-buf-mgr.c b/drivers/base/dma-buf-mgr.c
new file mode 100644
index 0000000..14756ff
--- /dev/null
+++ b/drivers/base/dma-buf-mgr.c
@@ -0,0 +1,149 @@
+/*
+ * Copyright (C) 2012 Canonical Ltd
+ *
+ * Based on ttm_bo.c which bears the following copyright notice,
+ * but is dual licensed:
+ *
+ * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ **************************************************************************/
+/*
+ * Authors: Thomas Hellstrom
+ */
+
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/* Based on ttm_bo.c with vm_lock and fence_lock removed
+ * lru_lock takes care of fence_lock as well
+ */
+struct dmabufmgr dmabufmgr = {
+	.lru_lock = __SPIN_LOCK_UNLOCKED(dmabufmgr.lru_lock),
+	.counter = 1,
+};
+
+int
+dmabufmgr_bo_reserve_locked(struct dma_buf *bo,
+			    bool interruptible, bool no_wait,
+			    bool use_sequence, u32 sequence)
+{
+	int ret;
+
+	while (unlikely(atomic_cmpxchg(&bo->reserved, 0, 1) != 0)) {
+		/**
+		 * Deadlock avoidance for multi-bo reserving.
+		 */
+		if (use_sequence && bo->seq_valid) {
+			/**
+			 * We've already reserved this one.
+			 */
+			if (unlikely(sequence == bo->val_seq))
+				return -EDEADLK;
+			/**
+			 * Already reserved by a thread that will not back
+			 * off for us. We need to back off.
+			 */
+			if (unlikely(sequence - bo->val_seq < (1 << 31)))
+				return -EAGAIN;
+		}
+
+		if (no_wait)
+			return -EBUSY;
+
+		spin_unlock(&dmabufmgr.lru_lock);
+		ret = dmabufmgr_bo_wait_unreserved(bo, interruptible);
+		spin_lock(&dmabufmgr.lru_lock);
+
+		if (unlikely(ret))
+			return ret;
+	}
+
+	if (use_sequence) {
+		/**
+		 * Wake up waiters that may need to recheck for deadlock,
+		 * if we decreased the sequence number.
+		 */
+		if (unlikely((bo->val_seq - sequence < (1 << 31))
+			     || !bo->seq_valid))
+			wake_up_all(&bo->event_queue);
+
+		bo->val_seq = sequence;
+		bo->seq_valid = true;
+	} else {
+		bo->seq_valid = false;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_bo_reserve_locked);
+
+int
+dmabufmgr_bo_reserve(struct dma_buf *bo,
+		     bool interruptible, bool no_wait,
+		     bool use_sequence, u32 sequence)
+{
+	int ret;
+
+	spin_lock(&dmabufmgr.lru_lock);
+	ret = dmabufmgr_bo_reserve_locked(bo, interruptible, no_wait,
+					  use_sequence, sequence);
+	spin_unlock(&dmabufmgr.lru_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_bo_reserve);
+
+int
+dmabufmgr_bo_wait_unreserved(struct dma_buf *bo, bool interruptible)
+{
+	if (interruptible) {
+		return wait_event_interruptible(bo->event_queue,
+						atomic_read(&bo->reserved) == 0);
+	} else {
+		wait_event(bo->event_queue, atomic_read(&bo->reserved) == 0);
+		return 0;
+	}
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_bo_wait_unreserved);
+
+void dmabufmgr_bo_unreserve_locked(struct dma_buf *bo)
+{
+	atomic_set(&bo->reserved, 0);
+	wake_up_all(&bo->event_queue);
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_bo_unreserve_locked);
+
+void dmabufmgr_bo_unreserve(struct dma_buf *bo)
+{
+	spin_lock(&dmabufmgr.lru_lock);
+	dmabufmgr_bo_unreserve_locked(bo);
+	spin_unlock(&dmabufmgr.lru_lock);
+}
+EXPORT_SYMBOL_GPL(dmabufmgr_bo_unreserve);
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 24e88fe..01c4f71 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -40,6 +40,9 @@ static int dma_buf_release(struct inode *inode, struct file *file)
 	dmabuf = file->private_data;
 
 	dmabuf->ops->release(dmabuf);
+	BUG_ON(waitqueue_active(&dmabuf->event_queue));
+	if (dmabuf->sync_buf)
+		dma_buf_put(dmabuf->sync_buf);
 	kfree(dmabuf);
 	return 0;
 }
@@ -119,6 +122,7 @@ struct dma_buf *dma_buf_export(void *priv, const struct dma_buf_ops *ops,
 	mutex_init(&dmabuf->lock);
 	INIT_LIST_HEAD(&dmabuf->attachments);
+	init_waitqueue_head(&dmabuf->event_queue);
 
 	return dmabuf;
 }
diff --git a/include/linux/dma-buf-mgr.h b/include/linux/dma-buf-mgr.h
new file mode 100644
index 0000000..b26462e
--- /dev/null
+++ b/include/linux/dma-buf-mgr.h
@@ -0,0 +1,84 @@
+/*
+ * Header file for dma buffer sharing framework.
+ *
+ * Copyright(C) 2011 Linaro Limited. All rights reserved.
+ * Author: Sumit Semwal
+ *
+ * Many thanks to linaro-mm-sig list, and specially
+ * Arnd Bergmann, Rob Clark and
+ * Daniel Vetter for their support in creation and
+ * refining of this idea.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __DMA_BUF_MGR_H__
+#define __DMA_BUF_MGR_H__
+
+#include
+#include
+
+/** Size of each hwcontext in synchronization dma-buf */
+#define DMABUFMGR_HWCONTEXT_SYNC_ALIGN 16
+
+struct dmabufmgr {
+	spinlock_t lru_lock;
+
+	u32 counter;
+};
+extern struct dmabufmgr dmabufmgr;
+
+extern int
+dmabufmgr_bo_reserve_locked(struct dma_buf *bo,
+			    bool interruptible, bool no_wait,
+			    bool use_sequence, u32 sequence);
+
+extern int
+dmabufmgr_bo_reserve(struct dma_buf *bo,
+		     bool interruptible, bool no_wait,
+		     bool use_sequence, u32 sequence);
+
+extern void
+dmabufmgr_bo_unreserve_locked(struct dma_buf *bo);
+
+extern void
+dmabufmgr_bo_unreserve(struct dma_buf *bo);
+
+extern int
+dmabufmgr_bo_wait_unreserved(struct dma_buf *bo, bool interruptible);
+
+/* execbuf util support for reservations
+ * matches ttm_execbuf_util
+ */
+struct dmabufmgr_validate {
+	struct list_head head;
+	struct dma_buf *bo;
+	bool reserved;
+
+	/* If non-null, check for attachments */
+	struct dma_buf *sync_buf;
+	u32 sync_ofs, sync_val;
+};
+
+/** reserve a linked list of struct dmabufmgr_validate entries */
+extern int
+dmabufmgr_eu_reserve_buffers(struct list_head *list);
+
+/** Undo reservation */
+extern void
+dmabufmgr_eu_backoff_reservation(struct list_head *list);
+
+/** Commit reservation */
+extern void
+dmabufmgr_eu_fence_buffer_objects(struct dma_buf *sync_buf, u32 ofs, u32 val, struct list_head *list);
+
+#endif /* __DMA_BUF_MGR_H__ */
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index eb48f38..b2ab395 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -113,6 +113,8 @@ struct dma_buf_ops {
  * @attachments: list of dma_buf_attachment that denotes all devices attached.
  * @ops: dma_buf_ops associated with this buffer object.
  * @priv: exporter specific private data for this buffer object.
+ * @bufmgr_entry: used by dmabufmgr
+ * @bufdev: used by dmabufmgr
  */
 struct dma_buf {
 	size_t size;
@@ -122,6 +124,24 @@ struct dma_buf {
 	/* mutex to serialize list manipulation and attach/detach */
 	struct mutex lock;
 	void *priv;
+
+	/** dmabufmgr members */
+	wait_queue_head_t event_queue;
+
+	/**
+	 * dmabufmgr members protected by the dmabufmgr::lru_lock.
+	 */
+	u32 val_seq;
+	bool seq_valid;
+
+	struct dma_buf *sync_buf;
+	u32 sync_ofs, sync_val;
+
+	/**
+	 * dmabufmgr members protected by the dmabufmgr::lru_lock
+	 * only when written to.
+	 */
+	atomic_t reserved;
 };
 
 /**
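
[Editorial sketch, not part of the patch.] To make the intended call sequence
from the cover text concrete, here is a minimal, untested sketch of how a
driver's execbuf path might drive the proposed dmabufmgr_eu_* API. It only
uses the functions and fields introduced above; my_gpu_wait_seq(), my_submit(),
chan_sync_buf and chan_counter are hypothetical placeholders for driver-specific
code, and the per-buffer wait is shown as a single call rather than a real
CPU- or GPU-side wait.

	#include <linux/dma-buf.h>
	#include <linux/dma-buf-mgr.h>
	#include <linux/list.h>

	/* Placeholders for driver-specific pieces (hypothetical). */
	int my_gpu_wait_seq(struct dma_buf *sync_buf, u32 ofs, u32 val);
	int my_submit(u32 hwchannel);

	static int my_execbuf(struct list_head *validate_list,
			      struct dma_buf *chan_sync_buf,
			      u32 hwchannel, u32 *chan_counter)
	{
		struct dmabufmgr_validate *v;
		int ret;

		/* 1. Reserve every buffer of this submission atomically. */
		ret = dmabufmgr_eu_reserve_buffers(validate_list);
		if (ret)
			return ret;

		/* 2. Wait for whatever previously fenced these buffers. */
		list_for_each_entry(v, validate_list, head) {
			if (!v->sync_buf)
				continue;
			/* Wait for v->sync_buf at v->sync_ofs to reach v->sync_val,
			 * either on the cpu or as a command emitted to the gpu. */
			ret = my_gpu_wait_seq(v->sync_buf, v->sync_ofs, v->sync_val);
			if (ret) {
				dmabufmgr_eu_backoff_reservation(validate_list);
				return ret;
			}
		}

		/* 3. Submit the commands that use the reserved buffers. */
		ret = my_submit(hwchannel);
		if (ret) {
			dmabufmgr_eu_backoff_reservation(validate_list);
			return ret;
		}

		/* 4. Fence: unreserve and point every buffer at our own sync_buf,
		 * at this channel's offset, with the new sequence value. */
		dmabufmgr_eu_fence_buffer_objects(chan_sync_buf,
						  hwchannel * DMABUFMGR_HWCONTEXT_SYNC_ALIGN,
						  ++(*chan_counter),
						  validate_list);
		return 0;
	}

The cleanup of our own sync_buf references once the counter risks wrapping,
as described in the cover text, is omitted here.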