From patchwork Thu Mar 15 01:32:28 2012
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 7301
From: Rob Clark
To: linaro-mm-sig@lists.linaro.org, dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org
Cc: patches@linaro.org, sumit.semwal@linaro.org, daniel@ffwll.ch, rschultz@google.com, Rob Clark
Subject: [PATCH] RFC: dma-buf: userspace mmap support
Date: Wed, 14 Mar 2012 20:32:28 -0500
Message-Id: <1331775148-5001-1-git-send-email-rob.clark@linaro.org>
X-Mailer: git-send-email 1.7.5.4

From: Rob Clark

Enable optional userspace access to dma-buf buffers via mmap() on the
dma-buf file descriptor.  Userspace access to the buffer should be
bracketed with DMA_BUF_IOCTL_{PREPARE,FINISH}_ACCESS ioctl calls to give
the exporting driver a chance to deal with cache synchronization and the
like for cached userspace mappings, without resorting to page-faulting
tricks.

The reasoning behind this is that, while DRM drivers tend to have all the
mechanisms in place for page-faulting tricks, other driver subsystems may
not.  In addition, while page-faulting tricks make userspace simpler, they
come with some overhead.

In all cases, the mmap() call is allowed to fail, and the associated
dma_buf_ops are optional: mmap() fails if the exporter does not implement
the mmap() op, and the {prepare,finish}_access() ops are optional in
either case.

For now the prepare/finish access ioctls are kept simple, with no
argument, although additional ioctls could be added later (or the existing
ioctls changed from _IO() to _IOW()) as an optimization to let userspace
specify a region of interest.

For a final patch, dma-buf.h would need to be split into what is exported
to userspace and what is kernel private, but I wanted to get feedback on
the idea of requiring userspace to bracket access first (vs. limiting this
to coherent mappings, or to exporters who play page-faulting plus
PTE-shoot-down games) before splitting the header, which would cause
conflicts with other pending dma-buf patches.  So flame-on!
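As a rough sketch of the intended userspace usage (illustrative only, and
not part of this patch): a made-up fill_dmabuf() helper, given a dma-buf
fd and its size from some exporter, could bracket its CPU access like the
code below.  The ioctl numbers are re-declared locally here only because
the userspace header split has not happened yet.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/ioctl.h>

/* as proposed in this patch; not yet in any exported uapi header */
#define DMA_BUF_IOCTL_PREPARE_ACCESS	_IO('Z', 0)
#define DMA_BUF_IOCTL_FINISH_ACCESS	_IO('Z', 1)

/* 'fd' is a dma-buf file descriptor and 'len' its size, both obtained
 * from whichever exporter handed the buffer to userspace.
 */
static int fill_dmabuf(int fd, size_t len, uint8_t value)
{
	int ret = -1;
	uint8_t *ptr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			MAP_SHARED, fd, 0);

	if (ptr == MAP_FAILED)
		return -1;	/* exporter may not implement the mmap op */

	/* bracket CPU access so the exporter can do cache maintenance */
	if (ioctl(fd, DMA_BUF_IOCTL_PREPARE_ACCESS) == 0) {
		memset(ptr, value, len);
		ioctl(fd, DMA_BUF_IOCTL_FINISH_ACCESS);
		ret = 0;
	}

	munmap(ptr, len);
	return ret;
}

The point is simply that the memset() is bracketed by the two ioctls, so a
non-coherent exporter gets a chance to invalidate and flush caches around
the CPU access.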
---
 drivers/base/dma-buf.c  |   42 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/dma-buf.h |   22 ++++++++++++++++++++++
 2 files changed, 64 insertions(+), 0 deletions(-)

diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index c9a945f..382b78a 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -30,6 +30,46 @@
 
 static inline int is_dma_buf_file(struct file *);
 
+static int dma_buf_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct dma_buf *dmabuf;
+
+	if (!is_dma_buf_file(file))
+		return -EINVAL;
+
+	dmabuf = file->private_data;
+
+	if (dmabuf->ops->mmap)
+		return dmabuf->ops->mmap(dmabuf, file, vma);
+
+	return -ENODEV;
+}
+
+static long dma_buf_ioctl(struct file *file, unsigned int cmd,
+		unsigned long arg)
+{
+	struct dma_buf *dmabuf;
+
+	if (!is_dma_buf_file(file))
+		return -EINVAL;
+
+	dmabuf = file->private_data;
+
+	switch (_IOC_NR(cmd)) {
+	case _IOC_NR(DMA_BUF_IOCTL_PREPARE_ACCESS):
+		if (dmabuf->ops->prepare_access)
+			return dmabuf->ops->prepare_access(dmabuf);
+		return 0;
+	case _IOC_NR(DMA_BUF_IOCTL_FINISH_ACCESS):
+		if (dmabuf->ops->finish_access)
+			return dmabuf->ops->finish_access(dmabuf);
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
+
 static int dma_buf_release(struct inode *inode, struct file *file)
 {
 	struct dma_buf *dmabuf;
@@ -45,6 +85,8 @@ static int dma_buf_release(struct inode *inode, struct file *file)
 }
 
 static const struct file_operations dma_buf_fops = {
+	.mmap		= dma_buf_mmap,
+	.unlocked_ioctl	= dma_buf_ioctl,
 	.release	= dma_buf_release,
 };
 
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index a885b26..cbdff81 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -34,6 +34,17 @@
 struct dma_buf;
 struct dma_buf_attachment;
 
+/* TODO: dma-buf.h should be the userspace visible header, and dma-buf-priv.h (?)
+ * the kernel internal header.. for now just stuff these here to avoid conflicting
+ * with other patches..
+ *
+ * For now, no arg to keep things simple, but we could consider adding an
+ * optional region of interest later.
+ */
+#define DMA_BUF_IOCTL_PREPARE_ACCESS	_IO('Z', 0)
+#define DMA_BUF_IOCTL_FINISH_ACCESS	_IO('Z', 1)
+
+
 /**
  * struct dma_buf_ops - operations possible on struct dma_buf
  * @attach: [optional] allows different devices to 'attach' themselves to the
@@ -49,6 +60,13 @@ struct dma_buf_attachment;
  * @unmap_dma_buf: decreases usecount of buffer, might deallocate scatter
  *		   pages.
  * @release: release this buffer; to be called after the last dma_buf_put.
+ * @mmap: [optional, allowed to fail] operation called if userspace calls
+ *	   mmap() on the dmabuf fd.  Note that userspace should use the
+ *	   DMA_BUF_IOCTL_PREPARE_ACCESS / DMA_BUF_IOCTL_FINISH_ACCESS ioctls
+ *	   before/after sw access to the buffer, to give the exporter an
+ *	   opportunity to deal with cache maintenance.
+ * @prepare_access: [optional] handler for PREPARE_ACCESS ioctl.
+ * @finish_access: [optional] handler for FINISH_ACCESS ioctl.
  */
 struct dma_buf_ops {
 	int (*attach)(struct dma_buf *, struct device *,
@@ -72,6 +90,10 @@ struct dma_buf_ops {
 	/* after final dma_buf_put() */
 	void (*release)(struct dma_buf *);
 
+	int (*mmap)(struct dma_buf *, struct file *, struct vm_area_struct *);
+	int (*prepare_access)(struct dma_buf *);
+	int (*finish_access)(struct dma_buf *);
+
 };
 
 /**
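For completeness, here is a rough exporter-side sketch (not part of this
patch, and not taken from any existing driver) of how a driver that keeps
its buffer physically contiguous and mapped for streaming DMA might wire
up the three new ops.  The struct my_buffer type, its fields, and the
my_dmabuf_* names are made up for illustration.

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/mm.h>

/* hypothetical per-buffer bookkeeping hung off dmabuf->priv by the exporter */
struct my_buffer {
	struct device *dev;	/* device the buffer was mapped for */
	void *vaddr;		/* kernel virtual address of the pages */
	dma_addr_t dma_addr;	/* from dma_map_single() at export time */
	size_t size;
};

static int my_dmabuf_mmap(struct dma_buf *dmabuf, struct file *file,
		struct vm_area_struct *vma)
{
	struct my_buffer *buf = dmabuf->priv;
	unsigned long pfn = virt_to_phys(buf->vaddr) >> PAGE_SHIFT;

	if (vma->vm_end - vma->vm_start > buf->size)
		return -EINVAL;

	/* plain cached mapping; cache maintenance happens in the hooks below */
	return remap_pfn_range(vma, vma->vm_start, pfn,
			vma->vm_end - vma->vm_start, vma->vm_page_prot);
}

static int my_dmabuf_prepare_access(struct dma_buf *dmabuf)
{
	struct my_buffer *buf = dmabuf->priv;

	/* make device writes visible to the CPU before userspace touches it */
	dma_sync_single_for_cpu(buf->dev, buf->dma_addr, buf->size,
			DMA_BIDIRECTIONAL);
	return 0;
}

static int my_dmabuf_finish_access(struct dma_buf *dmabuf)
{
	struct my_buffer *buf = dmabuf->priv;

	/* flush CPU writes back out so the device sees them */
	dma_sync_single_for_device(buf->dev, buf->dma_addr, buf->size,
			DMA_BIDIRECTIONAL);
	return 0;
}

static const struct dma_buf_ops my_dmabuf_ops = {
	/* .attach/.detach/.map_dma_buf/.unmap_dma_buf/.release omitted */
	.mmap		= my_dmabuf_mmap,
	.prepare_access	= my_dmabuf_prepare_access,
	.finish_access	= my_dmabuf_finish_access,
};

A fully coherent exporter could leave prepare_access/finish_access
unimplemented and rely on the ioctls returning 0, which is exactly why the
ops are optional.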