From patchwork Wed Aug 31 15:37:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 602068 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 754F5ECAAD1 for ; Wed, 31 Aug 2022 15:39:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231863AbiHaPjU (ORCPT ); Wed, 31 Aug 2022 11:39:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33678 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231161AbiHaPjT (ORCPT ); Wed, 31 Aug 2022 11:39:19 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C0B50D8B07; Wed, 31 Aug 2022 08:39:18 -0700 (PDT) Received: from dimapc.. (109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 9C50866015A7; Wed, 31 Aug 2022 16:39:14 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960357; bh=mq+kKNWzsZPc/pkrHJnMglMwSZHTxBeTNMlcTvr2RPg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=IbpT9B92ORuHHTOTdGcTYBd7AIS/+QWfCseHtdp0O+oQE1cgYTont1FrG7hZJSNuz oUsF3ZGj1p2as0ZxQN+dQQFRpSL87SbdRdB1ju7mIM4y+5dziqZ3e5DaJweV7YuOE/ 1H320hsTsfF2G21JZ5DabpdmyJp9axAp7/fxwTkcYEovPsJBogm7F3Hd3eoDL/j9HE rJltme0+96fSPLeXGlAUHMm6XssCRo6w8eziTcT6DzM1MpOWsRo/392sOmqFsbmF3c A1py7rG+sPp7YAlUqD/27veLOxcAMqAStH7zkBrvbEuj59KSIjzMo6V7+p1IhljYvw 7EMJP9wiDeBIA== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 01/21] dma-buf: Add unlocked variant of vmapping functions Date: Wed, 31 Aug 2022 18:37:37 +0300 Message-Id: <20220831153757.97381-2-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Add unlocked variant of dma_buf_vmap/vunmap() that will be utilized by drivers that don't take the 
reservation lock explicitly. Acked-by: Christian König Signed-off-by: Dmitry Osipenko --- drivers/dma-buf/dma-buf.c | 38 ++++++++++++++++++++++++++++++++++++++ include/linux/dma-buf.h | 2 ++ 2 files changed, 40 insertions(+) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 1c912255c5d6..5e4459bb1a6f 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -1424,6 +1424,28 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) } EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF); +/** + * dma_buf_vmap_unlocked - Create virtual mapping for the buffer object into kernel + * address space. Same restrictions as for vmap and friends apply. + * @dmabuf: [in] buffer to vmap + * @map: [out] returns the vmap pointer + * + * Unlocked version of dma_buf_vmap() + * + * Returns 0 on success, or a negative errno code otherwise. + */ +int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map) +{ + int ret; + + dma_resv_lock(dmabuf->resv, NULL); + ret = dma_buf_vmap(dmabuf, map); + dma_resv_unlock(dmabuf->resv); + + return ret; +} +EXPORT_SYMBOL_NS_GPL(dma_buf_vmap_unlocked, DMA_BUF); + /** * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap. * @dmabuf: [in] buffer to vunmap @@ -1448,6 +1470,22 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) } EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF); +/** + * dma_buf_vunmap_unlocked - Unmap a vmap obtained by dma_buf_vmap. + * @dmabuf: [in] buffer to vunmap + * @map: [in] vmap pointer to vunmap + */ +void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map) +{ + if (WARN_ON(!dmabuf)) + return; + + dma_resv_lock(dmabuf->resv, NULL); + dma_buf_vunmap(dmabuf, map); + dma_resv_unlock(dmabuf->resv); +} +EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap_unlocked, DMA_BUF); + #ifdef CONFIG_DEBUG_FS static int dma_buf_debug_show(struct seq_file *s, void *unused) { diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 71731796c8c3..8daa054dd7fe 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -632,4 +632,6 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *, unsigned long); int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map); void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map); +int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map); +void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map); #endif /* __DMA_BUF_H__ */ From patchwork Wed Aug 31 15:37:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 601716 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D156EC0502C for ; Wed, 31 Aug 2022 15:39:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231941AbiHaPja (ORCPT ); Wed, 31 Aug 2022 11:39:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33948 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231913AbiHaPj2 (ORCPT ); Wed, 31 Aug 2022 11:39:28 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e5ab]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 26A22D91E8; Wed, 31 Aug 2022 08:39:22 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id BB1846601DE8; Wed, 31 Aug 2022 16:39:17 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960360; bh=AwRlDUnXNPm7IrchqI1DhYwz6VGywo+kRbZaIUUsbws=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=EA11+Tf1xGgFwhLzaB4zpIaDCRNIdmnnJHMUCOo8+cNcA1+1F0D9jn9Wul3MjrQdD E0MPm/kez2xbgU9fHHNKDQXMMW4h2mK4UsWjNtwn5C67YAnuBWpF+nC+PNcQOxHwe3 HZNocGVRO7Gx+HhKqArjSV+MCgscVPZcopzRkhcwwjxgHqrrxXYu2HhbUNjqd4nBia cPLVBNFFXe1K8+Kx+80Y5D6dAd6wgcYCQv3I82150DNV6nMETDfoT+f3K4lppzovLS hUo/r0FpG/YF36umNf1y+HgWwYpDw+XlGpyjBOSHprJrXFUIrwKGfPk/NTQA99oiVM AV/jxqjw8iHiA== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 02/21] dma-buf: Add unlocked variant of attachment-mapping functions Date: Wed, 31 Aug 2022 18:37:38 +0300 Message-Id: <20220831153757.97381-3-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Add unlocked variant of dma_buf_map/unmap_attachment() that will be used by drivers that don't take the reservation lock explicitly. Acked-by: Christian König Signed-off-by: Dmitry Osipenko --- drivers/dma-buf/dma-buf.c | 53 +++++++++++++++++++++++++++++++++++++++ include/linux/dma-buf.h | 6 +++++ 2 files changed, 59 insertions(+) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 5e4459bb1a6f..51fb69048853 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -1099,6 +1099,34 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, } EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF); +/** + * dma_buf_map_attachment_unlocked - Returns the scatterlist table of the attachment; + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the + * dma_buf_ops. + * @attach: [in] attachment whose scatterlist is to be returned + * @direction: [in] direction of DMA transfer + * + * Unlocked variant of dma_buf_map_attachment(). 
+ */ +struct sg_table * +dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach, + enum dma_data_direction direction) +{ + struct sg_table *sg_table; + + might_sleep(); + + if (WARN_ON(!attach || !attach->dmabuf)) + return ERR_PTR(-EINVAL); + + dma_resv_lock(attach->dmabuf->resv, NULL); + sg_table = dma_buf_map_attachment(attach, direction); + dma_resv_unlock(attach->dmabuf->resv); + + return sg_table; +} +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_unlocked, DMA_BUF); + /** * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of @@ -1135,6 +1163,31 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, } EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF); +/** + * dma_buf_unmap_attachment_unlocked - unmaps and decreases usecount of the buffer;might + * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of + * dma_buf_ops. + * @attach: [in] attachment to unmap buffer from + * @sg_table: [in] scatterlist info of the buffer to unmap + * @direction: [in] direction of DMA transfer + * + * Unlocked variant of dma_buf_unmap_attachment(). + */ +void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach, + struct sg_table *sg_table, + enum dma_data_direction direction) +{ + might_sleep(); + + if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) + return; + + dma_resv_lock(attach->dmabuf->resv, NULL); + dma_buf_unmap_attachment(attach, sg_table, direction); + dma_resv_unlock(attach->dmabuf->resv); +} +EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_unlocked, DMA_BUF); + /** * dma_buf_move_notify - notify attachments that DMA-buf is moving * diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 8daa054dd7fe..f11b5bbc2f37 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -627,6 +627,12 @@ int dma_buf_begin_cpu_access(struct dma_buf *dma_buf, enum dma_data_direction dir); int dma_buf_end_cpu_access(struct dma_buf *dma_buf, enum dma_data_direction dir); +struct sg_table * +dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach, + enum dma_data_direction direction); +void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach, + struct sg_table *sg_table, + enum dma_data_direction direction); int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *, unsigned long); From patchwork Wed Aug 31 15:37:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 602067 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CFF6AECAAD4 for ; Wed, 31 Aug 2022 15:39:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231960AbiHaPjd (ORCPT ); Wed, 31 Aug 2022 11:39:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33978 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231932AbiHaPj2 (ORCPT ); Wed, 31 Aug 2022 11:39:28 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AC59ED91D3; Wed, 31 Aug 2022 08:39:25 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id E1B4F6601DE7; Wed, 31 Aug 2022 16:39:20 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960363; bh=ikZ6Gq15mzzDMbiTFN8Ym0Q6KM92ch38sRmgN9V3Hd0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=YoEDNKsXt7BdWZF/+oAjS/n7mKk7CL+m7eebeKb01mdvn7T4f1wD7vp+TIYm6hQm1 HkhQMbXL2FR/olkeN3eREXOFpPbXjhpufJaaPXgJQ/5gAjMgL449uk9pPyXYRzFXWS plIMEjxBCV90P6voRctsmC1F9q2vFf/jGE9Iu1LaqhdmsTrN6lN0pCaHLUqNJsIbEF Mb81Gm7aA2gv4KlsgpDi4GOunr5B4xDcy4+ZogyM9lxo7VLFBGdvC25p596Up227LR ZxsJQImZs+gRbZ33NICc5QtuSCJ0k82B06QGwKWt0Dn3WU+rJEMX87sHsu2BAdrx3z VT2YmxscPi+WA== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 03/21] drm/gem: Take reservation lock for vmap/vunmap operations Date: Wed, 31 Aug 2022 18:37:39 +0300 Message-Id: <20220831153757.97381-4-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org The new common dma-buf locking convention will require buffer importers to hold the reservation lock around mapping operations. Make DRM GEM core to take the lock around the vmapping operations and update DRM drivers to use the locked functions for the case where DRM core now holds the lock. This patch prepares DRM core and drivers to the common dynamic dma-buf locking convention. 
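To illustrate the convention this patch establishes, here is a minimal sketch (not part of the patch; the helper and its name are invented for illustration) of a driver path that maps a GEM object for CPU access from a context that does not hold the reservation lock. Such callers use the new drm_gem_vmap_unlocked()/drm_gem_vunmap_unlocked() wrappers, which take obj->resv internally, while code that already holds the lock keeps calling drm_gem_vmap()/drm_gem_vunmap(), which now assert it.

        #include <drm/drm_gem.h>
        #include <linux/iosys-map.h>

        /* Sketch only: fill a BO with a byte pattern from a context that
         * does not hold the reservation lock. drm_gem_vmap_unlocked() locks
         * obj->resv around the driver's vmap callback and drops it again
         * before returning.
         */
        static int example_fill_bo(struct drm_gem_object *obj, u8 value)
        {
                struct iosys_map map;
                int ret;

                ret = drm_gem_vmap_unlocked(obj, &map);
                if (ret)
                        return ret;

                iosys_map_memset(&map, 0, value, obj->size);

                drm_gem_vunmap_unlocked(obj, &map);

                return 0;
        }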
Acked-by: Christian König Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_client.c | 4 ++-- drivers/gpu/drm/drm_gem.c | 24 ++++++++++++++++++++ drivers/gpu/drm/drm_gem_dma_helper.c | 6 ++--- drivers/gpu/drm/drm_gem_framebuffer_helper.c | 6 ++--- drivers/gpu/drm/drm_gem_ttm_helper.c | 9 +------- drivers/gpu/drm/lima/lima_sched.c | 4 ++-- drivers/gpu/drm/panfrost/panfrost_dump.c | 4 ++-- drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 6 ++--- drivers/gpu/drm/qxl/qxl_object.c | 17 +++++++------- drivers/gpu/drm/qxl/qxl_prime.c | 4 ++-- include/drm/drm_gem.h | 3 +++ 11 files changed, 54 insertions(+), 33 deletions(-) diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c index 2b230b4d6942..fbcb1e995384 100644 --- a/drivers/gpu/drm/drm_client.c +++ b/drivers/gpu/drm/drm_client.c @@ -323,7 +323,7 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer, * fd_install step out of the driver backend hooks, to make that * final step optional for internal users. */ - ret = drm_gem_vmap(buffer->gem, map); + ret = drm_gem_vmap_unlocked(buffer->gem, map); if (ret) return ret; @@ -345,7 +345,7 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) { struct iosys_map *map = &buffer->map; - drm_gem_vunmap(buffer->gem, map); + drm_gem_vunmap_unlocked(buffer->gem, map); } EXPORT_SYMBOL(drm_client_buffer_vunmap); diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index ad068865ba20..9c55593d662d 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -1156,6 +1156,8 @@ int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map) { int ret; + dma_resv_assert_held(obj->resv); + if (!obj->funcs->vmap) return -EOPNOTSUPP; @@ -1171,6 +1173,8 @@ EXPORT_SYMBOL(drm_gem_vmap); void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) { + dma_resv_assert_held(obj->resv); + if (iosys_map_is_null(map)) return; @@ -1182,6 +1186,26 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) } EXPORT_SYMBOL(drm_gem_vunmap); +int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) +{ + int ret; + + dma_resv_lock(obj->resv, NULL); + ret = drm_gem_vmap(obj, map); + dma_resv_unlock(obj->resv); + + return ret; +} +EXPORT_SYMBOL(drm_gem_vmap_unlocked); + +void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) +{ + dma_resv_lock(obj->resv, NULL); + drm_gem_vunmap(obj, map); + dma_resv_unlock(obj->resv); +} +EXPORT_SYMBOL(drm_gem_vunmap_unlocked); + /** * drm_gem_lock_reservations - Sets up the ww context and acquires * the lock on an array of GEM objects. 
diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c index f6901ff97bbb..1e658c448366 100644 --- a/drivers/gpu/drm/drm_gem_dma_helper.c +++ b/drivers/gpu/drm/drm_gem_dma_helper.c @@ -230,7 +230,7 @@ void drm_gem_dma_free(struct drm_gem_dma_object *dma_obj) if (gem_obj->import_attach) { if (dma_obj->vaddr) - dma_buf_vunmap(gem_obj->import_attach->dmabuf, &map); + dma_buf_vunmap_unlocked(gem_obj->import_attach->dmabuf, &map); drm_prime_gem_destroy(gem_obj, dma_obj->sgt); } else if (dma_obj->vaddr) { if (dma_obj->map_noncoherent) @@ -581,7 +581,7 @@ drm_gem_dma_prime_import_sg_table_vmap(struct drm_device *dev, struct iosys_map map; int ret; - ret = dma_buf_vmap(attach->dmabuf, &map); + ret = dma_buf_vmap_unlocked(attach->dmabuf, &map); if (ret) { DRM_ERROR("Failed to vmap PRIME buffer\n"); return ERR_PTR(ret); @@ -589,7 +589,7 @@ drm_gem_dma_prime_import_sg_table_vmap(struct drm_device *dev, obj = drm_gem_dma_prime_import_sg_table(dev, attach, sgt); if (IS_ERR(obj)) { - dma_buf_vunmap(attach->dmabuf, &map); + dma_buf_vunmap_unlocked(attach->dmabuf, &map); return obj; } diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c index 880a4975507f..e35e224e6303 100644 --- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c +++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c @@ -354,7 +354,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct iosys_map *map, ret = -EINVAL; goto err_drm_gem_vunmap; } - ret = drm_gem_vmap(obj, &map[i]); + ret = drm_gem_vmap_unlocked(obj, &map[i]); if (ret) goto err_drm_gem_vunmap; } @@ -376,7 +376,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct iosys_map *map, obj = drm_gem_fb_get_obj(fb, i); if (!obj) continue; - drm_gem_vunmap(obj, &map[i]); + drm_gem_vunmap_unlocked(obj, &map[i]); } return ret; } @@ -403,7 +403,7 @@ void drm_gem_fb_vunmap(struct drm_framebuffer *fb, struct iosys_map *map) continue; if (iosys_map_is_null(&map[i])) continue; - drm_gem_vunmap(obj, &map[i]); + drm_gem_vunmap_unlocked(obj, &map[i]); } } EXPORT_SYMBOL(drm_gem_fb_vunmap); diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c index e5fc875990c4..d5962a34c01d 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c @@ -64,13 +64,8 @@ int drm_gem_ttm_vmap(struct drm_gem_object *gem, struct iosys_map *map) { struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); - int ret; - - dma_resv_lock(gem->resv, NULL); - ret = ttm_bo_vmap(bo, map); - dma_resv_unlock(gem->resv); - return ret; + return ttm_bo_vmap(bo, map); } EXPORT_SYMBOL(drm_gem_ttm_vmap); @@ -87,9 +82,7 @@ void drm_gem_ttm_vunmap(struct drm_gem_object *gem, { struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); - dma_resv_lock(gem->resv, NULL); ttm_bo_vunmap(bo, map); - dma_resv_unlock(gem->resv); } EXPORT_SYMBOL(drm_gem_ttm_vunmap); diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c index e82931712d8a..ff003403fbbc 100644 --- a/drivers/gpu/drm/lima/lima_sched.c +++ b/drivers/gpu/drm/lima/lima_sched.c @@ -371,7 +371,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task) } else { buffer_chunk->size = lima_bo_size(bo); - ret = drm_gem_shmem_vmap(&bo->base, &map); + ret = drm_gem_vmap_unlocked(&bo->base.base, &map); if (ret) { kvfree(et); goto out; @@ -379,7 +379,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task) memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size); - 
drm_gem_shmem_vunmap(&bo->base, &map); + drm_gem_vunmap_unlocked(&bo->base.base, &map); } buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size; diff --git a/drivers/gpu/drm/panfrost/panfrost_dump.c b/drivers/gpu/drm/panfrost/panfrost_dump.c index 89056a1aac7d..f62a019cc523 100644 --- a/drivers/gpu/drm/panfrost/panfrost_dump.c +++ b/drivers/gpu/drm/panfrost/panfrost_dump.c @@ -209,7 +209,7 @@ void panfrost_core_dump(struct panfrost_job *job) goto dump_header; } - ret = drm_gem_shmem_vmap(&bo->base, &map); + ret = drm_gem_vmap_unlocked(&bo->base.base, &map); if (ret) { dev_err(pfdev->dev, "Panfrost Dump: couldn't map Buffer Object\n"); iter.hdr->bomap.valid = 0; @@ -236,7 +236,7 @@ void panfrost_core_dump(struct panfrost_job *job) vaddr = map.vaddr; memcpy(iter.data, vaddr, bo->base.base.size); - drm_gem_shmem_vunmap(&bo->base, &map); + drm_gem_vunmap_unlocked(&bo->base.base, &map); iter.hdr->bomap.valid = cpu_to_le32(1); diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c index bc0df93f7f21..ba9b6e2b2636 100644 --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c @@ -106,7 +106,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, goto err_close_bo; } - ret = drm_gem_shmem_vmap(bo, &map); + ret = drm_gem_vmap_unlocked(&bo->base, &map); if (ret) goto err_put_mapping; perfcnt->buf = map.vaddr; @@ -165,7 +165,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, return 0; err_vunmap: - drm_gem_shmem_vunmap(bo, &map); + drm_gem_vunmap_unlocked(&bo->base, &map); err_put_mapping: panfrost_gem_mapping_put(perfcnt->mapping); err_close_bo: @@ -195,7 +195,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF)); perfcnt->user = NULL; - drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base, &map); + drm_gem_vunmap_unlocked(&perfcnt->mapping->obj->base.base, &map); perfcnt->buf = NULL; panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv); panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu); diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c index 695d9308d1f0..06a58dad5f5c 100644 --- a/drivers/gpu/drm/qxl/qxl_object.c +++ b/drivers/gpu/drm/qxl/qxl_object.c @@ -168,9 +168,16 @@ int qxl_bo_vmap_locked(struct qxl_bo *bo, struct iosys_map *map) bo->map_count++; goto out; } - r = ttm_bo_vmap(&bo->tbo, &bo->map); + + r = __qxl_bo_pin(bo); if (r) return r; + + r = ttm_bo_vmap(&bo->tbo, &bo->map); + if (r) { + __qxl_bo_unpin(bo); + return r; + } bo->map_count = 1; /* TODO: Remove kptr in favor of map everywhere. 
*/ @@ -192,12 +199,6 @@ int qxl_bo_vmap(struct qxl_bo *bo, struct iosys_map *map) if (r) return r; - r = __qxl_bo_pin(bo); - if (r) { - qxl_bo_unreserve(bo); - return r; - } - r = qxl_bo_vmap_locked(bo, map); qxl_bo_unreserve(bo); return r; @@ -247,6 +248,7 @@ void qxl_bo_vunmap_locked(struct qxl_bo *bo) return; bo->kptr = NULL; ttm_bo_vunmap(&bo->tbo, &bo->map); + __qxl_bo_unpin(bo); } int qxl_bo_vunmap(struct qxl_bo *bo) @@ -258,7 +260,6 @@ int qxl_bo_vunmap(struct qxl_bo *bo) return r; qxl_bo_vunmap_locked(bo); - __qxl_bo_unpin(bo); qxl_bo_unreserve(bo); return 0; } diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c index 142d01415acb..9169c26357d3 100644 --- a/drivers/gpu/drm/qxl/qxl_prime.c +++ b/drivers/gpu/drm/qxl/qxl_prime.c @@ -59,7 +59,7 @@ int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map) struct qxl_bo *bo = gem_to_qxl_bo(obj); int ret; - ret = qxl_bo_vmap(bo, map); + ret = qxl_bo_vmap_locked(bo, map); if (ret < 0) return ret; @@ -71,5 +71,5 @@ void qxl_gem_prime_vunmap(struct drm_gem_object *obj, { struct qxl_bo *bo = gem_to_qxl_bo(obj); - qxl_bo_vunmap(bo); + qxl_bo_vunmap_locked(bo); } diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index 58a18a17c67e..30096f9efdbf 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -420,4 +420,7 @@ void drm_gem_unlock_reservations(struct drm_gem_object **objs, int count, int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, u32 handle, u64 *offset); +int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map); +void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map); + #endif /* __DRM_GEM_H__ */ From patchwork Wed Aug 31 15:37:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 601715 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 58A4BC64991 for ; Wed, 31 Aug 2022 15:39:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231942AbiHaPjf (ORCPT ); Wed, 31 Aug 2022 11:39:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34006 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231901AbiHaPj3 (ORCPT ); Wed, 31 Aug 2022 11:39:29 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5F1C4D91D6; Wed, 31 Aug 2022 08:39:28 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 17BD46601DEC; Wed, 31 Aug 2022 16:39:24 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960367; bh=hr6PZ3PZkgZIDKEVGvhp21Gu2CuOWzIjLk/98UZ+6+w=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=PbFxKLmk+yhPhvKt9j8QVXmIw3tFBmxIHwOaI1NCTabYelI/2jFQvI65VaYm3+s++ MRIQEDt5ISJBMdrdoTQCXJgzNgOmJOLrfeGPdhTJXI2drR7V8ScO8bU5m2xKWENMeM 9OB1vZA4eP9ohRHPaXK5e9vk5hTztfkhkoiMzMlnBYgA1bJxheMaX18kZaPRgaKngO N2O1dlB9993AUt3rO2cNoIkWWagjiVkM5WGi7Nfy7fZGANwkCarb1aMEW3PPRaVxgU c+t34gdLxh0x2+bBu+C5rfWrjRIjNQH6njd1ho1cgESu7rOA4CoIpHnpNWDXKMt2YD oaM9pP9AD/2Ow== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 04/21] drm/prime: Prepare to dynamic dma-buf locking specification Date: Wed, 31 Aug 2022 18:37:40 +0300 Message-Id: <20220831153757.97381-5-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Prepare DRM prime core to the common dynamic dma-buf locking convention by starting to use the unlocked versions of dma-buf API functions. 
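For reference, the importer-side flow that drm_gem_prime_import_dev() follows after this change can be sketched as below; the two wrapper functions are hypothetical and only illustrate the calling pattern. An importer that does not hold the dma-buf reservation lock attaches, maps and unmaps through the _unlocked helpers and lets the dma-buf core take dmabuf->resv internally.

        #include <linux/dma-buf.h>
        #include <linux/dma-direction.h>
        #include <linux/err.h>

        /* Sketch: map an imported buffer for device access without holding
         * the reservation lock ourselves.
         */
        static struct sg_table *example_map_import(struct dma_buf *dmabuf,
                                                   struct device *dev,
                                                   struct dma_buf_attachment **attach_out)
        {
                struct dma_buf_attachment *attach;
                struct sg_table *sgt;

                attach = dma_buf_attach(dmabuf, dev);
                if (IS_ERR(attach))
                        return ERR_CAST(attach);

                sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
                if (IS_ERR(sgt)) {
                        dma_buf_detach(dmabuf, attach);
                        return sgt;
                }

                *attach_out = attach;
                return sgt;
        }

        /* Sketch: tear-down mirrors drm_prime_gem_destroy() after this change. */
        static void example_unmap_import(struct dma_buf *dmabuf,
                                         struct dma_buf_attachment *attach,
                                         struct sg_table *sgt)
        {
                dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL);
                dma_buf_detach(dmabuf, attach);
        }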
Signed-off-by: Dmitry Osipenko Reviewed-by: Christian König --- drivers/gpu/drm/drm_prime.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c index eb09e86044c6..20e109a802ae 100644 --- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -940,7 +940,7 @@ struct drm_gem_object *drm_gem_prime_import_dev(struct drm_device *dev, get_dma_buf(dma_buf); - sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); + sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL); if (IS_ERR(sgt)) { ret = PTR_ERR(sgt); goto fail_detach; @@ -958,7 +958,7 @@ struct drm_gem_object *drm_gem_prime_import_dev(struct drm_device *dev, return obj; fail_unmap: - dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL); + dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL); fail_detach: dma_buf_detach(dma_buf, attach); dma_buf_put(dma_buf); @@ -1056,7 +1056,7 @@ void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg) attach = obj->import_attach; if (sg) - dma_buf_unmap_attachment(attach, sg, DMA_BIDIRECTIONAL); + dma_buf_unmap_attachment_unlocked(attach, sg, DMA_BIDIRECTIONAL); dma_buf = attach->dmabuf; dma_buf_detach(attach->dmabuf, attach); /* remove the reference */ From patchwork Wed Aug 31 15:37:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 602066 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6DFFCC0502C for ; Wed, 31 Aug 2022 15:39:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231975AbiHaPji (ORCPT ); Wed, 31 Aug 2022 11:39:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33946 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231949AbiHaPjf (ORCPT ); Wed, 31 Aug 2022 11:39:35 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e5ab]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 138DDD9D45; Wed, 31 Aug 2022 08:39:31 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 3C1B16601DF1; Wed, 31 Aug 2022 16:39:27 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960370; bh=0d0E/K4G1VpCa944vVBlmjIzTakzf/Ny/1hT+JzAxgI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SSYw/8l7HkRDOU35x76FkU3THpRsV9TQ90B07NN0rXBmj1uULWKXKDb05m5kBcSV5 2QMU/aHfsgznYxJoN8zVPOMXnpyBk81L1FdV+W+EClTphtQm6AcrWhh+r2SXcOhmFG Y2rIMZSTxL14ue9sv8oE5w3tcFwE5h75YcJyLZrMJDrACUeIiHPg8q5DNdXmxl6qIY ONFOO+57WajunpTHluO3XwimozfEbUjkuCkuNbGAqdQTKCISbQerJxhLwlnuvnsSHF fcCrB0xf3Y4e3YH3rvDzaDCPE5TnvhY4DhFpdnIn/cjdUV5Aw5nRI1Bbn0uOLVBmPS qulFOiJc2o4nw== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 05/21] drm/armada: Prepare to dynamic dma-buf locking specification Date: Wed, 31 Aug 2022 18:37:41 +0300 Message-Id: <20220831153757.97381-6-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Prepare Armada driver to the common dynamic dma-buf locking convention by starting to use the unlocked versions of dma-buf API functions. 
Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/armada/armada_gem.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/armada/armada_gem.c b/drivers/gpu/drm/armada/armada_gem.c index 5430265ad458..26d10065d534 100644 --- a/drivers/gpu/drm/armada/armada_gem.c +++ b/drivers/gpu/drm/armada/armada_gem.c @@ -66,8 +66,8 @@ void armada_gem_free_object(struct drm_gem_object *obj) if (dobj->obj.import_attach) { /* We only ever display imported data */ if (dobj->sgt) - dma_buf_unmap_attachment(dobj->obj.import_attach, - dobj->sgt, DMA_TO_DEVICE); + dma_buf_unmap_attachment_unlocked(dobj->obj.import_attach, + dobj->sgt, DMA_TO_DEVICE); drm_prime_gem_destroy(&dobj->obj, NULL); } @@ -539,8 +539,8 @@ int armada_gem_map_import(struct armada_gem_object *dobj) { int ret; - dobj->sgt = dma_buf_map_attachment(dobj->obj.import_attach, - DMA_TO_DEVICE); + dobj->sgt = dma_buf_map_attachment_unlocked(dobj->obj.import_attach, + DMA_TO_DEVICE); if (IS_ERR(dobj->sgt)) { ret = PTR_ERR(dobj->sgt); dobj->sgt = NULL; From patchwork Wed Aug 31 15:37:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 601714 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 864AFECAAD1 for ; Wed, 31 Aug 2022 15:40:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232027AbiHaPkG (ORCPT ); Wed, 31 Aug 2022 11:40:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33982 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231965AbiHaPji (ORCPT ); Wed, 31 Aug 2022 11:39:38 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e5ab]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 333B3D99D5; Wed, 31 Aug 2022 08:39:34 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 5ED406601DF6; Wed, 31 Aug 2022 16:39:30 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960373; bh=TO/xOZs1dtACY2TtNnVw4DrKgpYQloSInWovd/NRlPU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Q4RpyrFuRPXfFwE4IPeTn4f8aO9fs8mGCDqmr3lYg5wWfswotgiPg5YqK1c106c7N N+tML8Z0SX3jPKy///QcQSjV+VsPsyEZoMJDMq+YIBdB0m/lJPlhCTFNrgJ1RAex73 AaATZQzxNo2/kJi2ozDsuUllogY9NgWh3degqbUQfwo0p7uascb7MXL7190qqY8CYE e4E/5/tu8W51Cd/taQhQP0yh3Xh4dT4zR5wOtb+tYqj3L+WFqDrq2nXepRRxygLFhO xQQpfkba2uPpP/BnzoYygxmBgmvhZ10EVXFX7Xf1zq8DgME4eBewDGKYdXMw5RvklB CZAOBciV9e37Q== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 06/21] drm/i915: Prepare to dynamic dma-buf locking specification Date: Wed, 31 Aug 2022 18:37:42 +0300 Message-Id: <20220831153757.97381-7-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Prepare i915 driver to the common dynamic dma-buf locking convention by starting to use the unlocked versions of dma-buf API functions and handling cases where importer now holds the reservation lock. 
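The exporter-side consequence handled here can be sketched generically as follows; the callback, its object type and the locked helper below are made up, while i915 implements the same idea by switching from i915_gem_object_pin_map_unlocked() to i915_gem_object_pin_map(). Once importers call dma_buf_vmap() with the reservation lock held, the exporter's .vmap callback runs under dmabuf->resv and must use its already-locked internal paths.

        #include <linux/dma-buf.h>
        #include <linux/dma-resv.h>
        #include <linux/err.h>
        #include <linux/iosys-map.h>

        /* Hypothetical exporter callback: under the new convention the
         * caller holds dmabuf->resv when this is invoked, so the callback
         * asserts the lock and uses a "locked" mapping helper instead of
         * one that tries to take the lock itself.
         */
        static int example_exporter_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
        {
                struct example_bo *bo = dmabuf->priv;   /* made-up exporter object */
                void *vaddr;

                dma_resv_assert_held(dmabuf->resv);

                vaddr = example_bo_kmap_locked(bo);     /* made-up locked helper */
                if (IS_ERR(vaddr))
                        return PTR_ERR(vaddr);

                iosys_map_set_vaddr(map, vaddr);

                return 0;
        }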
Signed-off-by: Dmitry Osipenko Acked-by: Christian König --- drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_object.c | 12 ++++++++++++ .../gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c | 16 ++++++++-------- 3 files changed, 21 insertions(+), 9 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c index f5062d0c6333..07eee1c09aaf 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c @@ -72,7 +72,7 @@ static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf); void *vaddr; - vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB); + vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB); if (IS_ERR(vaddr)) return PTR_ERR(vaddr); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index 389e9f157ca5..7e2a9b02526c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -331,7 +331,19 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915, continue; } + /* + * dma_buf_unmap_attachment() requires reservation to be + * locked. The imported GEM shouldn't share reservation lock, + * so it's safe to take the lock. + */ + if (obj->base.import_attach) + i915_gem_object_lock(obj, NULL); + __i915_gem_object_pages_fini(obj); + + if (obj->base.import_attach) + i915_gem_object_unlock(obj); + __i915_gem_free_object(obj); /* But keep the pointer alive for RCU-protected lookups */ diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c index 62c61af77a42..9e3ed634aa0e 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c @@ -213,7 +213,7 @@ static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915, goto out_import; } - st = dma_buf_map_attachment(import_attach, DMA_BIDIRECTIONAL); + st = dma_buf_map_attachment_unlocked(import_attach, DMA_BIDIRECTIONAL); if (IS_ERR(st)) { err = PTR_ERR(st); goto out_detach; @@ -226,7 +226,7 @@ static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915, timeout = -ETIME; } err = timeout > 0 ? 0 : timeout; - dma_buf_unmap_attachment(import_attach, st, DMA_BIDIRECTIONAL); + dma_buf_unmap_attachment_unlocked(import_attach, st, DMA_BIDIRECTIONAL); out_detach: dma_buf_detach(dmabuf, import_attach); out_import: @@ -296,7 +296,7 @@ static int igt_dmabuf_import(void *arg) goto out_obj; } - err = dma_buf_vmap(dmabuf, &map); + err = dma_buf_vmap_unlocked(dmabuf, &map); dma_map = err ? NULL : map.vaddr; if (!dma_map) { pr_err("dma_buf_vmap failed\n"); @@ -337,7 +337,7 @@ static int igt_dmabuf_import(void *arg) err = 0; out_dma_map: - dma_buf_vunmap(dmabuf, &map); + dma_buf_vunmap_unlocked(dmabuf, &map); out_obj: i915_gem_object_put(obj); out_dmabuf: @@ -358,7 +358,7 @@ static int igt_dmabuf_import_ownership(void *arg) if (IS_ERR(dmabuf)) return PTR_ERR(dmabuf); - err = dma_buf_vmap(dmabuf, &map); + err = dma_buf_vmap_unlocked(dmabuf, &map); ptr = err ?
NULL : map.vaddr; if (!ptr) { pr_err("dma_buf_vmap failed\n"); @@ -367,7 +367,7 @@ static int igt_dmabuf_import_ownership(void *arg) } memset(ptr, 0xc5, PAGE_SIZE); - dma_buf_vunmap(dmabuf, &map); + dma_buf_vunmap_unlocked(dmabuf, &map); obj = to_intel_bo(i915_gem_prime_import(&i915->drm, dmabuf)); if (IS_ERR(obj)) { @@ -418,7 +418,7 @@ static int igt_dmabuf_export_vmap(void *arg) } i915_gem_object_put(obj); - err = dma_buf_vmap(dmabuf, &map); + err = dma_buf_vmap_unlocked(dmabuf, &map); ptr = err ? NULL : map.vaddr; if (!ptr) { pr_err("dma_buf_vmap failed\n"); @@ -435,7 +435,7 @@ static int igt_dmabuf_export_vmap(void *arg) memset(ptr, 0xc5, dmabuf->size); err = 0; - dma_buf_vunmap(dmabuf, &map); + dma_buf_vunmap_unlocked(dmabuf, &map); out: dma_buf_put(dmabuf); return err; From patchwork Wed Aug 31 15:37:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 602065 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CD487ECAAD1 for ; Wed, 31 Aug 2022 15:40:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232096AbiHaPkT (ORCPT ); Wed, 31 Aug 2022 11:40:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33982 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231648AbiHaPkE (ORCPT ); Wed, 31 Aug 2022 11:40:04 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E6467D9D40; Wed, 31 Aug 2022 08:39:38 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 7D3BF6601DE6; Wed, 31 Aug 2022 16:39:33 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960376; bh=ZOrcHb8zKnHR7kmehtqy6q5w2hwExe7YBoS68Sg6S1A=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Ccr70/2wDaLY8cFMMlFXNiDafzMzIWMa6RCnaLL9hcGSSNP+mKRPGG0W3TvukGOZJ b1op3JTPaygecFKZHb5o09JXoYL9zbfk6EdYptOtVkAy332zUbGGJYmpfthxB2KLwJ mLgzUkHTsRXAGBJJsGcmyuRfPetvZ9C5D3jhnXmJeDkgHwzqREnCMdBE5DJxQIN/4B M1yZfCcpplezXhmxCPxEt4qgcHXaQDYDgWGoni6M2NxlI20rV+zg9rC+lVChEvZZ2b SBZEkde8f1BxFGkmlJshtd+3W4jEUtEEniVzV2AZviWcWR/q55coqegH9RIDC2wxXZ 7bqa8H7OLCa6A== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 07/21] drm/omapdrm: Prepare to dynamic dma-buf locking specification Date: Wed, 31 Aug 2022 18:37:43 +0300 Message-Id: <20220831153757.97381-8-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Prepare OMAP DRM driver to the common dynamic dma-buf locking convention by starting to use the unlocked versions of dma-buf API functions. 
Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c index 393f82e26927..8e194dbc9506 100644 --- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c +++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c @@ -125,7 +125,7 @@ struct drm_gem_object *omap_gem_prime_import(struct drm_device *dev, get_dma_buf(dma_buf); - sgt = dma_buf_map_attachment(attach, DMA_TO_DEVICE); + sgt = dma_buf_map_attachment_unlocked(attach, DMA_TO_DEVICE); if (IS_ERR(sgt)) { ret = PTR_ERR(sgt); goto fail_detach; @@ -142,7 +142,7 @@ struct drm_gem_object *omap_gem_prime_import(struct drm_device *dev, return obj; fail_unmap: - dma_buf_unmap_attachment(attach, sgt, DMA_TO_DEVICE); + dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_TO_DEVICE); fail_detach: dma_buf_detach(dma_buf, attach); dma_buf_put(dma_buf); From patchwork Wed Aug 31 15:37:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 602064 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5A412C54EE9 for ; Wed, 31 Aug 2022 15:40:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232063AbiHaPkh (ORCPT ); Wed, 31 Aug 2022 11:40:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33948 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232059AbiHaPkK (ORCPT ); Wed, 31 Aug 2022 11:40:10 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e5ab]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 61A78D9E84; Wed, 31 Aug 2022 08:39:41 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id A21166601E0C; Wed, 31 Aug 2022 16:39:36 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960379; bh=16kdBcfHr7NkG/hMSbFPB8YZTdoa/W2Tj8HhA5UXTjs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=UQ9/cdc9B3s4E74+susWuLftZGOe9hoHZXOuaq+uKHebd6RqDfyD4uZOcZ4eUx/w6 0CWyq9wSVVasfOGGb1i7nfu7JaNsgnu2zSvoueu9cL19deO3APZlqF1MBzSPEjn29Q EznMZUc+FB2ASK7KrH8/KAgv6ow/jmw2BrOJvQlQ615ZEpeTCoLT2Uxw+oiLBzrn81 73HYf45rrsxen1wE0Y02kHxy6s+Nm7us1Q42Xb6PlNiJKK5Ao4FAOmplut6xL3G0HS bSZmmetHKal+nDQaOH2uxr+yK0az1imcvGD4DQm9eJnuzxZkvpI93AsB6WWNBN/UR3 U1FSriyFPCXhg== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 08/21] drm/tegra: Prepare to dynamic dma-buf locking specification Date: Wed, 31 Aug 2022 18:37:44 +0300 Message-Id: <20220831153757.97381-9-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Prepare Tegra DRM driver to the common dynamic dma-buf locking convention by starting to use the unlocked versions of dma-buf API functions. 
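The CPU-mapping half of this change follows the same pattern. A stripped-down sketch of what tegra_bo_mmap()/tegra_bo_munmap() now do for imported buffers (the function names below are invented) looks like this:

        #include <linux/dma-buf.h>
        #include <linux/iosys-map.h>

        /* Sketch: kernel mapping of an imported dma-buf from a context that
         * does not hold the reservation lock; dma_buf_vmap_unlocked() takes
         * and drops dmabuf->resv internally.
         */
        static void *example_import_kmap(struct dma_buf *dmabuf, struct iosys_map *map)
        {
                if (dma_buf_vmap_unlocked(dmabuf, map))
                        return NULL;

                return map->vaddr;
        }

        static void example_import_kunmap(struct dma_buf *dmabuf, struct iosys_map *map)
        {
                dma_buf_vunmap_unlocked(dmabuf, map);
        }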
Signed-off-by: Dmitry Osipenko Acked-by: Christian König --- drivers/gpu/drm/tegra/gem.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c index 81991090adcc..b09b8ab40ae4 100644 --- a/drivers/gpu/drm/tegra/gem.c +++ b/drivers/gpu/drm/tegra/gem.c @@ -84,7 +84,7 @@ static struct host1x_bo_mapping *tegra_bo_pin(struct device *dev, struct host1x_ goto free; } - map->sgt = dma_buf_map_attachment(map->attach, direction); + map->sgt = dma_buf_map_attachment_unlocked(map->attach, direction); if (IS_ERR(map->sgt)) { dma_buf_detach(buf, map->attach); err = PTR_ERR(map->sgt); @@ -160,7 +160,8 @@ static struct host1x_bo_mapping *tegra_bo_pin(struct device *dev, struct host1x_ static void tegra_bo_unpin(struct host1x_bo_mapping *map) { if (map->attach) { - dma_buf_unmap_attachment(map->attach, map->sgt, map->direction); + dma_buf_unmap_attachment_unlocked(map->attach, map->sgt, + map->direction); dma_buf_detach(map->attach->dmabuf, map->attach); } else { dma_unmap_sgtable(map->dev, map->sgt, map->direction, 0); @@ -181,7 +182,7 @@ static void *tegra_bo_mmap(struct host1x_bo *bo) if (obj->vaddr) { return obj->vaddr; } else if (obj->gem.import_attach) { - ret = dma_buf_vmap(obj->gem.import_attach->dmabuf, &map); + ret = dma_buf_vmap_unlocked(obj->gem.import_attach->dmabuf, &map); return ret ? NULL : map.vaddr; } else { return vmap(obj->pages, obj->num_pages, VM_MAP, @@ -197,7 +198,7 @@ static void tegra_bo_munmap(struct host1x_bo *bo, void *addr) if (obj->vaddr) return; else if (obj->gem.import_attach) - dma_buf_vunmap(obj->gem.import_attach->dmabuf, &map); + dma_buf_vunmap_unlocked(obj->gem.import_attach->dmabuf, &map); else vunmap(addr); } @@ -461,7 +462,7 @@ static struct tegra_bo *tegra_bo_import(struct drm_device *drm, get_dma_buf(buf); - bo->sgt = dma_buf_map_attachment(attach, DMA_TO_DEVICE); + bo->sgt = dma_buf_map_attachment_unlocked(attach, DMA_TO_DEVICE); if (IS_ERR(bo->sgt)) { err = PTR_ERR(bo->sgt); goto detach; @@ -479,7 +480,7 @@ static struct tegra_bo *tegra_bo_import(struct drm_device *drm, detach: if (!IS_ERR_OR_NULL(bo->sgt)) - dma_buf_unmap_attachment(attach, bo->sgt, DMA_TO_DEVICE); + dma_buf_unmap_attachment_unlocked(attach, bo->sgt, DMA_TO_DEVICE); dma_buf_detach(buf, attach); dma_buf_put(buf); @@ -508,8 +509,8 @@ void tegra_bo_free_object(struct drm_gem_object *gem) tegra_bo_iommu_unmap(tegra, bo); if (gem->import_attach) { - dma_buf_unmap_attachment(gem->import_attach, bo->sgt, - DMA_TO_DEVICE); + dma_buf_unmap_attachment_unlocked(gem->import_attach, bo->sgt, + DMA_TO_DEVICE); drm_prime_gem_destroy(gem, NULL); } else { tegra_bo_free(gem->dev, bo); From patchwork Wed Aug 31 15:37:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 601713 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4CEA4C0502C for ; Wed, 31 Aug 2022 15:40:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232122AbiHaPkd (ORCPT ); Wed, 31 Aug 2022 11:40:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36184 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231994AbiHaPkI (ORCPT ); Wed, 31 Aug 2022 11:40:08 -0400 Received: from 
madras.collabora.co.uk (madras.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e5ab]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BA6ABB24; Wed, 31 Aug 2022 08:39:44 -0700 (PDT) Received: from dimapc.. (109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id CAE176601DF4; Wed, 31 Aug 2022 16:39:39 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960382; bh=zNh//xUsR9gO6Pa17YYOufQr/5kZYwJDvTfSDOMKvjg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fJCGtZIIfrLTvH0IUhpK8WRZ7Udt9M91nUZFLNu5gw+APX3XfXzEfXbLmwe9px7IZ SszS4MzmoYZbVzk2XZJqsCgn/baTwXkFCH+wD9JWaNWByzmWYNIVrp6XT4AwyaEhEl 7W0Z/z5BKz+/9r7fsbVaqvkzyRgAbKSWRzxcxraKqrfB173l+gJ7Tal8JoWiWPwvq6 PSWSkqY1qaGJ0K09/f9HACABp3NsjzCQoHz3ETH2bpMK/Me0VEAiHKtJ3zpPSifbv+ oVrI3BCDQ4BSy2Xuc8LFhkBP1hrfFJBcHchbythi7iaN8zCYPwmXQpFVDlEpDFm4Q4 fIR7QTTM47FMA== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 09/21] drm/etnaviv: Prepare to dynamic dma-buf locking specification Date: Wed, 31 Aug 2022 18:37:45 +0300 Message-Id: <20220831153757.97381-10-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Prepare Etnaviv driver to the common dynamic dma-buf locking convention by starting to use the unlocked versions of dma-buf API functions. 
Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c index 3fa2da149639..7031db145a77 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c @@ -65,7 +65,7 @@ static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj) struct iosys_map map = IOSYS_MAP_INIT_VADDR(etnaviv_obj->vaddr); if (etnaviv_obj->vaddr) - dma_buf_vunmap(etnaviv_obj->base.import_attach->dmabuf, &map); + dma_buf_vunmap_unlocked(etnaviv_obj->base.import_attach->dmabuf, &map); /* Don't drop the pages for imported dmabuf, as they are not * ours, just free the array we allocated: From patchwork Wed Aug 31 15:37:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 601712 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 114D5C0502C for ; Wed, 31 Aug 2022 15:40:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232145AbiHaPk4 (ORCPT ); Wed, 31 Aug 2022 11:40:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37046 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232078AbiHaPkR (ORCPT ); Wed, 31 Aug 2022 11:40:17 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AA09B9FC5; Wed, 31 Aug 2022 08:39:49 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id E84C66601DF5; Wed, 31 Aug 2022 16:39:42 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960385; bh=/oF5kgcekO75Xv0wAeHKpq3Pgp50WSiycM6oE7wSKRk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Sl51L1bR+Y9SxOT89Q/0nVkX+TLwDp7OcA2pgpm4I/3SumRue+tVnK1q3Vw+RbjOE Xl6atoNl+C31e92BVfqyqVAKhtHD6pK31qCu8j0Uno1FBRl+Xnj8l7hTvW36MZfXLi kMxfgy3gfal5jhWoFiCx8em30aMFpWCXq0oZzUxpaNj57iukLilYisYbjMXYFNRDzp tG9ew+QPiTBRtgqpD4xjf85rqFOoNtYODtXw+U5QgdYnxMPA+B0E74wOPuZ76PdATv levx+u2+qDHOjLZIoeA6U5Vu7faXHIfhzUMg9Cz0ga92kx/uZARd8jrxI2uVDGG2GD SZaDsJp2+o7cw== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 10/21] RDMA/umem: Prepare to dynamic dma-buf locking specification Date: Wed, 31 Aug 2022 18:37:46 +0300 Message-Id: <20220831153757.97381-11-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Prepare InfiniBand drivers to the common dynamic dma-buf locking convention by starting to use the unlocked versions of dma-buf API functions. 
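A hedged sketch of the corresponding import path under the new convention; the example_* names are assumptions, while the dma-buf and error-handling calls are real kernel API:

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>

/*
 * Attach and detach are called without the reservation lock, while the
 * _unlocked() mapping helper takes the lock internally around the
 * exporter's map_dma_buf() callback.
 */
static struct sg_table *example_import_map(struct device *dev,
                                           struct dma_buf *dmabuf,
                                           struct dma_buf_attachment **attach_out)
{
        struct dma_buf_attachment *attach;
        struct sg_table *sgt;

        attach = dma_buf_attach(dmabuf, dev);
        if (IS_ERR(attach))
                return ERR_CAST(attach);

        sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
        if (IS_ERR(sgt)) {
                dma_buf_detach(dmabuf, attach);
                return sgt;
        }

        *attach_out = attach;
        return sgt;
}
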
Signed-off-by: Dmitry Osipenko Acked-by: Christian König --- drivers/infiniband/core/umem_dmabuf.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c index 04c04e6d24c3..43b26bc12288 100644 --- a/drivers/infiniband/core/umem_dmabuf.c +++ b/drivers/infiniband/core/umem_dmabuf.c @@ -26,7 +26,8 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf) if (umem_dmabuf->sgt) goto wait_fence; - sgt = dma_buf_map_attachment(umem_dmabuf->attach, DMA_BIDIRECTIONAL); + sgt = dma_buf_map_attachment_unlocked(umem_dmabuf->attach, + DMA_BIDIRECTIONAL); if (IS_ERR(sgt)) return PTR_ERR(sgt); @@ -102,8 +103,8 @@ void ib_umem_dmabuf_unmap_pages(struct ib_umem_dmabuf *umem_dmabuf) umem_dmabuf->last_sg_trim = 0; } - dma_buf_unmap_attachment(umem_dmabuf->attach, umem_dmabuf->sgt, - DMA_BIDIRECTIONAL); + dma_buf_unmap_attachment_unlocked(umem_dmabuf->attach, umem_dmabuf->sgt, + DMA_BIDIRECTIONAL); umem_dmabuf->sgt = NULL; } From patchwork Wed Aug 31 15:37:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 601711 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 78077ECAAD1 for ; Wed, 31 Aug 2022 15:41:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231994AbiHaPlU (ORCPT ); Wed, 31 Aug 2022 11:41:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34916 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232060AbiHaPkh (ORCPT ); Wed, 31 Aug 2022 11:40:37 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e5ab]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BF4EB3B949; Wed, 31 Aug 2022 08:40:09 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 172376601EAC; Wed, 31 Aug 2022 16:39:46 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960389; bh=gdOsnqn+hkC/PYCc3SOTg9WbAqCE6tugxLVX0sLlB6k=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=WLF6bT+wDRofJAtDZk+i2VmH8A0ApZxbJPdBENc2WsfMj9DWX/4nsgRPP/X0h2cPc Q+z/ITW7+8jBiClYSOevkqmQMj5ZY4DTpwBrki3X+EQRNLP2wzw8WK7zREcI2bDSNX CD260pRQ/h3fXPMJUtuf2sIDljhle9yyNj7vamirWCHaWz+fe7J5lpxcHG6vWAGz9e L6hRXO87US7+C6URv9MmyZHsXIk5YtbmEYykRTw+ak23HugSPYn+3S1zv1CSdV5eSd YGI88Y9TXgnQiNuejbDWZszW0ZXlUt/kMba0/D//NaVvJm53xfBMrQT/MhpZ4I8oIk bkkBH5QjZiXmw== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 11/21] misc: fastrpc: Prepare to dynamic dma-buf locking specification Date: Wed, 31 Aug 2022 18:37:47 +0300 Message-Id: <20220831153757.97381-12-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Prepare fastrpc to the common dynamic dma-buf locking convention by starting to use the unlocked versions of dma-buf API functions. 
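The release side mirrors this ordering; a minimal sketch with an assumed example_* name and real dma-buf calls:

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>

/*
 * Teardown order used by the fastrpc change: unmap without holding the
 * reservation lock (the helper takes it), then detach the attachment and
 * drop the dma-buf reference.
 */
static void example_import_unmap(struct dma_buf *dmabuf,
                                 struct dma_buf_attachment *attach,
                                 struct sg_table *sgt)
{
        dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL);
        dma_buf_detach(dmabuf, attach);
        dma_buf_put(dmabuf);
}
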
Signed-off-by: Dmitry Osipenko Acked-by: Christian König Acked-by: Srinivas Kandagatla Acked-by: Srinivas Kandagatla --- drivers/misc/fastrpc.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index 93ebd174d848..6fcfb2e9f7a7 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -310,8 +310,8 @@ static void fastrpc_free_map(struct kref *ref) return; } } - dma_buf_unmap_attachment(map->attach, map->table, - DMA_BIDIRECTIONAL); + dma_buf_unmap_attachment_unlocked(map->attach, map->table, + DMA_BIDIRECTIONAL); dma_buf_detach(map->buf, map->attach); dma_buf_put(map->buf); } @@ -726,7 +726,7 @@ static int fastrpc_map_create(struct fastrpc_user *fl, int fd, goto attach_err; } - map->table = dma_buf_map_attachment(map->attach, DMA_BIDIRECTIONAL); + map->table = dma_buf_map_attachment_unlocked(map->attach, DMA_BIDIRECTIONAL); if (IS_ERR(map->table)) { err = PTR_ERR(map->table); goto map_err; From patchwork Wed Aug 31 15:37:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 602063 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 21EF1C0502A for ; Wed, 31 Aug 2022 15:41:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232105AbiHaPlP (ORCPT ); Wed, 31 Aug 2022 11:41:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33944 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230350AbiHaPke (ORCPT ); Wed, 31 Aug 2022 11:40:34 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BFA873C8FB; Wed, 31 Aug 2022 08:40:10 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 447B46601DEF; Wed, 31 Aug 2022 16:39:49 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960392; bh=iqiuG6elCAPxC5quyUMlq7dtDwEk3iewWUdCPSj6Wh0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iwuMv9xLnfekm41TGBF3RPAoCqh5F5XUSOeklx7AUykOfmI/q1IknjpSNZ1sgwms9 FH+iKb1PjnddqXSDd3gPdc3S4x7qKBGbECxVttU2VoIDti22io9srpaLNYzL7JF2bK H5NL/0KIRX3oSAiKn7bB1kLUQB0IZrieWB210PjgAQaf2toiXVnABV2bjYkkDFQCkQ TYqjc1E0NRgYFEOJBIeZgC385lqCtTKP3vOTqiNSP82Rdl12QbWUPyq1Jvkm++oAwp kG1w1Gq/JuMLvurW/hngpCwCbvox273LAkvCIiy0BwT2EkiZwLv9+iRcEqAVzc9mFq UYHJWGpANzUug== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 12/21] xen/gntdev: Prepare to dynamic dma-buf locking specification Date: Wed, 31 Aug 2022 18:37:48 +0300 Message-Id: <20220831153757.97381-13-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Prepare gntdev driver to the common dynamic dma-buf locking convention by starting to use the unlocked versions of dma-buf API functions. 
Signed-off-by: Dmitry Osipenko --- drivers/xen/gntdev-dmabuf.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c index 940e5e9e8a54..4440e626b797 100644 --- a/drivers/xen/gntdev-dmabuf.c +++ b/drivers/xen/gntdev-dmabuf.c @@ -600,7 +600,7 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev, gntdev_dmabuf->u.imp.attach = attach; - sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); + sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL); if (IS_ERR(sgt)) { ret = ERR_CAST(sgt); goto fail_detach; @@ -658,7 +658,7 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev, fail_end_access: dmabuf_imp_end_foreign_access(gntdev_dmabuf->u.imp.refs, count); fail_unmap: - dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL); + dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL); fail_detach: dma_buf_detach(dma_buf, attach); fail_free_obj: @@ -708,8 +708,8 @@ static int dmabuf_imp_release(struct gntdev_dmabuf_priv *priv, u32 fd) attach = gntdev_dmabuf->u.imp.attach; if (gntdev_dmabuf->u.imp.sgt) - dma_buf_unmap_attachment(attach, gntdev_dmabuf->u.imp.sgt, - DMA_BIDIRECTIONAL); + dma_buf_unmap_attachment_unlocked(attach, gntdev_dmabuf->u.imp.sgt, + DMA_BIDIRECTIONAL); dma_buf = attach->dmabuf; dma_buf_detach(attach->dmabuf, attach); dma_buf_put(dma_buf); From patchwork Wed Aug 31 15:37:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 601709 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B0BEC54EE9 for ; Wed, 31 Aug 2022 15:41:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230478AbiHaPlj (ORCPT ); Wed, 31 Aug 2022 11:41:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34324 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231648AbiHaPlJ (ORCPT ); Wed, 31 Aug 2022 11:41:09 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 220AC76775; Wed, 31 Aug 2022 08:40:21 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 68D6C6601EBA; Wed, 31 Aug 2022 16:39:52 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960395; bh=yuFPPZLpZu5HiT00pXyoIXNbNU+jh/z2zDqW9/uKf/o=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=N/B+b8bOYPtHEQfD37MFztmBLUQG1Fw59ElRW+eMzo5mBqnLPKKxtrTtpJ5b3A9Re TeoegHw8MNPuFRGxmerjQaX0cftS5KdDOB9bjKMJkW1d9QrMiRYxurgsUEY8dMXDlV gGytpQShxs12u0FLdR+l9TU5JAf3nC+nIGtzgJjObxtxJA5YlNZNeGstwA44tRjNIj LGz6cxeSNsncUszPcRWjDRSIZufs697VuN6aMG7EXTUJzbo5FFMYbEhJO0nPtrwhaa /H3pVyYysLeApxxArprlFRxS4YnCYkIqYz491f3i6b2n5C1Uoq4VSnPUybsmx9GWcd Z+ig7XvygSbGA== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 13/21] media: videobuf2: Prepare to dynamic dma-buf locking specification Date: Wed, 31 Aug 2022 18:37:49 +0300 Message-Id: <20220831153757.97381-14-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Prepare V4L2 memory allocators to the common dynamic dma-buf locking convention by starting to use the unlocked versions of dma-buf API functions. 
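A minimal sketch of the allocator pattern touched here, with an assumed example_buf type for illustration:

#include <linux/dma-buf.h>
#include <linux/iosys-map.h>

/* Assumed buffer state, mirroring the vb2 allocators. */
struct example_buf {
        struct dma_buf *dbuf;
        void *vaddr;
};

/*
 * The kernel mapping is created lazily with the unlocked helper and cached
 * in the buffer structure for later lookups.
 */
static void *example_buf_vaddr(struct example_buf *buf)
{
        struct iosys_map map;

        if (buf->vaddr)
                return buf->vaddr;

        if (!dma_buf_vmap_unlocked(buf->dbuf, &map))
                buf->vaddr = map.vaddr;

        return buf->vaddr;
}
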
Acked-by: Tomasz Figa Signed-off-by: Dmitry Osipenko Acked-by: Christian König --- drivers/media/common/videobuf2/videobuf2-dma-contig.c | 11 ++++++----- drivers/media/common/videobuf2/videobuf2-dma-sg.c | 8 ++++---- drivers/media/common/videobuf2/videobuf2-vmalloc.c | 6 +++--- 3 files changed, 13 insertions(+), 12 deletions(-) diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c index 678b359717c4..79f4d8301fbb 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c @@ -101,7 +101,7 @@ static void *vb2_dc_vaddr(struct vb2_buffer *vb, void *buf_priv) if (buf->db_attach) { struct iosys_map map; - if (!dma_buf_vmap(buf->db_attach->dmabuf, &map)) + if (!dma_buf_vmap_unlocked(buf->db_attach->dmabuf, &map)) buf->vaddr = map.vaddr; return buf->vaddr; @@ -711,7 +711,7 @@ static int vb2_dc_map_dmabuf(void *mem_priv) } /* get the associated scatterlist for this buffer */ - sgt = dma_buf_map_attachment(buf->db_attach, buf->dma_dir); + sgt = dma_buf_map_attachment_unlocked(buf->db_attach, buf->dma_dir); if (IS_ERR(sgt)) { pr_err("Error getting dmabuf scatterlist\n"); return -EINVAL; @@ -722,7 +722,8 @@ static int vb2_dc_map_dmabuf(void *mem_priv) if (contig_size < buf->size) { pr_err("contiguous chunk is too small %lu/%lu\n", contig_size, buf->size); - dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir); + dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt, + buf->dma_dir); return -EFAULT; } @@ -750,10 +751,10 @@ static void vb2_dc_unmap_dmabuf(void *mem_priv) } if (buf->vaddr) { - dma_buf_vunmap(buf->db_attach->dmabuf, &map); + dma_buf_vunmap_unlocked(buf->db_attach->dmabuf, &map); buf->vaddr = NULL; } - dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir); + dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt, buf->dma_dir); buf->dma_addr = 0; buf->dma_sgt = NULL; diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c index fa69158a65b1..36ecdea8d707 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c @@ -309,7 +309,7 @@ static void *vb2_dma_sg_vaddr(struct vb2_buffer *vb, void *buf_priv) if (!buf->vaddr) { if (buf->db_attach) { - ret = dma_buf_vmap(buf->db_attach->dmabuf, &map); + ret = dma_buf_vmap_unlocked(buf->db_attach->dmabuf, &map); buf->vaddr = ret ? 
NULL : map.vaddr; } else { buf->vaddr = vm_map_ram(buf->pages, buf->num_pages, -1); @@ -565,7 +565,7 @@ static int vb2_dma_sg_map_dmabuf(void *mem_priv) } /* get the associated scatterlist for this buffer */ - sgt = dma_buf_map_attachment(buf->db_attach, buf->dma_dir); + sgt = dma_buf_map_attachment_unlocked(buf->db_attach, buf->dma_dir); if (IS_ERR(sgt)) { pr_err("Error getting dmabuf scatterlist\n"); return -EINVAL; @@ -594,10 +594,10 @@ static void vb2_dma_sg_unmap_dmabuf(void *mem_priv) } if (buf->vaddr) { - dma_buf_vunmap(buf->db_attach->dmabuf, &map); + dma_buf_vunmap_unlocked(buf->db_attach->dmabuf, &map); buf->vaddr = NULL; } - dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir); + dma_buf_unmap_attachment_unlocked(buf->db_attach, sgt, buf->dma_dir); buf->dma_sgt = NULL; } diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c index 948152f1596b..7831bf545874 100644 --- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c +++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c @@ -376,7 +376,7 @@ static int vb2_vmalloc_map_dmabuf(void *mem_priv) struct iosys_map map; int ret; - ret = dma_buf_vmap(buf->dbuf, &map); + ret = dma_buf_vmap_unlocked(buf->dbuf, &map); if (ret) return -EFAULT; buf->vaddr = map.vaddr; @@ -389,7 +389,7 @@ static void vb2_vmalloc_unmap_dmabuf(void *mem_priv) struct vb2_vmalloc_buf *buf = mem_priv; struct iosys_map map = IOSYS_MAP_INIT_VADDR(buf->vaddr); - dma_buf_vunmap(buf->dbuf, &map); + dma_buf_vunmap_unlocked(buf->dbuf, &map); buf->vaddr = NULL; } @@ -399,7 +399,7 @@ static void vb2_vmalloc_detach_dmabuf(void *mem_priv) struct iosys_map map = IOSYS_MAP_INIT_VADDR(buf->vaddr); if (buf->vaddr) - dma_buf_vunmap(buf->dbuf, &map); + dma_buf_vunmap_unlocked(buf->dbuf, &map); kfree(buf); } From patchwork Wed Aug 31 15:37:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 602062 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 04D11C0502A for ; Wed, 31 Aug 2022 15:41:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232077AbiHaPlc (ORCPT ); Wed, 31 Aug 2022 11:41:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35900 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232080AbiHaPkz (ORCPT ); Wed, 31 Aug 2022 11:40:55 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e5ab]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 43D335A836; Wed, 31 Aug 2022 08:40:15 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 8DA906601EAB; Wed, 31 Aug 2022 16:39:55 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960398; bh=ZApSbqO1FjV2w77zy5U4R3HD15M9STCl+wb5U44H4Z8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=g8zRYWTRNkBNWGT26wLdHkOjb+QLle2qMZB0VpEhsnVL5lE3kgZrv/iM6C2f9B4fd P9X6CMh7Cfz8ATl64IX7dIUtyuzW8+xjOGv1ZR0O3dG87VIJX3UMwxE5fmAUFzt4EB 7yIhl9XIUrKUBpbjdOuzYBHKoXnQDcON3d742kO+Z0m/+f92IoSv4zHVktRjLT7biX 0rRP6pvDVRfV6QX0rCrjPGEeC0fVEtmFk7LfGf5Z7ZRKxMxpqQ2EHi9kAhVYt0Mc4t TnUySs7YPa7DjGLVd7e3tdNShYLEIo6U173RqyLqrk/v+EebpiegL610LyUTF/vHcR ZDZKPaW5A7b0A== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 14/21] media: tegra-vde: Prepare to dynamic dma-buf locking specification Date: Wed, 31 Aug 2022 18:37:50 +0300 Message-Id: <20220831153757.97381-15-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Prepare Tegra video decoder driver to the common dynamic dma-buf locking convention by starting to use the unlocked versions of dma-buf API functions. 
Signed-off-by: Dmitry Osipenko --- drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c b/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c index 69c346148070..1c5b94989aec 100644 --- a/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c +++ b/drivers/media/platform/nvidia/tegra-vde/dmabuf-cache.c @@ -38,7 +38,7 @@ static void tegra_vde_release_entry(struct tegra_vde_cache_entry *entry) if (entry->vde->domain) tegra_vde_iommu_unmap(entry->vde, entry->iova); - dma_buf_unmap_attachment(entry->a, entry->sgt, entry->dma_dir); + dma_buf_unmap_attachment_unlocked(entry->a, entry->sgt, entry->dma_dir); dma_buf_detach(dmabuf, entry->a); dma_buf_put(dmabuf); @@ -102,7 +102,7 @@ int tegra_vde_dmabuf_cache_map(struct tegra_vde *vde, goto err_unlock; } - sgt = dma_buf_map_attachment(attachment, dma_dir); + sgt = dma_buf_map_attachment_unlocked(attachment, dma_dir); if (IS_ERR(sgt)) { dev_err(dev, "Failed to get dmabufs sg_table\n"); err = PTR_ERR(sgt); @@ -152,7 +152,7 @@ int tegra_vde_dmabuf_cache_map(struct tegra_vde *vde, err_free: kfree(entry); err_unmap: - dma_buf_unmap_attachment(attachment, sgt, dma_dir); + dma_buf_unmap_attachment_unlocked(attachment, sgt, dma_dir); err_detach: dma_buf_detach(dmabuf, attachment); err_unlock: From patchwork Wed Aug 31 15:37:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 601710 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 412E3ECAAD4 for ; Wed, 31 Aug 2022 15:41:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232080AbiHaPlf (ORCPT ); Wed, 31 Aug 2022 11:41:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34270 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232078AbiHaPlD (ORCPT ); Wed, 31 Aug 2022 11:41:03 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e5ab]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A0E695FF77; Wed, 31 Aug 2022 08:40:16 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id B09946601E0B; Wed, 31 Aug 2022 16:39:58 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960401; bh=gIw0LiJmbclTWGOdwrfcgGOZXkgbb+7eQ2rM8JMcCFs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=eDTg8soNjJsESfVNNrUcTFbA9XH0ERxZfrDcBM3uq+ZSIdqUh20nNOEarUQbWs34k PFovJnbPzxF3EogbvS9BRbnL/c7KFsOaMbXMGkLz+0smFOPc1ddV26U0FR64kswUkc jwaxe7WMdo6rb5rFo/X5ftYeCZx7KkQRBMpqNqdC0Z1EX7fxh+ptv6eyTCVs1tvNW2 mh+y5yP4InDFFgDKc62A/v7x4OJBD9p6Tta3U4v0sv2KP03fDz1R1ye/kDMTZRkMFm V4whnjGGGtVsCrWU/YGsXT6X/6PNrrJM992r/7+W/w6Xmct9aupP7YCY2SVTZJP3aR 5kYLzMmbuKgYg== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 15/21] dma-buf: Move dma_buf_vmap() to dynamic locking specification Date: Wed, 31 Aug 2022 18:37:51 +0300 Message-Id: <20220831153757.97381-16-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Move dma_buf_vmap/vunmap_unlocked() functions to the dynamic locking specification by asserting that the reservation lock is held. 
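A hedged sketch of what the assertion now expects from callers of the locked variants; the example_* name is an assumption:

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/iosys-map.h>

/*
 * Callers of the locked dma_buf_vmap()/dma_buf_vunmap() must hold the
 * reservation lock themselves once the assertions are in place.
 */
static int example_vmap_under_resv(struct dma_buf *dmabuf,
                                   struct iosys_map *map)
{
        int ret;

        dma_resv_lock(dmabuf->resv, NULL);
        ret = dma_buf_vmap(dmabuf, map);  /* dma_resv_assert_held() passes */
        dma_resv_unlock(dmabuf->resv);

        return ret;
}

Callers that cannot hold the lock use dma_buf_vmap_unlocked() instead.
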
Acked-by: Christian König Signed-off-by: Dmitry Osipenko --- drivers/dma-buf/dma-buf.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 51fb69048853..ceea4839c641 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -1449,6 +1449,8 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) if (WARN_ON(!dmabuf)) return -EINVAL; + dma_resv_assert_held(dmabuf->resv); + if (!dmabuf->ops->vmap) return -EINVAL; @@ -1509,6 +1511,8 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) if (WARN_ON(!dmabuf)) return; + dma_resv_assert_held(dmabuf->resv); + BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); BUG_ON(dmabuf->vmapping_counter == 0); BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map)); From patchwork Wed Aug 31 15:37:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 602059 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 22589C0502C for ; Wed, 31 Aug 2022 15:42:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232225AbiHaPmF (ORCPT ); Wed, 31 Aug 2022 11:42:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36812 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229812AbiHaPlS (ORCPT ); Wed, 31 Aug 2022 11:41:18 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A663E9F8D2; Wed, 31 Aug 2022 08:40:32 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id CF7886601E0F; Wed, 31 Aug 2022 16:40:01 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960404; bh=lqmbMZlr5dkrGqFSyrWeGCXHf4B0xmxBPMBh938KoCw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=EAuy4nLlfDO9TVsH5HROK6LfHSaGDAbTbN/zRIX/deEuscp5PL6GdcG1ocb0FhmQq vm/tCnYg7II5mjyvCT2fAryNdNnYPLx1YGmva96OnxCv9p2AcQMgO7a2Ry//qij6YL OCAZmhx5VYy84KsWpiKh8pJK1Lx8enKGiM/CXGR2y6Sz/asIZccaqfjaY+To/VXUBy wnuOMp2zUhytPXc3JIXdgDouOr2haYbmHySGJZOtv6kt5/EbRzbKMJSLesOYtyQl0G PMk8yBLSlC+d9CVKa/v4OlzyQG5H1mlA/NRtsILc5FyvrEewXzVmiTT5t/GforGdo9 Z/3zW6DvacAtw== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 16/21] dma-buf: Move dma_buf_attach() to dynamic locking specification Date: Wed, 31 Aug 2022 18:37:52 +0300 Message-Id: <20220831153757.97381-17-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Move dma-buf attachment API functions to the dynamic locking specification by taking the reservation lock around the mapping operations. The strict locking convention prevents deadlock situations for dma-buf importers and exporters. 
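For the exporter side, a minimal sketch of a pin() callback under this convention; the example_* name is an assumption and the actual pinning is exporter-specific:

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>

/*
 * Under the dynamic locking convention the core already holds the
 * reservation lock when invoking pin(), so the callback only asserts it
 * and must not try to relock.
 */
static int example_exporter_pin(struct dma_buf_attachment *attach)
{
        dma_resv_assert_held(attach->dmabuf->resv);

        /* pin the backing storage here; exporter-specific and omitted */

        return 0;
}
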
Signed-off-by: Dmitry Osipenko Reviewed-by: Christian König --- drivers/dma-buf/dma-buf.c | 20 ++++++++------------ 1 file changed, 8 insertions(+), 12 deletions(-) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index ceea4839c641..073942bf5ae9 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -858,8 +858,8 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, dma_buf_is_dynamic(dmabuf)) { struct sg_table *sgt; + dma_resv_lock(attach->dmabuf->resv, NULL); if (dma_buf_is_dynamic(attach->dmabuf)) { - dma_resv_lock(attach->dmabuf->resv, NULL); ret = dmabuf->ops->pin(attach); if (ret) goto err_unlock; @@ -872,8 +872,7 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, ret = PTR_ERR(sgt); goto err_unpin; } - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_unlock(attach->dmabuf->resv); + dma_resv_unlock(attach->dmabuf->resv); attach->sgt = sgt; attach->dir = DMA_BIDIRECTIONAL; } @@ -889,8 +888,7 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, dmabuf->ops->unpin(attach); err_unlock: - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_unlock(attach->dmabuf->resv); + dma_resv_unlock(attach->dmabuf->resv); dma_buf_detach(dmabuf, attach); return ERR_PTR(ret); @@ -936,21 +934,19 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach) if (WARN_ON(!dmabuf || !attach)) return; + dma_resv_lock(attach->dmabuf->resv, NULL); + if (attach->sgt) { - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_lock(attach->dmabuf->resv, NULL); __unmap_dma_buf(attach, attach->sgt, attach->dir); - if (dma_buf_is_dynamic(attach->dmabuf)) { + if (dma_buf_is_dynamic(attach->dmabuf)) dmabuf->ops->unpin(attach); - dma_resv_unlock(attach->dmabuf->resv); - } } - - dma_resv_lock(dmabuf->resv, NULL); list_del(&attach->node); + dma_resv_unlock(dmabuf->resv); + if (dmabuf->ops->detach) dmabuf->ops->detach(dmabuf, attach); From patchwork Wed Aug 31 15:37:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 601707 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E9EEFECAAD1 for ; Wed, 31 Aug 2022 15:42:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232060AbiHaPmS (ORCPT ); Wed, 31 Aug 2022 11:42:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35922 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232131AbiHaPld (ORCPT ); Wed, 31 Aug 2022 11:41:33 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BB93D792E1; Wed, 31 Aug 2022 08:40:36 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id ECE2C6601DF6; Wed, 31 Aug 2022 16:40:04 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960407; bh=tqdR/oZFKJHCKMAPiT2kfONL23uSAzfC/46PfISfXXw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Y0fbl/3rC8aDDsV0MV7a9HOp3kNjBbJX9SYbDtxmYtk8yjvc3rkHWo5M8ppTQpiKa ZzUliICistENhQk2agFlqmQiTp9XBNKykfoC0xrSjlabl/sYIiptJ55Vd9WhuB5sHF M0epvMhwyEmDbwWQB9RhA7iBFtTNh5z+Kg66Z9GjBDMpjKs7r06OoiiUn/FWcoOrZ7 PuA498YAz+IjOVSyibs7Gaqf/MxVpZRU7bCAUqLOVJRCxTQXuEcSR+Uo1YTTXj70x5 r85mxWtpRWHr89Anv7IJ3wzhus4eLyo4K7PY2M2YcPnS7HNoFcR7R4AzBZiWm3uVdY IqAW+I5pxeilA== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 17/21] dma-buf: Move dma_buf_map_attachment() to dynamic locking specification Date: Wed, 31 Aug 2022 18:37:53 +0300 Message-Id: <20220831153757.97381-18-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Move dma-buf attachment mapping functions to the dynamic locking specification by asserting that the reservation lock is held. 
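A sketch of what this enables on the importer side: several dma-buf operations batched under one reservation-lock critical section. The example_* name is an assumption; the dma-buf calls are real API:

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/dma-resv.h>
#include <linux/err.h>
#include <linux/iosys-map.h>

/*
 * With the locked variants asserting the reservation lock, an importer can
 * combine attachment mapping and vmapping in a single critical section
 * instead of locking twice.
 */
static int example_map_and_vmap(struct dma_buf_attachment *attach,
                                struct sg_table **sgt_out,
                                struct iosys_map *map)
{
        struct dma_buf *dmabuf = attach->dmabuf;
        struct sg_table *sgt;
        int ret;

        dma_resv_lock(dmabuf->resv, NULL);

        sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
        if (IS_ERR(sgt)) {
                ret = PTR_ERR(sgt);
                goto unlock;
        }

        ret = dma_buf_vmap(dmabuf, map);
        if (ret) {
                dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
                goto unlock;
        }

        *sgt_out = sgt;
unlock:
        dma_resv_unlock(dmabuf->resv);
        return ret;
}
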
Signed-off-by: Dmitry Osipenko Reviewed-by: Christian König --- drivers/dma-buf/dma-buf.c | 10 ++-------- 1 file changed, 2 insertions(+), 8 deletions(-) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 073942bf5ae9..8e928fe6e8df 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -1037,8 +1037,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, if (WARN_ON(!attach || !attach->dmabuf)) return ERR_PTR(-EINVAL); - if (dma_buf_attachment_is_dynamic(attach)) - dma_resv_assert_held(attach->dmabuf->resv); + dma_resv_assert_held(attach->dmabuf->resv); if (attach->sgt) { /* @@ -1053,7 +1052,6 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, } if (dma_buf_is_dynamic(attach->dmabuf)) { - dma_resv_assert_held(attach->dmabuf->resv); if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) { r = attach->dmabuf->ops->pin(attach); if (r) @@ -1142,15 +1140,11 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) return; - if (dma_buf_attachment_is_dynamic(attach)) - dma_resv_assert_held(attach->dmabuf->resv); + dma_resv_assert_held(attach->dmabuf->resv); if (attach->sgt == sg_table) return; - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_assert_held(attach->dmabuf->resv); - __unmap_dma_buf(attach, sg_table, direction); if (dma_buf_is_dynamic(attach->dmabuf) && From patchwork Wed Aug 31 15:37:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 602060 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D2989ECAAD1 for ; Wed, 31 Aug 2022 15:41:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231425AbiHaPlo (ORCPT ); Wed, 31 Aug 2022 11:41:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37832 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231416AbiHaPlE (ORCPT ); Wed, 31 Aug 2022 11:41:04 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e5ab]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A685765563; Wed, 31 Aug 2022 08:40:18 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 1817B6601F08; Wed, 31 Aug 2022 16:40:08 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960411; bh=wM9CtxO7g+I2MDvrzT3LZe35Eb7d1xKXFtly+o9W8Ak=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=dnGv0uxhtNMBxsdvy3hH+pPM5/F2s6xH4ZhZcnNjO08lYkNBDqZm4iMB3nemHgWFB xB/nRvHipZ2WOZ5Kqpkf56IU/2D3DvGF8Ukp4zZCKoG7ndRQ+r8cINug8sgCQnMTH7 saMR0YeItwmYev/7tzwNgcXa1oRjErFEjeAAZMZPlax5q8kI+a/wG7coEm1oVYHaeI Kw0kAckg2QNpLnwZqNiSOQmKc+EGEjXs1gswAZIvlxq/ggU62BRAwq838pQ6IwjI/9 FK9xl6sHtEO4DnPpF7PMp4LzNmCD4DaNMXlGCvkAiyNSG8GlqH5e+cf1vhxAdtfnJB cCB0V24HbPT+A== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 18/21] dma-buf: Move dma_buf_mmap() to dynamic locking specification Date: Wed, 31 Aug 2022 18:37:54 +0300 Message-Id: <20220831153757.97381-19-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Move dma_buf_mmap() function to the dynamic locking specification by taking the reservation lock. Neither of the today's drivers take the reservation lock within the mmap() callback, hence it's safe to enforce the locking. 
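On the exporter side this means the .mmap() callback now runs with the lock held; a minimal sketch with an assumed example_* name, mapping details omitted:

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/mm.h>

/*
 * dma_buf_mmap() already holds the reservation lock around this call, so
 * the callback must not take it again and can rely on it being held.
 */
static int example_exporter_mmap(struct dma_buf *dmabuf,
                                 struct vm_area_struct *vma)
{
        dma_resv_assert_held(dmabuf->resv);

        /* install vma->vm_ops or remap the pages here; exporter-specific */

        return 0;
}
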
Acked-by: Christian König Signed-off-by: Dmitry Osipenko --- drivers/dma-buf/dma-buf.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 8e928fe6e8df..d9130486cb8f 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -1389,6 +1389,8 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF); int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma, unsigned long pgoff) { + int ret; + if (WARN_ON(!dmabuf || !vma)) return -EINVAL; @@ -1409,7 +1411,11 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma, vma_set_file(vma, dmabuf->file); vma->vm_pgoff = pgoff; - return dmabuf->ops->mmap(dmabuf, vma); + dma_resv_lock(dmabuf->resv, NULL); + ret = dmabuf->ops->mmap(dmabuf, vma); + dma_resv_unlock(dmabuf->resv); + + return ret; } EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); From patchwork Wed Aug 31 15:37:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 602061 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D31F5ECAAD4 for ; Wed, 31 Aug 2022 15:41:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231137AbiHaPlh (ORCPT ); Wed, 31 Aug 2022 11:41:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37824 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229959AbiHaPlE (ORCPT ); Wed, 31 Aug 2022 11:41:04 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e5ab]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E06CB61B05; Wed, 31 Aug 2022 08:40:17 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 3F03A6601DEE; Wed, 31 Aug 2022 16:40:11 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960414; bh=tOl8HnGHuspOaHfQ8kLvPjNpiAx+VJwtG1vjNitDsbY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=kstPveCaJveRBMwtfiKH3ZEeiXoX62OYMZ2fGOLodxxp4RoJkGu4+wX+ZBBPsIclU jC6BGy95i//NUiQnoMQ4erjBSYOBC8Ru++SGHiCT/9wRglO2GVJFcSugeKpeAWgkxM 8JtLuUEtcNQZlNOXOMUPUT5Cj80Xq6sd33axeBXK+rmA+vwkNdDb0AUKjfB6EB3ptw KaBaYu80d/FuWGBMzslNdWxVAePKz+26arVlSAG7NZLi+VvCqcjfUUarsJM71hn2Xo jO/K0Hu5Ruk7Boo4ixyRth/yzAD1hmSV+/zWxE4Zq8LIZSxSuFlUqFURm8A50yCeS1 ycL3RR0VL1zYw== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 19/21] dma-buf: Document dynamic locking convention Date: Wed, 31 Aug 2022 18:37:55 +0300 Message-Id: <20220831153757.97381-20-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Add documentation for the dynamic locking convention. The documentation tells dma-buf API users when they should take the reservation lock and when not. Signed-off-by: Dmitry Osipenko --- Documentation/driver-api/dma-buf.rst | 6 +++ drivers/dma-buf/dma-buf.c | 64 ++++++++++++++++++++++++++++ 2 files changed, 70 insertions(+) diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst index 36a76cbe9095..622b8156d212 100644 --- a/Documentation/driver-api/dma-buf.rst +++ b/Documentation/driver-api/dma-buf.rst @@ -119,6 +119,12 @@ DMA Buffer ioctls .. kernel-doc:: include/uapi/linux/dma-buf.h +DMA-BUF locking convention +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +.. 
kernel-doc:: drivers/dma-buf/dma-buf.c + :doc: locking convention + Kernel Functions and Structures Reference ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index d9130486cb8f..97ce884fad76 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -794,6 +794,70 @@ static struct sg_table * __map_dma_buf(struct dma_buf_attachment *attach, return sg_table; } +/** + * DOC: locking convention + * + * In order to avoid deadlock situations between dma-buf exports and importers, + * all dma-buf API users must follow the common dma-buf locking convention. + * + * Convention for importers + * + * 1. Importers must hold the dma-buf reservation lock when calling these + * functions: + * + * - dma_buf_pin() + * - dma_buf_unpin() + * - dma_buf_map_attachment() + * - dma_buf_unmap_attachment() + * - dma_buf_vmap() + * - dma_buf_vunmap() + * + * 2. Importers must not hold the dma-buf reservation lock when calling these + * functions: + * + * - dma_buf_attach() + * - dma_buf_dynamic_attach() + * - dma_buf_detach() + * - dma_buf_export( + * - dma_buf_fd() + * - dma_buf_get() + * - dma_buf_put() + * - dma_buf_mmap() + * - dma_buf_begin_cpu_access() + * - dma_buf_end_cpu_access() + * - dma_buf_map_attachment_unlocked() + * - dma_buf_unmap_attachment_unlocked() + * - dma_buf_vmap_unlocked() + * - dma_buf_vunmap_unlocked() + * + * Convention for exporters + * + * 1. These &dma_buf_ops callbacks are invoked with unlocked dma-buf + * reservation and exporter can take the lock: + * + * - &dma_buf_ops.attach() + * - &dma_buf_ops.detach() + * - &dma_buf_ops.release() + * - &dma_buf_ops.begin_cpu_access() + * - &dma_buf_ops.end_cpu_access() + * + * 2. These &dma_buf_ops callbacks are invoked with locked dma-buf + * reservation and exporter can't take the lock: + * + * - &dma_buf_ops.pin() + * - &dma_buf_ops.unpin() + * - &dma_buf_ops.map_dma_buf() + * - &dma_buf_ops.unmap_dma_buf() + * - &dma_buf_ops.mmap() + * - &dma_buf_ops.vmap() + * - &dma_buf_ops.vunmap() + * + * 3. Exporters must hold the dma-buf reservation lock when calling these + * functions: + * + * - dma_buf_move_notify() + */ + /** * dma_buf_dynamic_attach - Add the device to dma_buf's attachments list * @dmabuf: [in] buffer to attach device to. From patchwork Wed Aug 31 15:37:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 602058 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7E2A2ECAAD4 for ; Wed, 31 Aug 2022 15:42:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231233AbiHaPmj (ORCPT ); Wed, 31 Aug 2022 11:42:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33950 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232168AbiHaPmJ (ORCPT ); Wed, 31 Aug 2022 11:42:09 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [46.235.227.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8324CDAA12; Wed, 31 Aug 2022 08:40:53 -0700 (PDT) Received: from dimapc.. 
(109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 5D4526601EAE; Wed, 31 Aug 2022 16:40:14 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960417; bh=Vcx/herjUGb9SFY+5nbdG8eUk/jlKDV/4QxlVbI0myY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=H7P2X/MSLjMYUbowKm1td5heFuAv2x2KHLd0zXp2PPEXEjZRF1WY3W4lu/+X3AIB+ OJZ0cQKyXYJjECH6RJQGKxpJ9Wt8yHpnBTv9M0N0egVpNwLGyZqXcw/HLK6MPyimwK 9+YQuLZu7AvFgC02eHi0Awrq3JKzyqGbRnZ1a38cQp+YPZ0WFyK7KrGn7qC5YXq9sz hsUIFpbDbKldDnxkb18PjgpvSh4PHJZyi+BYZhb2viuFdHOPwXlZdQZ4ExXLHRs1C4 rXvU6nakcKOaMklK5RlKvUNqhwzkzMTvqfpXl0ueGpwsVpcWJMYWK35UaYGTLlhrkW w82LTHqG43vwQ== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 20/21] media: videobuf2: Stop using internal dma-buf lock Date: Wed, 31 Aug 2022 18:37:56 +0300 Message-Id: <20220831153757.97381-21-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org All drivers that use dma-bufs have been moved to the updated locking specification and now dma-buf reservation is guaranteed to be locked by importers during the mapping operations. There is no need to take the internal dma-buf lock anymore. Remove locking from the videobuf2 memory allocators. 
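For reference, a sketch of what the simplified exporter-side map_dma_buf() relies on once the internal mutex is gone; the example_attachment type is an assumption and the cache-release step is omitted:

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/dma-resv.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/* Assumed per-attachment state, mirroring the vb2 allocators. */
struct example_attachment {
        struct sg_table sgt;
        enum dma_data_direction dma_dir;
};

/*
 * A map_dma_buf() callback without a private mutex: the core holds the
 * dma-buf reservation lock across map/unmap, which serializes access to
 * the attachment state on its own.
 */
static struct sg_table *
example_ops_map(struct dma_buf_attachment *db_attach,
                enum dma_data_direction dma_dir)
{
        struct example_attachment *attach = db_attach->priv;
        struct sg_table *sgt = &attach->sgt;

        dma_resv_assert_held(db_attach->dmabuf->resv);

        /* return the previously mapped sg table when the direction matches */
        if (attach->dma_dir == dma_dir)
                return sgt;

        /* releasing a previous mapping with another direction is omitted */

        if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0))
                return ERR_PTR(-EIO);

        attach->dma_dir = dma_dir;
        return sgt;
}
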
Acked-by: Tomasz Figa Acked-by: Hans Verkuil Signed-off-by: Dmitry Osipenko Acked-by: Christian König --- drivers/media/common/videobuf2/videobuf2-dma-contig.c | 11 +---------- drivers/media/common/videobuf2/videobuf2-dma-sg.c | 11 +---------- drivers/media/common/videobuf2/videobuf2-vmalloc.c | 11 +---------- 3 files changed, 3 insertions(+), 30 deletions(-) diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c index 79f4d8301fbb..555bd40fa472 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c @@ -382,18 +382,12 @@ static struct sg_table *vb2_dc_dmabuf_ops_map( struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) { struct vb2_dc_attachment *attach = db_attach->priv; - /* stealing dmabuf mutex to serialize map/unmap operations */ - struct mutex *lock = &db_attach->dmabuf->lock; struct sg_table *sgt; - mutex_lock(lock); - sgt = &attach->sgt; /* return previously mapped sg table */ - if (attach->dma_dir == dma_dir) { - mutex_unlock(lock); + if (attach->dma_dir == dma_dir) return sgt; - } /* release any previous cache */ if (attach->dma_dir != DMA_NONE) { @@ -409,14 +403,11 @@ static struct sg_table *vb2_dc_dmabuf_ops_map( if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, DMA_ATTR_SKIP_CPU_SYNC)) { pr_err("failed to map scatterlist\n"); - mutex_unlock(lock); return ERR_PTR(-EIO); } attach->dma_dir = dma_dir; - mutex_unlock(lock); - return sgt; } diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c index 36ecdea8d707..36981a5b5c53 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c @@ -424,18 +424,12 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map( struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) { struct vb2_dma_sg_attachment *attach = db_attach->priv; - /* stealing dmabuf mutex to serialize map/unmap operations */ - struct mutex *lock = &db_attach->dmabuf->lock; struct sg_table *sgt; - mutex_lock(lock); - sgt = &attach->sgt; /* return previously mapped sg table */ - if (attach->dma_dir == dma_dir) { - mutex_unlock(lock); + if (attach->dma_dir == dma_dir) return sgt; - } /* release any previous cache */ if (attach->dma_dir != DMA_NONE) { @@ -446,14 +440,11 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map( /* mapping to the client with new direction */ if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) { pr_err("failed to map scatterlist\n"); - mutex_unlock(lock); return ERR_PTR(-EIO); } attach->dma_dir = dma_dir; - mutex_unlock(lock); - return sgt; } diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c index 7831bf545874..41db707e43a4 100644 --- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c +++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c @@ -267,18 +267,12 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map( struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) { struct vb2_vmalloc_attachment *attach = db_attach->priv; - /* stealing dmabuf mutex to serialize map/unmap operations */ - struct mutex *lock = &db_attach->dmabuf->lock; struct sg_table *sgt; - mutex_lock(lock); - sgt = &attach->sgt; /* return previously mapped sg table */ - if (attach->dma_dir == dma_dir) { - mutex_unlock(lock); + if (attach->dma_dir == dma_dir) return sgt; - } /* release any previous cache */ if 
(attach->dma_dir != DMA_NONE) { @@ -289,14 +283,11 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map( /* mapping to the client with new direction */ if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) { pr_err("failed to map scatterlist\n"); - mutex_unlock(lock); return ERR_PTR(-EIO); } attach->dma_dir = dma_dir; - mutex_unlock(lock); - return sgt; } From patchwork Wed Aug 31 15:37:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 601708 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D39C7C0502A for ; Wed, 31 Aug 2022 15:42:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232206AbiHaPmB (ORCPT ); Wed, 31 Aug 2022 11:42:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36184 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229807AbiHaPlM (ORCPT ); Wed, 31 Aug 2022 11:41:12 -0400 Received: from madras.collabora.co.uk (madras.collabora.co.uk [IPv6:2a00:1098:0:82:1000:25:2eeb:e5ab]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 66D4030F7A; Wed, 31 Aug 2022 08:40:22 -0700 (PDT) Received: from dimapc.. (109-252-119-13.nat.spd-mgts.ru [109.252.119.13]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) (Authenticated sender: dmitry.osipenko) by madras.collabora.co.uk (Postfix) with ESMTPSA id 839826601E58; Wed, 31 Aug 2022 16:40:17 +0100 (BST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1661960420; bh=X6+cHdBcYoofQ/ea0HJ3XSrd+3wQc+6k4P4KoLaHXmw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=CvPSAXgoUaJptHhz6ntZ4HfZFNqNSVu8Uhx5gdFZizZ5wNTkZ2TXBgjmGArOrzTsI EX9IGJoD7YNYICqH7CJHWIxcjxFsSnNnZT5QCOBFfcaSmA0LimW8vw3n3ZyJC/PM1K bkSvcksN8TvjoiXsx4l+vd639Gt/kebvz9P5mKcxDq+5ZgvMnVu+arcqSEfMx4O1Y2 j6iSkn4FYOTBtofQYDS0bqMvMZZLmuZhXl3t9SS9owkRGwdUJP3UqeSo4gWyidJtx6 MNksLlFbKwY9PCs714XVdEo80iZJRuAetN/bj97toAQ7rh2a87KOpIqcsLtBCHPN0f UJs0gPbPXdJWg== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Clark , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin , =?utf-8?q?Thomas_Hellstr?= =?utf-8?q?=C3=B6m?= , Qiang Yu , Srinivas Kandagatla , Amol Maheshwari , Jason Gunthorpe , Leon Romanovsky , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Tomi Valkeinen , Russell King , Lucas Stach , Christian Gmeiner Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, Dmitry Osipenko , linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com, virtualization@lists.linux-foundation.org, linux-rdma@vger.kernel.org, linux-arm-msm@vger.kernel.org Subject: [PATCH v4 21/21] dma-buf: Remove obsoleted internal lock Date: Wed, 31 Aug 2022 18:37:57 +0300 Message-Id: 
<20220831153757.97381-22-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20220831153757.97381-1-dmitry.osipenko@collabora.com> References: <20220831153757.97381-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org The internal dma-buf lock isn't needed anymore because the updated locking specification claims that dma-buf reservation must be locked by importers, and thus, the internal data is already protected by the reservation lock. Remove the obsoleted internal lock. Acked-by: Christian König Signed-off-by: Dmitry Osipenko Reviewed-by: Christian König --- drivers/dma-buf/dma-buf.c | 14 ++++---------- include/linux/dma-buf.h | 9 --------- 2 files changed, 4 insertions(+), 19 deletions(-) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 97ce884fad76..772fdd9eeed8 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -656,7 +656,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info) dmabuf->file = file; - mutex_init(&dmabuf->lock); INIT_LIST_HEAD(&dmabuf->attachments); mutex_lock(&db_list.lock); @@ -1502,7 +1501,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) { struct iosys_map ptr; - int ret = 0; + int ret; iosys_map_clear(map); @@ -1514,28 +1513,25 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) if (!dmabuf->ops->vmap) return -EINVAL; - mutex_lock(&dmabuf->lock); if (dmabuf->vmapping_counter) { dmabuf->vmapping_counter++; BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); *map = dmabuf->vmap_ptr; - goto out_unlock; + return 0; } BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); ret = dmabuf->ops->vmap(dmabuf, &ptr); if (WARN_ON_ONCE(ret)) - goto out_unlock; + return ret; dmabuf->vmap_ptr = ptr; dmabuf->vmapping_counter = 1; *map = dmabuf->vmap_ptr; -out_unlock: - mutex_unlock(&dmabuf->lock); - return ret; + return 0; } EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF); @@ -1577,13 +1573,11 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) BUG_ON(dmabuf->vmapping_counter == 0); BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map)); - mutex_lock(&dmabuf->lock); if (--dmabuf->vmapping_counter == 0) { if (dmabuf->ops->vunmap) dmabuf->ops->vunmap(dmabuf, map); iosys_map_clear(&dmabuf->vmap_ptr); } - mutex_unlock(&dmabuf->lock); } EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF); diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index f11b5bbc2f37..6fa8d4e29719 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -326,15 +326,6 @@ struct dma_buf { /** @ops: dma_buf_ops associated with this buffer object. */ const struct dma_buf_ops *ops; - /** - * @lock: - * - * Used internally to serialize list manipulation, attach/detach and - * vmap/unmap. Note that in many cases this is superseeded by - * dma_resv_lock() on @resv. - */ - struct mutex lock; - /** * @vmapping_counter: *