From patchwork Thu May 26 23:50:19 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 577128
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Daniel Almeida, Gert Wollny, Gustavo Padovan, Daniel Stone, Tomeu Vizoso,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
    Steven Price, Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy,
    Qiang Yu, Sumit Semwal, Christian König, "Pan, Xinhui", Thierry Reding,
    Tomasz Figa, Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher,
    Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, Dmitry Osipenko,
    linux-tegra@vger.kernel.org, linux-media@vger.kernel.org,
    linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org,
    intel-gfx@lists.freedesktop.org, kernel@collabora.com
Subject: [PATCH v6 01/22] drm/gem: Properly annotate WW context on drm_gem_lock_reservations() error
Date: Fri, 27 May 2022 02:50:19 +0300
Message-Id: <20220526235040.678984-2-dmitry.osipenko@collabora.com>
In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-1-dmitry.osipenko@collabora.com>

Use ww_acquire_fini() in the error code paths. Otherwise lockdep thinks
that the lock is still held when the lock's memory is freed after a
drm_gem_lock_reservations() error. The WW context needs to be annotated
as "freed", which fixes the noisy "WARNING: held lock freed!" splat seen
with the VirtIO-GPU driver when CONFIG_DEBUG_MUTEXES=y and lockdep are
enabled.
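For context, the ww_acquire_ctx lifecycle that this fix restores looks
roughly as follows; this is an illustrative sketch rather than the patched
code, and obj/ret stand in for the driver's real state:

    struct ww_acquire_ctx ctx;
    int ret;

    ww_acquire_init(&ctx, &reservation_ww_class);

    ret = dma_resv_lock(obj->resv, &ctx);       /* repeated for every object */
    if (ret) {
            /* No reservation is held here, but the context is still live,
             * so it must be finalized before its memory is reused. */
            ww_acquire_fini(&ctx);
            return ret;
    }

    ww_acquire_done(&ctx);                      /* all locks acquired */
    /* ... use the objects ... */
    dma_resv_unlock(obj->resv);
    ww_acquire_fini(&ctx);                      /* context no longer used */

ww_acquire_done() only marks the end of the acquire phase; it is
ww_acquire_fini() that tells lockdep the context itself is finished, which
is why the error paths must call the latter.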
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index eb0c2d041f13..86d670c71286 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1226,7 +1226,7 @@ drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
 		ret = dma_resv_lock_slow_interruptible(obj->resv,
 						       acquire_ctx);
 		if (ret) {
-			ww_acquire_done(acquire_ctx);
+			ww_acquire_fini(acquire_ctx);
 			return ret;
 		}
 	}
@@ -1251,7 +1251,7 @@ drm_gem_lock_reservations(struct drm_gem_object **objs, int count,
 				goto retry;
 			}
 
-			ww_acquire_done(acquire_ctx);
+			ww_acquire_fini(acquire_ctx);
 			return ret;
 		}
 	}

From patchwork Thu May 26 23:50:20 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 576383
From: Dmitry Osipenko
Subject: [PATCH v6 02/22] drm/gem: Move mapping of imported dma-bufs to drm_gem_mmap_obj()
Date: Fri, 27 May 2022 02:50:20 +0300
Message-Id: <20220526235040.678984-3-dmitry.osipenko@collabora.com>
In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-1-dmitry.osipenko@collabora.com>

Drivers that use the drm_gem_mmap() and drm_gem_mmap_obj() helpers don't
handle imported dma-bufs properly, which results in mapping something
other than the imported dma-buf. For example, on NVIDIA Tegra we get a
hard lockup when userspace writes to the memory mapping of a dma-buf that
was imported into Tegra's DRM GEM.

To fix this bug, move the mapping of imported dma-bufs to
drm_gem_mmap_obj(). Now mmapping of imported dma-bufs works properly for
all DRM drivers.

Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem.c              | 3 +++
 drivers/gpu/drm/drm_gem_shmem_helper.c | 9 ---------
 drivers/gpu/drm/tegra/gem.c            | 4 ++++
 3 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 86d670c71286..7c0b025508e4 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1038,6 +1038,9 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
 	if (obj_size < vma->vm_end - vma->vm_start)
 		return -EINVAL;
 
+	if (obj->import_attach)
+		return dma_buf_mmap(obj->dma_buf, vma, 0);
+
 	/* Take a ref for this mapping of the object, so that the fault
 	 * handler can dereference the mmap offset's pointer to the object.
 	 * This reference is cleaned up by the corresponding vm_close
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 8ad0e02991ca..6190f5018986 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -609,17 +609,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
  */
 int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma)
 {
-	struct drm_gem_object *obj = &shmem->base;
 	int ret;
 
-	if (obj->import_attach) {
-		/* Drop the reference drm_gem_mmap_obj() acquired.*/
-		drm_gem_object_put(obj);
-		vma->vm_private_data = NULL;
-
-		return dma_buf_mmap(obj->dma_buf, vma, 0);
-	}
-
 	ret = drm_gem_shmem_get_pages(shmem);
 	if (ret) {
 		drm_gem_vm_close(vma);
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index 7c7dd84e6db8..f92aa20d63bb 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -564,6 +564,10 @@ int __tegra_gem_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma)
 {
 	struct tegra_bo *bo = to_tegra_bo(gem);
 
+	/* imported dma-buf is mapped by drm_gem_mmap_obj() */
+	if (gem->import_attach)
+		return 0;
+
 	if (!bo->pages) {
 		unsigned long vm_pgoff = vma->vm_pgoff;
 		int err;

From patchwork Thu May 26 23:50:21 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 576382
From: Dmitry Osipenko
Subject: [PATCH v6 03/22] drm/panfrost: Put mapping instead of shmem obj on panfrost_mmu_map_fault_addr() error
Date: Fri, 27 May 2022 02:50:21 +0300
Message-Id: <20220526235040.678984-4-dmitry.osipenko@collabora.com>
In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-1-dmitry.osipenko@collabora.com>

When panfrost_mmu_map_fault_addr() fails, the BO's mapping should be
unreferenced, not the shmem object that backs the mapping.
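The rule the fix restores is that an error path drops exactly the
reference the function took; schematically (an illustrative sketch using
the mapping helpers, where do_map() is a stand-in for the fault-path
work, not a real function):

    struct panfrost_gem_mapping *mapping;
    int ret;

    mapping = panfrost_gem_mapping_get(bo, priv);   /* takes a mapping reference */
    if (!mapping)
            return -ENOENT;

    ret = do_map(mapping);
    if (ret) {
            panfrost_gem_mapping_put(mapping);      /* drop the mapping ref... */
            return ret;                             /* ...not the GEM object ref */
    }

Dropping the GEM object reference instead, as the old code did, unbalances
the BO's refcount while leaking the mapping's.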
Cc: stable@vger.kernel.org
Fixes: bdefca2d8dc0 ("drm/panfrost: Add the panfrost_gem_mapping concept")
Reviewed-by: Steven Price
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index d3f82b26a631..b285a8001b1d 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -518,7 +518,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 err_pages:
 	drm_gem_shmem_put_pages(&bo->base);
 err_bo:
-	drm_gem_object_put(&bo->base.base);
+	panfrost_gem_mapping_put(bomapping);
 	return ret;
 }

From patchwork Thu May 26 23:50:22 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 577127
From: Dmitry Osipenko
Subject: [PATCH v6 04/22] drm/panfrost: Fix shrinker list corruption by madvise IOCTL
Date: Fri, 27 May 2022 02:50:22 +0300
Message-Id: <20220526235040.678984-5-dmitry.osipenko@collabora.com>
In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-1-dmitry.osipenko@collabora.com>

Calling the madvise IOCTL twice on a BO corrupts the memory shrinker list
and crashes the kernel: the BO is added to the shrinker list again while
it is already on the list, whereas it should be removed from the list
before being re-added. Fix it by moving the BO instead of re-adding it.

Cc: stable@vger.kernel.org
Fixes: 013b65101315 ("drm/panfrost: Add madvise and shrinker support")
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/panfrost/panfrost_drv.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 087e69b98d06..b1e6d238674f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -433,8 +433,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
 
 	if (args->retained) {
 		if (args->madv == PANFROST_MADV_DONTNEED)
-			list_add_tail(&bo->base.madv_list,
-				      &pfdev->shrinker_list);
+			list_move_tail(&bo->base.madv_list,
+				       &pfdev->shrinker_list);
 		else if (args->madv == PANFROST_MADV_WILLNEED)
 			list_del_init(&bo->base.madv_list);
 	}

From patchwork Thu May 26 23:50:23 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 577126
From: Dmitry Osipenko
Subject: [PATCH v6 05/22] drm/virtio: Correct drm_gem_shmem_get_sg_table() error handling
Date: Fri, 27 May 2022 02:50:23 +0300
Message-Id: <20220526235040.678984-6-dmitry.osipenko@collabora.com>
In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-1-dmitry.osipenko@collabora.com>

drm_gem_shmem_get_sg_table() never returns NULL on error; it returns an
ERR_PTR(). Correct the error handling to avoid a crash on OOM.

Cc: stable@vger.kernel.org
Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index f293e6ad52da..3d0c8d4d1c20 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -168,9 +168,11 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
 	 * since virtio_gpu doesn't support dma-buf import from other devices.
 	 */
 	shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);
-	if (!shmem->pages) {
+	ret = PTR_ERR_OR_ZERO(shmem->pages);
+	if (ret) {
 		drm_gem_shmem_unpin(&bo->base);
-		return -EINVAL;
+		shmem->pages = NULL;
+		return ret;
 	}
 
 	if (use_dma_api) {

From patchwork Thu May 26 23:50:24 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 577125
From: Dmitry Osipenko
Subject: [PATCH v6 06/22] drm/virtio: Check whether transferred 2D BO is shmem
Date: Fri, 27 May 2022 02:50:24 +0300
Message-Id: <20220526235040.678984-7-dmitry.osipenko@collabora.com>
In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-1-dmitry.osipenko@collabora.com>

A transferred 2D BO must always be a shmem BO. Add a check for that to
prevent a NULL dereference if userspace passes a VRAM BO.

Cc: stable@vger.kernel.org
Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_vq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 7c052efe8836..2edf31806b74 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -595,7 +595,7 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
 	bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
 	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
 
-	if (use_dma_api)
+	if (virtio_gpu_is_shmem(bo) && use_dma_api)
 		dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
 					    shmem->pages, DMA_TO_DEVICE);
 

From patchwork Thu May 26 23:50:25 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 576381
From: Dmitry Osipenko
Subject: [PATCH v6 07/22] drm/virtio: Unlock reservations on virtio_gpu_object_shmem_init() error
Date: Fri, 27 May 2022 02:50:25 +0300
Message-Id: <20220526235040.678984-8-dmitry.osipenko@collabora.com>
In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-1-dmitry.osipenko@collabora.com>

Unlock reservations in the error code path of virtio_gpu_object_create()
to silence the debug warning splat produced by ww_mutex_destroy(&obj->lock)
when the GEM object is released with the lock held.

Cc: stable@vger.kernel.org
Reviewed-by: Emil Velikov
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 3d0c8d4d1c20..21c19cdedce0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -250,6 +250,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 
 	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
 	if (ret != 0) {
+		if (fence)
+			virtio_gpu_array_unlock_resv(objs);
 		virtio_gpu_array_put_free(objs);
 		virtio_gpu_free_object(&shmem_obj->base);
 		return ret;

From patchwork Thu May 26 23:50:26 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 576380
From: Dmitry Osipenko
Subject: [PATCH v6 08/22] drm/virtio: Unlock reservations on dma_resv_reserve_fences() error
Date: Fri, 27 May 2022 02:50:26 +0300
Message-Id: <20220526235040.678984-9-dmitry.osipenko@collabora.com>
In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-1-dmitry.osipenko@collabora.com>

Unlock reservations on dma_resv_reserve_fences() error to fix recursive
locking of the reservations when this error happens.
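In other words, virtio_gpu_array_lock_resv() must leave the array either
fully locked on success or fully unlocked on failure, so its callers can
keep the usual pattern. A rough sketch of the caller-visible contract
(illustrative only, not driver code):

    ret = virtio_gpu_array_lock_resv(objs);  /* locks every reservation, or none */
    if (ret)
            return ret;                      /* nothing to unlock on this path */

    /* ... queue the command and attach fences ... */

    virtio_gpu_array_unlock_resv(objs);

Without the fix, a dma_resv_reserve_fences() failure returned with some
reservations still locked, and the next attempt to lock those objects ran
into the recursive-locking problem the commit message describes.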
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_gem.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 580a78809836..7db48d17ee3a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -228,8 +228,10 @@ int virtio_gpu_array_lock_resv(struct virtio_gpu_object_array *objs)
 
 	for (i = 0; i < objs->nents; ++i) {
 		ret = dma_resv_reserve_fences(objs->objs[i]->resv, 1);
-		if (ret)
+		if (ret) {
+			virtio_gpu_array_unlock_resv(objs);
 			return ret;
+		}
 	}
 	return ret;
 }

From patchwork Thu May 26 23:50:27 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 577124
From: Dmitry Osipenko
Subject: [PATCH v6 09/22] drm/virtio: Use appropriate atomic state in virtio_gpu_plane_cleanup_fb()
Date: Fri, 27 May 2022 02:50:27 +0300
Message-Id: <20220526235040.678984-10-dmitry.osipenko@collabora.com>
In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-1-dmitry.osipenko@collabora.com>

Make virtio_gpu_plane_cleanup_fb() clean the state that the DRM core asks
it to clean up, not the current plane's state. Normally the older atomic
state is cleaned up, but the newer state can also be cleaned up in the
case of an aborted commit.

Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_plane.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index 6d3cc9e238a4..7148f3813d8b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -266,14 +266,14 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
 }
 
 static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
-					struct drm_plane_state *old_state)
+					struct drm_plane_state *state)
 {
 	struct virtio_gpu_framebuffer *vgfb;
 
-	if (!plane->state->fb)
+	if (!state->fb)
 		return;
 
-	vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
+	vgfb = to_virtio_gpu_framebuffer(state->fb);
 	if (vgfb->fence) {
 		dma_fence_put(&vgfb->fence->f);
 		vgfb->fence = NULL;

From patchwork Thu May 26 23:50:28 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 577123
From: Dmitry Osipenko
Subject: [PATCH v6 10/22] drm/shmem-helper: Add missing vunmap on error
Date: Fri, 27 May 2022 02:50:28 +0300
Message-Id: <20220526235040.678984-11-dmitry.osipenko@collabora.com>
In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-1-dmitry.osipenko@collabora.com>

The vmapping of a dma-buf may succeed while DRM SHMEM rejects iomem
mappings, so drm_gem_shmem_vmap_locked() should vunmap the iomem before
erroring out.

Cc: stable@vger.kernel.org
Fixes: 49a3f51dfeee ("drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends")
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 6190f5018986..54b0ba28aa0a 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -302,6 +302,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
 		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
 		if (!ret) {
 			if (WARN_ON(map->is_iomem)) {
+				dma_buf_vunmap(obj->import_attach->dmabuf, map);
 				ret = -EIO;
 				goto err_put_pages;
 			}

From patchwork Thu May 26 23:50:29 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 576379
From: Dmitry Osipenko
Subject: [PATCH v6 11/22] drm/shmem-helper: Correct doc-comment of drm_gem_shmem_get_sg_table()
Date: Fri, 27 May 2022 02:50:29 +0300
Message-Id: <20220526235040.678984-12-dmitry.osipenko@collabora.com>
In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-1-dmitry.osipenko@collabora.com>

drm_gem_shmem_get_sg_table() never returns NULL on error, but an
ERR_PTR(). Correct the doc comment, which says that it returns NULL on
error.

Acked-by: Thomas Zimmermann
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 54b0ba28aa0a..7232e321fdb4 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -654,7 +654,8 @@ EXPORT_SYMBOL(drm_gem_shmem_print_info);
  * drm_gem_shmem_get_pages_sgt() instead.
  *
  * Returns:
- * A pointer to the scatter/gather table of pinned pages or NULL on failure.
+ * A pointer to the scatter/gather table of pinned pages or an ERR_PTR()-encoded
+ * error code on failure.
  */
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
 {
@@ -680,7 +681,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
  * drm_gem_shmem_get_sg_table() should not be directly called by drivers.
  *
  * Returns:
- * A pointer to the scatter/gather table of pinned pages or errno on failure.
+ * A pointer to the scatter/gather table of pinned pages or an ERR_PTR()-encoded
+ * error code on failure.
  */
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 {

From patchwork Thu May 26 23:50:30 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 576378
From: Dmitry Osipenko
Subject: [PATCH v6 12/22] drm/virtio: Simplify error handling of virtio_gpu_object_create()
Date: Fri, 27 May 2022 02:50:30 +0300
Message-Id: <20220526235040.678984-13-dmitry.osipenko@collabora.com>
In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-1-dmitry.osipenko@collabora.com>

Change the order of the SHMEM initialization and the reservation locking
to make the code cleaner and to prepare for transitioning the common GEM
SHMEM code to using the GEM's reservation lock instead of shmem.page_lock.

There is no need to hold the reservation lock while allocating the SHMEM
pages because the lock is only needed to avoid racing with the
asynchronous host-side allocation. Hence we can safely move the SHMEM
initialization out of the reservation lock.
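The resulting control flow of virtio_gpu_object_create() can be summarized
as below; this is a simplified sketch of the ordering, not the literal
driver code:

    /* 1. Allocate and map the guest pages; no reservation lock is needed. */
    ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
    if (ret != 0)
            goto err_put_id;

    /* 2. Only when a fence is used, take the reservation lock so the guest
     *    doesn't race with the asynchronous host-side allocation. */
    if (fence) {
            ret = -ENOMEM;
            objs = virtio_gpu_array_alloc(1);
            if (!objs)
                    goto err_put_id;
            virtio_gpu_array_add_obj(objs, &bo->base.base);

            ret = virtio_gpu_array_lock_resv(objs);
            if (ret != 0)
                    goto err_put_objs;
    }

With this ordering, a shmem-init failure happens before any reservation is
locked, which is what makes the error handling in the diff below simpler.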
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 21c19cdedce0..18f70ef6b4d0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -236,6 +236,10 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 
 	bo->dumb = params->dumb;
 
+	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+	if (ret != 0)
+		goto err_put_id;
+
 	if (fence) {
 		ret = -ENOMEM;
 		objs = virtio_gpu_array_alloc(1);
@@ -248,15 +252,6 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 		goto err_put_objs;
 	}
 
-	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
-	if (ret != 0) {
-		if (fence)
-			virtio_gpu_array_unlock_resv(objs);
-		virtio_gpu_array_put_free(objs);
-		virtio_gpu_free_object(&shmem_obj->base);
-		return ret;
-	}
-
 	if (params->blob) {
 		if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
 			bo->guest_blob = true;

From patchwork Thu May 26 23:50:31 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 577122
From: Dmitry Osipenko
Subject: [PATCH v6 13/22] drm/virtio: Improve DMA API usage for shmem BOs
Date: Fri, 27 May 2022 02:50:31 +0300
Message-Id: <20220526235040.678984-14-dmitry.osipenko@collabora.com>
In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-1-dmitry.osipenko@collabora.com>

The DRM API requires the DRM device to be backed by a device that can be
used for generic DMA operations. The VirtIO-GPU device can't perform DMA
operations when it uses the PCI transport because the PCI device driver
creates a virtual VirtIO-GPU device that isn't associated with the PCI
device. Use the PCI GPU device as the DRM device instead of the
VirtIO-GPU device and drop the DMA-related hacks from the VirtIO-GPU
driver.

Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_drv.c    | 51 ++++++----------
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  5 +--
 drivers/gpu/drm/virtio/virtgpu_kms.c    |  7 ++--
 drivers/gpu/drm/virtio/virtgpu_object.c | 56 +++++--------------
 drivers/gpu/drm/virtio/virtgpu_vq.c     | 13 +++---
 5 files changed, 32 insertions(+), 100 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index 5f25a8d15464..0141b7df97ec 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -46,12 +46,11 @@ static int virtio_gpu_modeset = -1;
 MODULE_PARM_DESC(modeset, "Disable/Enable modesetting");
 module_param_named(modeset, virtio_gpu_modeset, int, 0400);
 
-static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vdev)
+static int virtio_gpu_pci_quirk(struct drm_device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(vdev->dev.parent);
+	struct pci_dev *pdev = to_pci_dev(dev->dev);
 	const char *pname = dev_name(&pdev->dev);
 	bool vga = (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA;
-	char unique[20];
 	int ret;
 
 	DRM_INFO("pci: %s detected at %s\n",
@@ -63,39 +62,7 @@ static int virtio_gpu_pci_quirk(struct drm_device *dev, struct virtio_device *vdev)
 		return ret;
 	}
 
-	/*
-	 * Normally the drm_dev_set_unique() call is done by core DRM.
-	 * The following comment covers, why virtio cannot rely on it.
-	 *
-	 * Unlike the other virtual GPU drivers, virtio abstracts the
-	 * underlying bus type by using struct virtio_device.
-	 *
-	 * Hence the dev_is_pci() check, used in core DRM, will fail
-	 * and the unique returned will be the virtio_device "virtio0",
-	 * while a "pci:..." one is required.
-	 *
-	 * A few other ideas were considered:
-	 * - Extend the dev_is_pci() check [in drm_set_busid] to
-	 *   consider virtio.
-	 *   Seems like a bigger hack than what we have already.
-	 *
-	 * - Point drm_device::dev to the parent of the virtio_device
-	 *   Semantic changes:
-	 *   * Using the wrong device for i2c, framebuffer_alloc and
-	 *     prime import.
-	 *   Visual changes:
-	 *   * Helpers such as DRM_DEV_ERROR, dev_info, drm_printer,
-	 *     will print the wrong information.
-	 *
-	 * We could address the latter issues, by introducing
-	 * drm_device::bus_dev, ... which would be used solely for this.
-	 *
-	 * So for the moment keep things as-is, with a bulky comment
-	 * for the next person who feels like removing this
-	 * drm_dev_set_unique() quirk.
-	 */
-	snprintf(unique, sizeof(unique), "pci:%s", pname);
-	return drm_dev_set_unique(dev, unique);
+	return 0;
 }
 
 static int virtio_gpu_probe(struct virtio_device *vdev)
@@ -109,18 +76,24 @@ static int virtio_gpu_probe(struct virtio_device *vdev)
 	if (virtio_gpu_modeset == 0)
 		return -EINVAL;
 
-	dev = drm_dev_alloc(&driver, &vdev->dev);
+	/*
+	 * The virtio-gpu device is a virtual device that doesn't have DMA
+	 * ops assigned to it, nor a DMA mask set, etc. Its parent device
+	 * is the actual GPU device we want to use for the DRM device in
+	 * order to benefit from using generic DRM APIs.
+	 */
+	dev = drm_dev_alloc(&driver, vdev->dev.parent);
 	if (IS_ERR(dev))
 		return PTR_ERR(dev);
 	vdev->priv = dev;
 
 	if (!strcmp(vdev->dev.parent->bus->name, "pci")) {
-		ret = virtio_gpu_pci_quirk(dev, vdev);
+		ret = virtio_gpu_pci_quirk(dev);
 		if (ret)
 			goto err_free;
 	}
 
-	ret = virtio_gpu_init(dev);
+	ret = virtio_gpu_init(vdev, dev);
 	if (ret)
 		goto err_free;
 
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 0a194aaad419..b2d93cb12ebf 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -100,8 +100,6 @@ struct virtio_gpu_object {
 
 struct virtio_gpu_object_shmem {
 	struct virtio_gpu_object base;
-	struct sg_table *pages;
-	uint32_t mapped;
 };
 
 struct virtio_gpu_object_vram {
@@ -214,7 +212,6 @@ struct virtio_gpu_drv_cap_cache {
 };
 
 struct virtio_gpu_device {
-	struct device *dev;
 	struct drm_device *ddev;
 
 	struct virtio_device *vdev;
@@ -282,7 +279,7 @@ extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
 void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
 
 /* virtgpu_kms.c */
-int virtio_gpu_init(struct drm_device *dev);
+int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev);
 void virtio_gpu_deinit(struct drm_device *dev);
 void virtio_gpu_release(struct drm_device *dev);
 int virtio_gpu_driver_open(struct drm_device *dev, struct drm_file *file);
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 3313b92db531..0d1e3eb61bee 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -110,7 +110,7 @@ static void virtio_gpu_get_capsets(struct virtio_gpu_device *vgdev,
 	vgdev->num_capsets = num_capsets;
 }
 
-int virtio_gpu_init(struct drm_device *dev)
+int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
 {
 	static vq_callback_t *callbacks[] = {
 		virtio_gpu_ctrl_ack, virtio_gpu_cursor_ack
@@ -123,7 +123,7 @@ int virtio_gpu_init(struct drm_device *dev)
 	u32 num_scanouts, num_capsets;
 	int ret = 0;
 
-	if (!virtio_has_feature(dev_to_virtio(dev->dev), VIRTIO_F_VERSION_1))
+	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
 		return -ENODEV;
 
 	vgdev = kzalloc(sizeof(struct virtio_gpu_device), GFP_KERNEL);
@@ -132,8 +132,7 @@ int virtio_gpu_init(struct drm_device *dev)
 
 	vgdev->ddev = dev;
 	dev->dev_private = vgdev;
-	vgdev->vdev = dev_to_virtio(dev->dev);
-	vgdev->dev = dev->dev;
+	vgdev->vdev = vdev;
 
 	spin_lock_init(&vgdev->display_info_lock);
 	spin_lock_init(&vgdev->resource_export_lock);
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 18f70ef6b4d0..8d7728181de0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -67,21 +67,6 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
 	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
 
 	if (virtio_gpu_is_shmem(bo)) {
-		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo); - - if (shmem->pages) { - if (shmem->mapped) { - dma_unmap_sgtable(vgdev->vdev->dev.parent, - shmem->pages, DMA_TO_DEVICE, 0); - shmem->mapped = 0; - } - - sg_free_table(shmem->pages); - kfree(shmem->pages); - shmem->pages = NULL; - drm_gem_shmem_unpin(&bo->base); - } - drm_gem_shmem_free(&bo->base); } else if (virtio_gpu_is_vram(bo)) { struct virtio_gpu_object_vram *vram = to_virtio_gpu_vram(bo); @@ -153,37 +138,18 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, unsigned int *nents) { bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev); - struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo); struct scatterlist *sg; - int si, ret; + struct sg_table *pages; + int si; - ret = drm_gem_shmem_pin(&bo->base); - if (ret < 0) - return -EINVAL; - - /* - * virtio_gpu uses drm_gem_shmem_get_sg_table instead of - * drm_gem_shmem_get_pages_sgt because virtio has it's own set of - * dma-ops. This is discouraged for other drivers, but should be fine - * since virtio_gpu doesn't support dma-buf import from other devices. - */ - shmem->pages = drm_gem_shmem_get_sg_table(&bo->base); - ret = PTR_ERR_OR_ZERO(shmem->pages); - if (ret) { - drm_gem_shmem_unpin(&bo->base); - shmem->pages = NULL; - return ret; - } + pages = drm_gem_shmem_get_pages_sgt(&bo->base); + if (IS_ERR(pages)) + return PTR_ERR(pages); - if (use_dma_api) { - ret = dma_map_sgtable(vgdev->vdev->dev.parent, - shmem->pages, DMA_TO_DEVICE, 0); - if (ret) - return ret; - *nents = shmem->mapped = shmem->pages->nents; - } else { - *nents = shmem->pages->orig_nents; - } + if (use_dma_api) + *nents = pages->nents; + else + *nents = pages->orig_nents; *ents = kvmalloc_array(*nents, sizeof(struct virtio_gpu_mem_entry), @@ -194,13 +160,13 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, } if (use_dma_api) { - for_each_sgtable_dma_sg(shmem->pages, sg, si) { + for_each_sgtable_dma_sg(pages, sg, si) { (*ents)[si].addr = cpu_to_le64(sg_dma_address(sg)); (*ents)[si].length = cpu_to_le32(sg_dma_len(sg)); (*ents)[si].padding = 0; } } else { - for_each_sgtable_sg(shmem->pages, sg, si) { + for_each_sgtable_sg(pages, sg, si) { (*ents)[si].addr = cpu_to_le64(sg_phys(sg)); (*ents)[si].length = cpu_to_le32(sg->length); (*ents)[si].padding = 0; diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c index 2edf31806b74..06566e44307d 100644 --- a/drivers/gpu/drm/virtio/virtgpu_vq.c +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c @@ -593,11 +593,10 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev, struct virtio_gpu_transfer_to_host_2d *cmd_p; struct virtio_gpu_vbuffer *vbuf; bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev); - struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo); if (virtio_gpu_is_shmem(bo) && use_dma_api) - dma_sync_sgtable_for_device(vgdev->vdev->dev.parent, - shmem->pages, DMA_TO_DEVICE); + dma_sync_sgtable_for_device(&vgdev->vdev->dev, + bo->base.sgt, DMA_TO_DEVICE); cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); memset(cmd_p, 0, sizeof(*cmd_p)); @@ -1017,11 +1016,9 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev, struct virtio_gpu_vbuffer *vbuf; bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev); - if (virtio_gpu_is_shmem(bo) && use_dma_api) { - struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo); - dma_sync_sgtable_for_device(vgdev->vdev->dev.parent, - shmem->pages, DMA_TO_DEVICE); - } + if 
(virtio_gpu_is_shmem(bo) && use_dma_api) + dma_sync_sgtable_for_device(&vgdev->vdev->dev, + bo->base.sgt, DMA_TO_DEVICE); cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); memset(cmd_p, 0, sizeof(*cmd_p)); From patchwork Thu May 26 23:50:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 576377 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 69F4BC433EF for ; Thu, 26 May 2022 23:55:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1349741AbiEZXzw (ORCPT ); Thu, 26 May 2022 19:55:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53014 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349701AbiEZXzO (ORCPT ); Thu, 26 May 2022 19:55:14 -0400 Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [46.235.227.227]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 377FBEC3D9; Thu, 26 May 2022 16:54:59 -0700 (PDT) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: dmitry.osipenko) with ESMTPSA id 08A4D1F459EB DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1653609297; bh=k+zX6K8xFxaRrOGlVykBVDod65TUEg1/z2+ALSEhaKY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=GyvPBpUAZOnzXPMSk3aNXyxskIx/FdwsolvsR4+9pFghcHEv/NoJ05Oprp40hhkyh BHXlpfnU2GjCfUXHzvS0HKGFOr0qenF7wIApFgQCiIpqD1Dy8u2WJKA4ABF6p+ZPlp 00cVIqFbut6/N9rtSKQWrmpxMJ9XTSMTT6916Up8wY0bHx0AK+B1jxygiELV22Awyq mFNMA7ZdDAPHXpb1fUai6Mj048woUwNDr2LjhFzabMCDjbFeZD8iCY/znzfu+Uz73N N+f5reJcoATyG0+M2N02StQpX0Hh+84RstAZHaohNirqqt9/lBUwrl6QZj3NVRPwBQ QAA/S45v8COdA== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Herring , Steven Price , Alyssa Rosenzweig , Rob Clark , Emil Velikov , Robin Murphy , Qiang Yu , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, Dmitry Osipenko , Dmitry Osipenko , linux-tegra@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com Subject: [PATCH v6 14/22] dma-buf: Introduce new locking convention Date: Fri, 27 May 2022 02:50:32 +0300 Message-Id: <20220526235040.678984-15-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com> References: <20220526235040.678984-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org All dma-bufs have dma-reservation lock that allows drivers to perform exclusive operations over shared dma-bufs. Today's dma-buf API has incomplete locking specification, which creates dead lock situation for dma-buf importers and exporters that don't coordinate theirs locks. This patch introduces new locking convention for dma-buf users. 
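For illustration, a minimal importer-side sketch of the convention detailed below (an assumption for illustration only, not code taken from this patch; the helper name example_importer_map and the attach variable are hypothetical): the importer takes the dma-buf reservation lock itself and uses the new *_locked() variants, while the existing dma_buf_map_attachment()/dma_buf_unmap_attachment() calls now acquire the lock internally.

/* Hypothetical importer-side usage under the new locking convention. */
static struct sg_table *
example_importer_map(struct dma_buf_attachment *attach)
{
	struct sg_table *sgt;

	dma_resv_lock(attach->dmabuf->resv, NULL);
	sgt = dma_buf_map_attachment_locked(attach, DMA_BIDIRECTIONAL);
	dma_resv_unlock(attach->dmabuf->resv);

	/* later unmapped with dma_buf_unmap_attachment_locked() under the same lock */
	return sgt;
}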
From now on all dma-buf importers are responsible for holding dma-buf reservation lock around operations performed over dma-bufs. This patch implements the new dma-buf locking convention by: 1. Making dma-buf API functions to take the reservation lock. 2. Adding new locked variants of the dma-buf API functions for drivers that need to manage imported dma-bufs under the held lock. 3. Converting all drivers to the new locking scheme. Signed-off-by: Dmitry Osipenko Reported-by: kernel test robot --- drivers/dma-buf/dma-buf.c | 270 +++++++++++------- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 6 +- drivers/gpu/drm/drm_client.c | 4 +- drivers/gpu/drm/drm_gem.c | 33 +++ drivers/gpu/drm/drm_gem_framebuffer_helper.c | 6 +- drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 10 +- drivers/gpu/drm/qxl/qxl_object.c | 17 +- drivers/gpu/drm/qxl/qxl_prime.c | 4 +- .../common/videobuf2/videobuf2-dma-contig.c | 11 +- .../media/common/videobuf2/videobuf2-dma-sg.c | 11 +- .../common/videobuf2/videobuf2-vmalloc.c | 11 +- include/drm/drm_gem.h | 3 + include/linux/dma-buf.h | 14 +- 13 files changed, 241 insertions(+), 159 deletions(-) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 32f55640890c..64a9909ccfa2 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -552,7 +552,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info) file->f_mode |= FMODE_LSEEK; dmabuf->file = file; - mutex_init(&dmabuf->lock); INIT_LIST_HEAD(&dmabuf->attachments); mutex_lock(&db_list.lock); @@ -737,14 +736,14 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, attach->importer_ops = importer_ops; attach->importer_priv = importer_priv; + dma_resv_lock(dmabuf->resv, NULL); + if (dmabuf->ops->attach) { ret = dmabuf->ops->attach(dmabuf, attach); if (ret) goto err_attach; } - dma_resv_lock(dmabuf->resv, NULL); list_add(&attach->node, &dmabuf->attachments); - dma_resv_unlock(dmabuf->resv); /* When either the importer or the exporter can't handle dynamic * mappings we cache the mapping here to avoid issues with the @@ -755,7 +754,6 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, struct sg_table *sgt; if (dma_buf_is_dynamic(attach->dmabuf)) { - dma_resv_lock(attach->dmabuf->resv, NULL); ret = dmabuf->ops->pin(attach); if (ret) goto err_unlock; @@ -768,15 +766,16 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, ret = PTR_ERR(sgt); goto err_unpin; } - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_unlock(attach->dmabuf->resv); attach->sgt = sgt; attach->dir = DMA_BIDIRECTIONAL; } + dma_resv_unlock(dmabuf->resv); + return attach; err_attach: + dma_resv_unlock(attach->dmabuf->resv); kfree(attach); return ERR_PTR(ret); @@ -785,10 +784,10 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, dmabuf->ops->unpin(attach); err_unlock: - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_unlock(attach->dmabuf->resv); + dma_resv_unlock(dmabuf->resv); dma_buf_detach(dmabuf, attach); + return ERR_PTR(ret); } EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach, DMA_BUF); @@ -832,24 +831,23 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach) if (WARN_ON(!dmabuf || !attach)) return; - if (attach->sgt) { - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_lock(attach->dmabuf->resv, NULL); + if (WARN_ON(dmabuf != attach->dmabuf)) + return; + dma_resv_lock(dmabuf->resv, NULL); + + if (attach->sgt) { __unmap_dma_buf(attach, attach->sgt, attach->dir); - if (dma_buf_is_dynamic(attach->dmabuf)) { + if 
(dma_buf_is_dynamic(attach->dmabuf)) dmabuf->ops->unpin(attach); - dma_resv_unlock(attach->dmabuf->resv); - } } - dma_resv_lock(dmabuf->resv, NULL); list_del(&attach->node); - dma_resv_unlock(dmabuf->resv); if (dmabuf->ops->detach) dmabuf->ops->detach(dmabuf, attach); + dma_resv_unlock(dmabuf->resv); kfree(attach); } EXPORT_SYMBOL_NS_GPL(dma_buf_detach, DMA_BUF); @@ -906,28 +904,18 @@ void dma_buf_unpin(struct dma_buf_attachment *attach) EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, DMA_BUF); /** - * dma_buf_map_attachment - Returns the scatterlist table of the attachment; + * dma_buf_map_attachment_locked - Returns the scatterlist table of the attachment; * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the * dma_buf_ops. * @attach: [in] attachment whose scatterlist is to be returned * @direction: [in] direction of DMA transfer * - * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR - * on error. May return -EINTR if it is interrupted by a signal. - * - * On success, the DMA addresses and lengths in the returned scatterlist are - * PAGE_SIZE aligned. - * - * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that - * the underlying backing storage is pinned for as long as a mapping exists, - * therefore users/importers should not hold onto a mapping for undue amounts of - * time. + * Locked variant of dma_buf_map_attachment(). * - * Important: Dynamic importers must wait for the exclusive fence of the struct - * dma_resv attached to the DMA-BUF first. + * Caller is responsible for holding dmabuf's reservation lock. */ -struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, - enum dma_data_direction direction) +struct sg_table *dma_buf_map_attachment_locked(struct dma_buf_attachment *attach, + enum dma_data_direction direction) { struct sg_table *sg_table; int r; @@ -937,8 +925,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, if (WARN_ON(!attach || !attach->dmabuf)) return ERR_PTR(-EINVAL); - if (dma_buf_attachment_is_dynamic(attach)) - dma_resv_assert_held(attach->dmabuf->resv); + dma_resv_assert_held(attach->dmabuf->resv); if (attach->sgt) { /* @@ -953,7 +940,6 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, } if (dma_buf_is_dynamic(attach->dmabuf)) { - dma_resv_assert_held(attach->dmabuf->resv); if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) { r = attach->dmabuf->ops->pin(attach); if (r) @@ -993,42 +979,101 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, #endif /* CONFIG_DMA_API_DEBUG */ return sg_table; } -EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF); +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_locked, DMA_BUF); /** - * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might - * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of + * dma_buf_map_attachment - Returns the scatterlist table of the attachment; + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the * dma_buf_ops. - * @attach: [in] attachment to unmap buffer from - * @sg_table: [in] scatterlist info of the buffer to unmap - * @direction: [in] direction of DMA transfer + * @attach: [in] attachment whose scatterlist is to be returned + * @direction: [in] direction of DMA transfer * - * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment(). + * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR + * on error. 
May return -EINTR if it is interrupted by a signal. + * + * On success, the DMA addresses and lengths in the returned scatterlist are + * PAGE_SIZE aligned. + * + * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that + * the underlying backing storage is pinned for as long as a mapping exists, + * therefore users/importers should not hold onto a mapping for undue amounts of + * time. + * + * Important: Dynamic importers must wait for the exclusive fence of the struct + * dma_resv attached to the DMA-BUF first. */ -void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, - struct sg_table *sg_table, +struct sg_table * +dma_buf_map_attachment(struct dma_buf_attachment *attach, enum dma_data_direction direction) { + struct sg_table *sg_table; + might_sleep(); - if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) - return; + if (WARN_ON(!attach || !attach->dmabuf)) + return ERR_PTR(-EINVAL); + + dma_resv_lock(attach->dmabuf->resv, NULL); + sg_table = dma_buf_map_attachment_locked(attach, direction); + dma_resv_unlock(attach->dmabuf->resv); - if (dma_buf_attachment_is_dynamic(attach)) - dma_resv_assert_held(attach->dmabuf->resv); + return sg_table; +} +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF); + +/** + * dma_buf_unmap_attachment_locked - Returns the scatterlist table of the attachment; + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the + * dma_buf_ops. + * @attach: [in] attachment whose scatterlist is to be returned + * @direction: [in] direction of DMA transfer + * + * Locked variant of dma_buf_unmap_attachment(). + * + * Caller is responsible for holding dmabuf's reservation lock. + */ +void dma_buf_unmap_attachment_locked(struct dma_buf_attachment *attach, + struct sg_table *sg_table, + enum dma_data_direction direction) +{ + might_sleep(); + + dma_resv_assert_held(attach->dmabuf->resv); if (attach->sgt == sg_table) return; - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_assert_held(attach->dmabuf->resv); - __unmap_dma_buf(attach, sg_table, direction); if (dma_buf_is_dynamic(attach->dmabuf) && !IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) dma_buf_unpin(attach); } +EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_locked, DMA_BUF); + +/** + * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might + * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of + * dma_buf_ops. + * @attach: [in] attachment to unmap buffer from + * @sg_table: [in] scatterlist info of the buffer to unmap + * @direction: [in] direction of DMA transfer + * + * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment(). 
+ */ +void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, + struct sg_table *sg_table, + enum dma_data_direction direction) +{ + might_sleep(); + + if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) + return; + + dma_resv_lock(attach->dmabuf->resv, NULL); + dma_buf_unmap_attachment_locked(attach, sg_table, direction); + dma_resv_unlock(attach->dmabuf->resv); +} EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF); /** @@ -1224,6 +1269,31 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf, } EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF); +static int dma_buf_mmap_locked(struct dma_buf *dmabuf, + struct vm_area_struct *vma, + unsigned long pgoff) +{ + dma_resv_assert_held(dmabuf->resv); + + /* check if buffer supports mmap */ + if (!dmabuf->ops->mmap) + return -EINVAL; + + /* check for offset overflow */ + if (pgoff + vma_pages(vma) < pgoff) + return -EOVERFLOW; + + /* check for overflowing the buffer's size */ + if (pgoff + vma_pages(vma) > + dmabuf->size >> PAGE_SHIFT) + return -EINVAL; + + /* readjust the vma */ + vma_set_file(vma, dmabuf->file); + vma->vm_pgoff = pgoff; + + return dmabuf->ops->mmap(dmabuf, vma); +} /** * dma_buf_mmap - Setup up a userspace mmap with the given vma @@ -1242,29 +1312,46 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF); int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma, unsigned long pgoff) { + int ret; + if (WARN_ON(!dmabuf || !vma)) return -EINVAL; - /* check if buffer supports mmap */ - if (!dmabuf->ops->mmap) - return -EINVAL; + dma_resv_lock(dmabuf->resv, NULL); + ret = dma_buf_mmap_locked(dmabuf, vma, pgoff); + dma_resv_unlock(dmabuf->resv); - /* check for offset overflow */ - if (pgoff + vma_pages(vma) < pgoff) - return -EOVERFLOW; + return ret; +} +EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); - /* check for overflowing the buffer's size */ - if (pgoff + vma_pages(vma) > - dmabuf->size >> PAGE_SHIFT) - return -EINVAL; +static int dma_buf_vmap_locked(struct dma_buf *dmabuf, struct iosys_map *map) +{ + struct iosys_map ptr; + int ret; - /* readjust the vma */ - vma_set_file(vma, dmabuf->file); - vma->vm_pgoff = pgoff; + dma_resv_assert_held(dmabuf->resv); - return dmabuf->ops->mmap(dmabuf, vma); + if (dmabuf->vmapping_counter) { + dmabuf->vmapping_counter++; + BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); + *map = dmabuf->vmap_ptr; + return ret; + } + + BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); + + ret = dmabuf->ops->vmap(dmabuf, &ptr); + if (WARN_ON_ONCE(ret)) + return ret; + + dmabuf->vmap_ptr = ptr; + dmabuf->vmapping_counter = 1; + + *map = dmabuf->vmap_ptr; + + return 0; } -EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); /** * dma_buf_vmap - Create virtual mapping for the buffer object into kernel @@ -1284,8 +1371,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); */ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) { - struct iosys_map ptr; - int ret = 0; + int ret; iosys_map_clear(map); @@ -1295,52 +1381,40 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) if (!dmabuf->ops->vmap) return -EINVAL; - mutex_lock(&dmabuf->lock); - if (dmabuf->vmapping_counter) { - dmabuf->vmapping_counter++; - BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); - *map = dmabuf->vmap_ptr; - goto out_unlock; - } - - BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); - - ret = dmabuf->ops->vmap(dmabuf, &ptr); - if (WARN_ON_ONCE(ret)) - goto out_unlock; - - dmabuf->vmap_ptr = ptr; - dmabuf->vmapping_counter = 1; - - *map = dmabuf->vmap_ptr; + dma_resv_lock(dmabuf->resv, NULL); + ret = 
dma_buf_vmap_locked(dmabuf, map); + dma_resv_unlock(dmabuf->resv); -out_unlock: - mutex_unlock(&dmabuf->lock); return ret; } EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF); -/** - * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap. - * @dmabuf: [in] buffer to vunmap - * @map: [in] vmap pointer to vunmap - */ -void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) +static void dma_buf_vunmap_locked(struct dma_buf *dmabuf, struct iosys_map *map) { - if (WARN_ON(!dmabuf)) - return; - BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); BUG_ON(dmabuf->vmapping_counter == 0); BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map)); - mutex_lock(&dmabuf->lock); if (--dmabuf->vmapping_counter == 0) { if (dmabuf->ops->vunmap) dmabuf->ops->vunmap(dmabuf, map); iosys_map_clear(&dmabuf->vmap_ptr); } - mutex_unlock(&dmabuf->lock); +} + +/** + * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap. + * @dmabuf: [in] buffer to vunmap + * @map: [in] vmap pointer to vunmap + */ +void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) +{ + if (WARN_ON(!dmabuf)) + return; + + dma_resv_lock(dmabuf->resv, NULL); + dma_buf_vunmap_locked(dmabuf, map); + dma_resv_unlock(dmabuf->resv); } EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index be6f76a30ac6..b704bdf5601a 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -882,7 +882,8 @@ static int amdgpu_ttm_backend_bind(struct ttm_device *bdev, struct sg_table *sgt; attach = gtt->gobj->import_attach; - sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); + sgt = dma_buf_map_attachment_locked(attach, + DMA_BIDIRECTIONAL); if (IS_ERR(sgt)) return PTR_ERR(sgt); @@ -1007,7 +1008,8 @@ static void amdgpu_ttm_backend_unbind(struct ttm_device *bdev, struct dma_buf_attachment *attach; attach = gtt->gobj->import_attach; - dma_buf_unmap_attachment(attach, ttm->sg, DMA_BIDIRECTIONAL); + dma_buf_unmap_attachment_locked(attach, ttm->sg, + DMA_BIDIRECTIONAL); ttm->sg = NULL; } diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c index af3b7395bf69..e9a1cd310352 100644 --- a/drivers/gpu/drm/drm_client.c +++ b/drivers/gpu/drm/drm_client.c @@ -323,7 +323,7 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer, * fd_install step out of the driver backend hooks, to make that * final step optional for internal users. 
*/ - ret = drm_gem_vmap(buffer->gem, map); + ret = drm_gem_vmap_unlocked(buffer->gem, map); if (ret) return ret; @@ -345,7 +345,7 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) { struct iosys_map *map = &buffer->map; - drm_gem_vunmap(buffer->gem, map); + drm_gem_vunmap_unlocked(buffer->gem, map); } EXPORT_SYMBOL(drm_client_buffer_vunmap); diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index 7c0b025508e4..c61674887582 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -1053,7 +1053,12 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size, vma->vm_ops = obj->funcs->vm_ops; if (obj->funcs->mmap) { + ret = dma_resv_lock_interruptible(obj->resv, NULL); + if (ret) + goto err_drm_gem_object_put; + ret = obj->funcs->mmap(obj, vma); + dma_resv_unlock(obj->resv); if (ret) goto err_drm_gem_object_put; WARN_ON(!(vma->vm_flags & VM_DONTEXPAND)); @@ -1158,6 +1163,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent, int drm_gem_pin(struct drm_gem_object *obj) { + dma_resv_assert_held(obj->resv); + if (obj->funcs->pin) return obj->funcs->pin(obj); else @@ -1166,6 +1173,8 @@ int drm_gem_pin(struct drm_gem_object *obj) void drm_gem_unpin(struct drm_gem_object *obj) { + dma_resv_assert_held(obj->resv); + if (obj->funcs->unpin) obj->funcs->unpin(obj); } @@ -1174,6 +1183,8 @@ int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map) { int ret; + dma_resv_assert_held(obj->resv); + if (!obj->funcs->vmap) return -EOPNOTSUPP; @@ -1189,6 +1200,8 @@ EXPORT_SYMBOL(drm_gem_vmap); void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) { + dma_resv_assert_held(obj->resv); + if (iosys_map_is_null(map)) return; @@ -1200,6 +1213,26 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) } EXPORT_SYMBOL(drm_gem_vunmap); +int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) +{ + int ret; + + dma_resv_lock(obj->resv, NULL); + ret = drm_gem_vmap(obj, map); + dma_resv_unlock(obj->resv); + + return ret; +} +EXPORT_SYMBOL(drm_gem_vmap_unlocked); + +void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) +{ + dma_resv_lock(obj->resv, NULL); + drm_gem_vunmap(obj, map); + dma_resv_unlock(obj->resv); +} +EXPORT_SYMBOL(drm_gem_vunmap_unlocked); + /** * drm_gem_lock_reservations - Sets up the ww context and acquires * the lock on an array of GEM objects. 
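For drivers not touched by this diff, a minimal usage sketch of the wrappers added above (an illustrative assumption, not part of the patch; obj, map and ret are placeholder names): code paths that do not already hold the object's reservation lock switch from drm_gem_vmap()/drm_gem_vunmap() to the _unlocked() variants, which take obj->resv internally.

	struct iosys_map map;
	int ret;

	/* obj is a struct drm_gem_object the caller already holds a reference to */
	ret = drm_gem_vmap_unlocked(obj, &map);
	if (ret)
		return ret;
	/* ... access map.vaddr ... */
	drm_gem_vunmap_unlocked(obj, &map);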
diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c index f4619803acd0..a0bff53b158e 100644 --- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c +++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c @@ -348,7 +348,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, iosys_map_clear(&map[i]); continue; } - ret = drm_gem_vmap(obj, &map[i]); + ret = drm_gem_vmap_unlocked(obj, &map[i]); if (ret) goto err_drm_gem_vunmap; } @@ -370,7 +370,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, obj = drm_gem_fb_get_obj(fb, i); if (!obj) continue; - drm_gem_vunmap(obj, &map[i]); + drm_gem_vunmap_unlocked(obj, &map[i]); } return ret; } @@ -398,7 +398,7 @@ void drm_gem_fb_vunmap(struct drm_framebuffer *fb, continue; if (iosys_map_is_null(&map[i])) continue; - drm_gem_vunmap(obj, &map[i]); + drm_gem_vunmap_unlocked(obj, &map[i]); } } EXPORT_SYMBOL(drm_gem_fb_vunmap); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c index f5062d0c6333..09502d490da8 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c @@ -72,7 +72,7 @@ static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf); void *vaddr; - vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB); + vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB); if (IS_ERR(vaddr)) return PTR_ERR(vaddr); @@ -241,8 +241,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) assert_object_held(obj); - pages = dma_buf_map_attachment(obj->base.import_attach, - DMA_BIDIRECTIONAL); + pages = dma_buf_map_attachment_locked(obj->base.import_attach, + DMA_BIDIRECTIONAL); if (IS_ERR(pages)) return PTR_ERR(pages); @@ -270,8 +270,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) static void i915_gem_object_put_pages_dmabuf(struct drm_i915_gem_object *obj, struct sg_table *pages) { - dma_buf_unmap_attachment(obj->base.import_attach, pages, - DMA_BIDIRECTIONAL); + dma_buf_unmap_attachment_locked(obj->base.import_attach, pages, + DMA_BIDIRECTIONAL); } static const struct drm_i915_gem_object_ops i915_gem_object_dmabuf_ops = { diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c index b42a657e4c2f..a64cd635fbc0 100644 --- a/drivers/gpu/drm/qxl/qxl_object.c +++ b/drivers/gpu/drm/qxl/qxl_object.c @@ -168,9 +168,16 @@ int qxl_bo_vmap_locked(struct qxl_bo *bo, struct iosys_map *map) bo->map_count++; goto out; } - r = ttm_bo_vmap(&bo->tbo, &bo->map); + + r = __qxl_bo_pin(bo); if (r) return r; + + r = ttm_bo_vmap(&bo->tbo, &bo->map); + if (r) { + __qxl_bo_unpin(bo); + return r; + } bo->map_count = 1; /* TODO: Remove kptr in favor of map everywhere. 
*/ @@ -192,12 +199,6 @@ int qxl_bo_vmap(struct qxl_bo *bo, struct iosys_map *map) if (r) return r; - r = __qxl_bo_pin(bo); - if (r) { - qxl_bo_unreserve(bo); - return r; - } - r = qxl_bo_vmap_locked(bo, map); qxl_bo_unreserve(bo); return r; @@ -247,6 +248,7 @@ void qxl_bo_vunmap_locked(struct qxl_bo *bo) return; bo->kptr = NULL; ttm_bo_vunmap(&bo->tbo, &bo->map); + __qxl_bo_unpin(bo); } int qxl_bo_vunmap(struct qxl_bo *bo) @@ -258,7 +260,6 @@ int qxl_bo_vunmap(struct qxl_bo *bo) return r; qxl_bo_vunmap_locked(bo); - __qxl_bo_unpin(bo); qxl_bo_unreserve(bo); return 0; } diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c index 142d01415acb..9169c26357d3 100644 --- a/drivers/gpu/drm/qxl/qxl_prime.c +++ b/drivers/gpu/drm/qxl/qxl_prime.c @@ -59,7 +59,7 @@ int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map) struct qxl_bo *bo = gem_to_qxl_bo(obj); int ret; - ret = qxl_bo_vmap(bo, map); + ret = qxl_bo_vmap_locked(bo, map); if (ret < 0) return ret; @@ -71,5 +71,5 @@ void qxl_gem_prime_vunmap(struct drm_gem_object *obj, { struct qxl_bo *bo = gem_to_qxl_bo(obj); - qxl_bo_vunmap(bo); + qxl_bo_vunmap_locked(bo); } diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c index 678b359717c4..617062076370 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c @@ -382,18 +382,12 @@ static struct sg_table *vb2_dc_dmabuf_ops_map( struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) { struct vb2_dc_attachment *attach = db_attach->priv; - /* stealing dmabuf mutex to serialize map/unmap operations */ - struct mutex *lock = &db_attach->dmabuf->lock; struct sg_table *sgt; - mutex_lock(lock); - sgt = &attach->sgt; /* return previously mapped sg table */ - if (attach->dma_dir == dma_dir) { - mutex_unlock(lock); + if (attach->dma_dir == dma_dir) return sgt; - } /* release any previous cache */ if (attach->dma_dir != DMA_NONE) { @@ -409,14 +403,11 @@ static struct sg_table *vb2_dc_dmabuf_ops_map( if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, DMA_ATTR_SKIP_CPU_SYNC)) { pr_err("failed to map scatterlist\n"); - mutex_unlock(lock); return ERR_PTR(-EIO); } attach->dma_dir = dma_dir; - mutex_unlock(lock); - return sgt; } diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c index fa69158a65b1..d2075e7078cd 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c @@ -424,18 +424,12 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map( struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) { struct vb2_dma_sg_attachment *attach = db_attach->priv; - /* stealing dmabuf mutex to serialize map/unmap operations */ - struct mutex *lock = &db_attach->dmabuf->lock; struct sg_table *sgt; - mutex_lock(lock); - sgt = &attach->sgt; /* return previously mapped sg table */ - if (attach->dma_dir == dma_dir) { - mutex_unlock(lock); + if (attach->dma_dir == dma_dir) return sgt; - } /* release any previous cache */ if (attach->dma_dir != DMA_NONE) { @@ -446,14 +440,11 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map( /* mapping to the client with new direction */ if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) { pr_err("failed to map scatterlist\n"); - mutex_unlock(lock); return ERR_PTR(-EIO); } attach->dma_dir = dma_dir; - mutex_unlock(lock); - return sgt; } diff --git 
a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c index 948152f1596b..3d00a7f0aac1 100644 --- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c +++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c @@ -267,18 +267,12 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map( struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) { struct vb2_vmalloc_attachment *attach = db_attach->priv; - /* stealing dmabuf mutex to serialize map/unmap operations */ - struct mutex *lock = &db_attach->dmabuf->lock; struct sg_table *sgt; - mutex_lock(lock); - sgt = &attach->sgt; /* return previously mapped sg table */ - if (attach->dma_dir == dma_dir) { - mutex_unlock(lock); + if (attach->dma_dir == dma_dir) return sgt; - } /* release any previous cache */ if (attach->dma_dir != DMA_NONE) { @@ -289,14 +283,11 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map( /* mapping to the client with new direction */ if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) { pr_err("failed to map scatterlist\n"); - mutex_unlock(lock); return ERR_PTR(-EIO); } attach->dma_dir = dma_dir; - mutex_unlock(lock); - return sgt; } diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index 9d7c61a122dc..0b427939f466 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -410,4 +410,7 @@ void drm_gem_unlock_reservations(struct drm_gem_object **objs, int count, int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, u32 handle, u64 *offset); +int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map); +void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map); + #endif /* __DRM_GEM_H__ */ diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 71731796c8c3..23698c6b1d1e 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -326,15 +326,6 @@ struct dma_buf { /** @ops: dma_buf_ops associated with this buffer object. */ const struct dma_buf_ops *ops; - /** - * @lock: - * - * Used internally to serialize list manipulation, attach/detach and - * vmap/unmap. Note that in many cases this is superseeded by - * dma_resv_lock() on @resv. 
- */ - struct mutex lock; - /** * @vmapping_counter: * @@ -618,6 +609,11 @@ int dma_buf_fd(struct dma_buf *dmabuf, int flags); struct dma_buf *dma_buf_get(int fd); void dma_buf_put(struct dma_buf *dmabuf); +struct sg_table *dma_buf_map_attachment_locked(struct dma_buf_attachment *, + enum dma_data_direction); +void dma_buf_unmap_attachment_locked(struct dma_buf_attachment *, + struct sg_table *, + enum dma_data_direction); struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *, enum dma_data_direction); void dma_buf_unmap_attachment(struct dma_buf_attachment *, struct sg_table *, From patchwork Thu May 26 23:50:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 576376 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B9B24C433FE for ; Thu, 26 May 2022 23:56:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235821AbiEZX4H (ORCPT ); Thu, 26 May 2022 19:56:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53514 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349746AbiEZXzZ (ORCPT ); Thu, 26 May 2022 19:55:25 -0400 Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [46.235.227.227]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 642B1EBA8B; Thu, 26 May 2022 16:55:02 -0700 (PDT) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: dmitry.osipenko) with ESMTPSA id 2267F1F459F1 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1653609301; bh=lVCPHIzviQq20mwfYzjAOHeqfWQImXXPIbBe9D1xXlE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=bgJiezMp0flB1v25Q2Qf+DKO9Z1KSTZhHbhRu4emPSpVxY8ydKzqqljeN8OvT3Zfm GrP0nApOiROsSpJFfBcj1DyH73EH0cx4M+sXeMFmY2X1O+tgBqJmIgvNg2zNZPTrq5 7aaOMTMy06r67/fYRlAAGg8Uj8LgyTB2oLLIK5m5SqQsQ0F3bzhxaFYXKeuxUx/9O3 aGf+6kjjtfDGFXJe6P3P5Dj7ARG9p3Esis4XUMKvQ14ji6tydjFaZnTly7l9zc5aMF G3ZJsS7pa52FSEEgAAWuj0+B5tDvwLfEiUhZbVoRVzd0E14r52LRd+xs0iz4bi5BwN beCPhQzt0HN2Q== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Herring , Steven Price , Alyssa Rosenzweig , Rob Clark , Emil Velikov , Robin Murphy , Qiang Yu , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, Dmitry Osipenko , Dmitry Osipenko , linux-tegra@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com Subject: [PATCH v6 15/22] drm/shmem-helper: Don't use vmap_use_count for dma-bufs Date: Fri, 27 May 2022 02:50:33 +0300 Message-Id: <20220526235040.678984-16-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com> References: <20220526235040.678984-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: 
X-Mailing-List: linux-media@vger.kernel.org There is no need to refcount vmappings of dma-bufs because dma-buf core has its own refcounting. Drop the refcounting of dma-bufs. This will ease replacing of all drm-shmem locks with a single dma-buf reservation lock, preparing drm-shmem code for addition of the generic drm-shmem shrinker. Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 35 +++++++++++++++----------- 1 file changed, 20 insertions(+), 15 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 7232e321fdb4..fd2647690bf7 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -293,24 +293,22 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct drm_gem_object *obj = &shmem->base; int ret = 0; - if (shmem->vmap_use_count++ > 0) { - iosys_map_set_vaddr(map, shmem->vaddr); - return 0; - } - if (obj->import_attach) { ret = dma_buf_vmap(obj->import_attach->dmabuf, map); if (!ret) { if (WARN_ON(map->is_iomem)) { dma_buf_vunmap(obj->import_attach->dmabuf, map); - ret = -EIO; - goto err_put_pages; + return -EIO; } - shmem->vaddr = map->vaddr; } } else { pgprot_t prot = PAGE_KERNEL; + if (shmem->vmap_use_count++ > 0) { + iosys_map_set_vaddr(map, shmem->vaddr); + return 0; + } + ret = drm_gem_shmem_get_pages(shmem); if (ret) goto err_zero_use; @@ -376,15 +374,15 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, { struct drm_gem_object *obj = &shmem->base; - if (WARN_ON_ONCE(!shmem->vmap_use_count)) - return; - - if (--shmem->vmap_use_count > 0) - return; - if (obj->import_attach) { dma_buf_vunmap(obj->import_attach->dmabuf, map); } else { + if (WARN_ON_ONCE(!shmem->vmap_use_count)) + return; + + if (--shmem->vmap_use_count > 0) + return; + vunmap(shmem->vaddr); drm_gem_shmem_put_pages(shmem); } @@ -637,7 +635,14 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem, struct drm_printer *p, unsigned int indent) { drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count); - drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count); + + if (shmem->base.import_attach) + drm_printf_indent(p, indent, "vmap_use_count=%u\n", + shmem->base.dma_buf->vmapping_counter); + else + drm_printf_indent(p, indent, "vmap_use_count=%u\n", + shmem->vmap_use_count); + drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr); } EXPORT_SYMBOL(drm_gem_shmem_print_info); From patchwork Thu May 26 23:50:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 577121 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 479CBC433F5 for ; Thu, 26 May 2022 23:55:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1349608AbiEZXz4 (ORCPT ); Thu, 26 May 2022 19:55:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53604 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349770AbiEZXzr (ORCPT ); Thu, 26 May 2022 19:55:47 -0400 Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [46.235.227.227]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0FADAEBEB9; Thu, 26 May 2022 16:55:05 -0700 (PDT) Received: from 
[127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: dmitry.osipenko) with ESMTPSA id 2BFD21F459F5 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1653609304; bh=Fl9KmvGY8TmlbvY665rfeUfUuzBKIpn1Lyj/AK/rFBU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=l7jFr9d3ZAz8pBAN068yg7n1vm6vMYK5oXSPnSEoGS1y++IXajSY21o61ibOqccgT 2DJpiq/Yxizc3o+QtOhUIkFcnt+2gzVo8zcNtzqxhjGgHwKKVElWwZD3wZGFiQ/Fb+ yDPoc1PS6Q+CF5EFq4Q9vxBQfPa9zD3QiQjNwB1BRRvieBvlZyFApgGOgaWLu1gktG i2U0CdybclAc51DN7INhHegH3AWbgBeo8C4qskZHvbJsFtPh4OSwFXaqw4w5bEmIs8 rWa9vn2M1/S0Vg4DfkVUaEaW+xvUMcEDlKKlFNzuRZyoEt6iBYXj6x2fRP5tEiSNwa wX77K5w3Kf9UA== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Herring , Steven Price , Alyssa Rosenzweig , Rob Clark , Emil Velikov , Robin Murphy , Qiang Yu , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, Dmitry Osipenko , Dmitry Osipenko , linux-tegra@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com Subject: [PATCH v6 16/22] drm/shmem-helper: Use reservation lock Date: Fri, 27 May 2022 02:50:34 +0300 Message-Id: <20220526235040.678984-17-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com> References: <20220526235040.678984-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Replace drm_gem_shmem locks with GEM reservation lock to make drm-shmem locks consistent with the new locking convention of dma-bufs which tells that dma-buf importers are responsible for holding reservation lock for all operations performed over dma-bufs. This prepares drm-shmem code for addition of the generic shmem shrinker framework. 
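As a brief illustration of the resulting calling convention (a sketch based on this series with error handling omitted; shmem and madv stand in for the driver's object and argument): a driver now takes the GEM object's reservation lock itself before calling the shmem helpers that previously took pages_lock internally, for example around madvise and purge:

	dma_resv_lock(shmem->base.resv, NULL);
	if (drm_gem_shmem_madvise(shmem, madv) &&
	    drm_gem_shmem_is_purgeable(shmem))
		drm_gem_shmem_purge(shmem);
	dma_resv_unlock(shmem->base.resv);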
Suggested-by: Daniel Vetter Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 181 +++++++----------- drivers/gpu/drm/lima/lima_gem.c | 8 +- drivers/gpu/drm/lima/lima_sched.c | 4 +- drivers/gpu/drm/panfrost/panfrost_drv.c | 7 +- .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 6 +- drivers/gpu/drm/panfrost/panfrost_mmu.c | 19 +- drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 6 +- include/drm/drm_gem_shmem_helper.h | 14 +- 8 files changed, 97 insertions(+), 148 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index fd2647690bf7..555fe212bd98 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -86,8 +86,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private) if (ret) goto err_release; - mutex_init(&shmem->pages_lock); - mutex_init(&shmem->vmap_lock); INIT_LIST_HEAD(&shmem->madv_list); if (!private) { @@ -139,11 +137,13 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; - WARN_ON(shmem->vmap_use_count); - if (obj->import_attach) { drm_prime_gem_destroy(obj, shmem->sgt); } else { + dma_resv_lock(shmem->base.resv, NULL); + + WARN_ON(shmem->vmap_use_count); + if (shmem->sgt) { dma_unmap_sgtable(obj->dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0); @@ -152,18 +152,18 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) } if (shmem->pages) drm_gem_shmem_put_pages(shmem); - } - WARN_ON(shmem->pages_use_count); + WARN_ON(shmem->pages_use_count); + + dma_resv_unlock(shmem->base.resv); + } drm_gem_object_release(obj); - mutex_destroy(&shmem->pages_lock); - mutex_destroy(&shmem->vmap_lock); kfree(shmem); } EXPORT_SYMBOL_GPL(drm_gem_shmem_free); -static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) +static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; struct page **pages; @@ -194,35 +194,17 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem) } /* - * drm_gem_shmem_get_pages - Allocate backing pages for a shmem GEM object + * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object * @shmem: shmem GEM object * - * This function makes sure that backing pages exists for the shmem GEM object - * and increases the use count. - * - * Returns: - * 0 on success or a negative error code on failure. + * This function decreases the use count and puts the backing pages when use drops to zero. 
*/ -int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) -{ - int ret; - - WARN_ON(shmem->base.import_attach); - - ret = mutex_lock_interruptible(&shmem->pages_lock); - if (ret) - return ret; - ret = drm_gem_shmem_get_pages_locked(shmem); - mutex_unlock(&shmem->pages_lock); - - return ret; -} -EXPORT_SYMBOL(drm_gem_shmem_get_pages); - -static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) +void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; + dma_resv_assert_held(shmem->base.resv); + if (WARN_ON_ONCE(!shmem->pages_use_count)) return; @@ -239,19 +221,6 @@ static void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem) shmem->pages_mark_accessed_on_put); shmem->pages = NULL; } - -/* - * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object - * @shmem: shmem GEM object - * - * This function decreases the use count and puts the backing pages when use drops to zero. - */ -void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) -{ - mutex_lock(&shmem->pages_lock); - drm_gem_shmem_put_pages_locked(shmem); - mutex_unlock(&shmem->pages_lock); -} EXPORT_SYMBOL(drm_gem_shmem_put_pages); /** @@ -266,6 +235,8 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages); */ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem) { + dma_resv_assert_held(shmem->base.resv); + WARN_ON(shmem->base.import_attach); return drm_gem_shmem_get_pages(shmem); @@ -281,14 +252,31 @@ EXPORT_SYMBOL(drm_gem_shmem_pin); */ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem) { + dma_resv_assert_held(shmem->base.resv); + WARN_ON(shmem->base.import_attach); drm_gem_shmem_put_pages(shmem); } EXPORT_SYMBOL(drm_gem_shmem_unpin); -static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, - struct iosys_map *map) +/* + * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object + * @shmem: shmem GEM object + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing + * store. + * + * This function makes sure that a contiguous kernel virtual address mapping + * exists for the buffer backing the shmem GEM object. It hides the differences + * between dma-buf imported and natively allocated objects. + * + * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap(). + * + * Returns: + * 0 on success or a negative error code on failure. + */ +int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, + struct iosys_map *map) { struct drm_gem_object *obj = &shmem->base; int ret = 0; @@ -304,6 +292,8 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, } else { pgprot_t prot = PAGE_KERNEL; + dma_resv_assert_held(shmem->base.resv); + if (shmem->vmap_use_count++ > 0) { iosys_map_set_vaddr(map, shmem->vaddr); return 0; @@ -338,45 +328,30 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, return ret; } +EXPORT_SYMBOL(drm_gem_shmem_vmap); /* - * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object + * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object * @shmem: shmem GEM object - * @map: Returns the kernel virtual address of the SHMEM GEM object's backing - * store. - * - * This function makes sure that a contiguous kernel virtual address mapping - * exists for the buffer backing the shmem GEM object. It hides the differences - * between dma-buf imported and natively allocated objects. 
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped * - * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap(). + * This function cleans up a kernel virtual address mapping acquired by + * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to + * zero. * - * Returns: - * 0 on success or a negative error code on failure. + * This function hides the differences between dma-buf imported and natively + * allocated objects. */ -int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, - struct iosys_map *map) -{ - int ret; - - ret = mutex_lock_interruptible(&shmem->vmap_lock); - if (ret) - return ret; - ret = drm_gem_shmem_vmap_locked(shmem, map); - mutex_unlock(&shmem->vmap_lock); - - return ret; -} -EXPORT_SYMBOL(drm_gem_shmem_vmap); - -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, - struct iosys_map *map) +void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, + struct iosys_map *map) { struct drm_gem_object *obj = &shmem->base; if (obj->import_attach) { dma_buf_vunmap(obj->import_attach->dmabuf, map); } else { + dma_resv_assert_held(shmem->base.resv); + if (WARN_ON_ONCE(!shmem->vmap_use_count)) return; @@ -389,26 +364,6 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, shmem->vaddr = NULL; } - -/* - * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object - * @shmem: shmem GEM object - * @map: Kernel virtual address where the SHMEM GEM object was mapped - * - * This function cleans up a kernel virtual address mapping acquired by - * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to - * zero. - * - * This function hides the differences between dma-buf imported and natively - * allocated objects. 
- */ -void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, - struct iosys_map *map) -{ - mutex_lock(&shmem->vmap_lock); - drm_gem_shmem_vunmap_locked(shmem, map); - mutex_unlock(&shmem->vmap_lock); -} EXPORT_SYMBOL(drm_gem_shmem_vunmap); static struct drm_gem_shmem_object * @@ -441,24 +396,24 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv, */ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv) { - mutex_lock(&shmem->pages_lock); + dma_resv_assert_held(shmem->base.resv); if (shmem->madv >= 0) shmem->madv = madv; madv = shmem->madv; - mutex_unlock(&shmem->pages_lock); - return (madv >= 0); } EXPORT_SYMBOL(drm_gem_shmem_madvise); -void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) +void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; struct drm_device *dev = obj->dev; + dma_resv_assert_held(shmem->base.resv); + WARN_ON(!drm_gem_shmem_is_purgeable(shmem)); dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0); @@ -466,7 +421,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) kfree(shmem->sgt); shmem->sgt = NULL; - drm_gem_shmem_put_pages_locked(shmem); + drm_gem_shmem_put_pages(shmem); shmem->madv = -1; @@ -482,17 +437,6 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem) invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1); } -EXPORT_SYMBOL(drm_gem_shmem_purge_locked); - -bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem) -{ - if (!mutex_trylock(&shmem->pages_lock)) - return false; - drm_gem_shmem_purge_locked(shmem); - mutex_unlock(&shmem->pages_lock); - - return true; -} EXPORT_SYMBOL(drm_gem_shmem_purge); /** @@ -548,7 +492,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf) /* We don't use vmf->pgoff since that has the fake offset */ page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT; - mutex_lock(&shmem->pages_lock); + dma_resv_lock(shmem->base.resv, NULL); if (page_offset >= num_pages || WARN_ON_ONCE(!shmem->pages) || @@ -560,7 +504,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf) ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page)); } - mutex_unlock(&shmem->pages_lock); + dma_resv_unlock(shmem->base.resv); return ret; } @@ -573,8 +517,10 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma) WARN_ON(shmem->base.import_attach); + dma_resv_lock(shmem->base.resv, NULL); ret = drm_gem_shmem_get_pages(shmem); WARN_ON_ONCE(ret != 0); + dma_resv_unlock(shmem->base.resv); drm_gem_vm_open(vma); } @@ -584,7 +530,10 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma) struct drm_gem_object *obj = vma->vm_private_data; struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); + dma_resv_lock(shmem->base.resv, NULL); drm_gem_shmem_put_pages(shmem); + dma_resv_unlock(shmem->base.resv); + drm_gem_vm_close(vma); } @@ -700,9 +649,11 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem) WARN_ON(obj->import_attach); + dma_resv_lock(shmem->base.resv, NULL); + ret = drm_gem_shmem_get_pages(shmem); if (ret) - return ERR_PTR(ret); + goto err_unlock; sgt = drm_gem_shmem_get_sg_table(shmem); if (IS_ERR(sgt)) { @@ -716,6 +667,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem) shmem->sgt = sgt; + dma_resv_unlock(shmem->base.resv); + return sgt; err_free_sgt: @@ -723,6 +676,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem) kfree(sgt); err_put_pages: 
drm_gem_shmem_put_pages(shmem); +err_unlock: + dma_resv_unlock(shmem->base.resv); return ERR_PTR(ret); } EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt); diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index 0f1ca0b0db49..5008f0c2428f 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -34,7 +34,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm) new_size = min(new_size, bo->base.base.size); - mutex_lock(&bo->base.pages_lock); + dma_resv_lock(bo->base.base.resv, NULL); if (bo->base.pages) { pages = bo->base.pages; @@ -42,7 +42,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm) pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT, sizeof(*pages), GFP_KERNEL | __GFP_ZERO); if (!pages) { - mutex_unlock(&bo->base.pages_lock); + dma_resv_unlock(bo->base.base.resv); return -ENOMEM; } @@ -56,13 +56,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm) struct page *page = shmem_read_mapping_page(mapping, i); if (IS_ERR(page)) { - mutex_unlock(&bo->base.pages_lock); + dma_resv_unlock(bo->base.base.resv); return PTR_ERR(page); } pages[i] = page; } - mutex_unlock(&bo->base.pages_lock); + dma_resv_unlock(bo->base.base.resv); ret = sg_alloc_table_from_pages(&sgt, pages, i, 0, new_size, GFP_KERNEL); diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c index e82931712d8a..ff003403fbbc 100644 --- a/drivers/gpu/drm/lima/lima_sched.c +++ b/drivers/gpu/drm/lima/lima_sched.c @@ -371,7 +371,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task) } else { buffer_chunk->size = lima_bo_size(bo); - ret = drm_gem_shmem_vmap(&bo->base, &map); + ret = drm_gem_vmap_unlocked(&bo->base.base, &map); if (ret) { kvfree(et); goto out; @@ -379,7 +379,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task) memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size); - drm_gem_shmem_vunmap(&bo->base, &map); + drm_gem_vunmap_unlocked(&bo->base.base, &map); } buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size; diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c index b1e6d238674f..859e240161d1 100644 --- a/drivers/gpu/drm/panfrost/panfrost_drv.c +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c @@ -405,6 +405,10 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data, bo = to_panfrost_bo(gem_obj); + ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL); + if (ret) + goto out_put_object; + mutex_lock(&pfdev->shrinker_lock); mutex_lock(&bo->mappings.lock); if (args->madv == PANFROST_MADV_DONTNEED) { @@ -442,7 +446,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data, out_unlock_mappings: mutex_unlock(&bo->mappings.lock); mutex_unlock(&pfdev->shrinker_lock); - + dma_resv_unlock(bo->base.base.resv); +out_put_object: drm_gem_object_put(gem_obj); return ret; } diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c index 77e7cb6d1ae3..a4bedfeb2ec4 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c @@ -48,14 +48,14 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj) if (!mutex_trylock(&bo->mappings.lock)) return false; - if (!mutex_trylock(&shmem->pages_lock)) + if (!dma_resv_trylock(shmem->base.resv)) goto unlock_mappings; panfrost_gem_teardown_mappings_locked(bo); - drm_gem_shmem_purge_locked(&bo->base); + 
drm_gem_shmem_purge(&bo->base); ret = true; - mutex_unlock(&shmem->pages_lock); + dma_resv_unlock(shmem->base.resv); unlock_mappings: mutex_unlock(&bo->mappings.lock); diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c index b285a8001b1d..e164017e84cd 100644 --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c @@ -424,6 +424,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, struct panfrost_gem_mapping *bomapping; struct panfrost_gem_object *bo; struct address_space *mapping; + struct drm_gem_object *obj; pgoff_t page_offset; struct sg_table *sgt; struct page **pages; @@ -446,15 +447,16 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, page_offset = addr >> PAGE_SHIFT; page_offset -= bomapping->mmnode.start; - mutex_lock(&bo->base.pages_lock); + obj = &bo->base.base; + + dma_resv_lock(obj->resv, NULL); if (!bo->base.pages) { bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M, sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO); if (!bo->sgts) { - mutex_unlock(&bo->base.pages_lock); ret = -ENOMEM; - goto err_bo; + goto err_unlock; } pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT, @@ -462,9 +464,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, if (!pages) { kvfree(bo->sgts); bo->sgts = NULL; - mutex_unlock(&bo->base.pages_lock); ret = -ENOMEM; - goto err_bo; + goto err_unlock; } bo->base.pages = pages; bo->base.pages_use_count = 1; @@ -472,7 +473,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, pages = bo->base.pages; if (pages[page_offset]) { /* Pages are already mapped, bail out. */ - mutex_unlock(&bo->base.pages_lock); goto out; } } @@ -483,14 +483,11 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) { pages[i] = shmem_read_mapping_page(mapping, i); if (IS_ERR(pages[i])) { - mutex_unlock(&bo->base.pages_lock); ret = PTR_ERR(pages[i]); goto err_pages; } } - mutex_unlock(&bo->base.pages_lock); - sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)]; ret = sg_alloc_table_from_pages(sgt, pages + page_offset, NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL); @@ -509,6 +506,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr); out: + dma_resv_unlock(obj->resv); + panfrost_gem_mapping_put(bomapping); return 0; @@ -517,6 +516,8 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, sg_free_table(sgt); err_pages: drm_gem_shmem_put_pages(&bo->base); +err_unlock: + dma_resv_unlock(obj->resv); err_bo: panfrost_gem_mapping_put(bomapping); return ret; diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c index bc0df93f7f21..ba9b6e2b2636 100644 --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c @@ -106,7 +106,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, goto err_close_bo; } - ret = drm_gem_shmem_vmap(bo, &map); + ret = drm_gem_vmap_unlocked(&bo->base, &map); if (ret) goto err_put_mapping; perfcnt->buf = map.vaddr; @@ -165,7 +165,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, return 0; err_vunmap: - drm_gem_shmem_vunmap(bo, &map); + drm_gem_vunmap_unlocked(&bo->base, &map); err_put_mapping: 
panfrost_gem_mapping_put(perfcnt->mapping); err_close_bo: @@ -195,7 +195,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF)); perfcnt->user = NULL; - drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base, &map); + drm_gem_vunmap_unlocked(&perfcnt->mapping->obj->base.base, &map); perfcnt->buf = NULL; panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv); panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu); diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index d0a57853c188..9a8983ee8abe 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -26,11 +26,6 @@ struct drm_gem_shmem_object { */ struct drm_gem_object base; - /** - * @pages_lock: Protects the page table and use count - */ - struct mutex pages_lock; - /** * @pages: Page table */ @@ -79,11 +74,6 @@ struct drm_gem_shmem_object { */ struct sg_table *sgt; - /** - * @vmap_lock: Protects the vmap address and use count - */ - struct mutex vmap_lock; - /** * @vaddr: Kernel virtual address of the backing memory */ @@ -109,7 +99,6 @@ struct drm_gem_shmem_object { struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size); void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem); -int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem); int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem); @@ -128,8 +117,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem !shmem->base.dma_buf && !shmem->base.import_attach; } -void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem); -bool drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem); +void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem); From patchwork Thu May 26 23:50:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 576375 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C1837C433FE for ; Thu, 26 May 2022 23:56:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240374AbiEZX4V (ORCPT ); Thu, 26 May 2022 19:56:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54050 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349799AbiEZXzu (ORCPT ); Thu, 26 May 2022 19:55:50 -0400 Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [46.235.227.227]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CEF5470374; Thu, 26 May 2022 16:55:09 -0700 (PDT) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: dmitry.osipenko) with ESMTPSA id 38E651F459E1 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1653609307; bh=nxrW6K9Jb1aWwVxO+GL/w6JkkXJ2Cggcw50Ih2z5iuY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BI0DRlEYfsfOd2SxhPwDpVoo33fj14kipu3HA03rSyyD+BciCTZ/gKCCTpI9EWAPg Wn07agYia3UJvF9HSHgv7vz6TooAlqnM/CXGI8ZKCOT24j3Oa1tsdrYH31XHZfp/b6 
SfIe+/xOsEqVhbG54Jb1swehPk7t0OlHSVryBgr557rF8edfGIh2H76sqDQSLtViSW QK9weOYaljFU7rVOVtPHHKlOL1vlhZjm0ijgyqqQIlCBqeqGKjFRVmu3vjJVFIrYij sABkVYj9BcM1OCf2EP6kkP3DQcwABinF/F/9oyd4HY/od0yF2D58QsR6MEWl3tdksT hHWDOcfv9kKzw== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Herring , Steven Price , Alyssa Rosenzweig , Rob Clark , Emil Velikov , Robin Murphy , Qiang Yu , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, Dmitry Osipenko , Dmitry Osipenko , linux-tegra@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com Subject: [PATCH v6 17/22] drm/shmem-helper: Add generic memory shrinker Date: Fri, 27 May 2022 02:50:35 +0300 Message-Id: <20220526235040.678984-18-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com> References: <20220526235040.678984-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Introduce a common DRM SHMEM shrinker framework that allows reducing code duplication among DRM drivers by replacing their custom shrinker implementations with the generic shrinker. In order to start using the DRM SHMEM shrinker, drivers should: 1. Implement the new evict() shmem object callback. 2. Register the shrinker using drm_gem_shmem_shrinker_register(drm_device). 3. Use drm_gem_shmem_set_purgeable(shmem) and similar API functions to activate shrinking of shmem GEMs. This patch is based on ideas borrowed from Rob Clark's MSM shrinker, Thomas Zimmermann's variant of the SHMEM shrinker and Intel's i915 shrinker.
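For illustration, a minimal sketch of how a driver could wire up the three steps above; the foo_* names, the call sites and the error handling are assumptions made for this example only and are not part of the patch:

static int foo_gem_evict(struct drm_gem_shmem_object *shmem)
{
        /* Driver-specific teardown (e.g. unmapping from the GPU MMU) goes here. */

        if (drm_gem_shmem_is_purgeable(shmem))
                return drm_gem_shmem_purge(shmem);

        return drm_gem_shmem_evict(shmem);
}

static struct drm_gem_shmem_object *foo_gem_create(struct drm_device *drm,
                                                   size_t size)
{
        struct drm_gem_shmem_object *shmem;

        shmem = drm_gem_shmem_create(drm, size);
        if (IS_ERR(shmem))
                return shmem;

        /* 1. Hook up the eviction callback. */
        shmem->evict = foo_gem_evict;

        /* 3. Allow the shrinker to evict and purge this GEM. */
        drm_gem_shmem_set_evictable(shmem);
        drm_gem_shmem_set_purgeable(shmem);

        return shmem;
}

static int foo_drm_probe(struct drm_device *drm)
{
        /*
         * 2. Register the device-wide shrinker; paired with
         *    drm_gem_shmem_shrinker_unregister() on driver removal.
         */
        return drm_gem_shmem_shrinker_register(drm);
}

Buffers whose pages are pinned (pages_pin_count > 0) are skipped by the shrinker until they are unpinned again.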
Signed-off-by: Daniel Almeida Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem_shmem_helper.c | 540 ++++++++++++++++-- .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 9 +- drivers/gpu/drm/virtio/virtgpu_drv.h | 3 + include/drm/drm_device.h | 4 + include/drm/drm_gem_shmem_helper.h | 87 ++- 5 files changed, 594 insertions(+), 49 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index 555fe212bd98..4cd0b5913492 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -126,6 +126,42 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t } EXPORT_SYMBOL_GPL(drm_gem_shmem_create); +static bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem) +{ + return (shmem->madv >= 0) && shmem->evict && + shmem->eviction_enabled && shmem->pages_use_count && + !shmem->pages_pin_count && !shmem->base.dma_buf && + !shmem->base.import_attach && shmem->sgt && !shmem->evicted; +} + +static void +drm_gem_shmem_update_pages_state(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + struct drm_gem_shmem_shrinker *gem_shrinker = obj->dev->shmem_shrinker; + + dma_resv_assert_held(shmem->base.resv); + + if (!gem_shrinker || obj->import_attach) + return; + + mutex_lock(&gem_shrinker->lock); + + if (drm_gem_shmem_is_evictable(shmem) || + drm_gem_shmem_is_purgeable(shmem)) + list_move_tail(&shmem->madv_list, &gem_shrinker->lru_evictable); + else if (shmem->madv < 0) + list_del_init(&shmem->madv_list); + else if (shmem->evicted) + list_move_tail(&shmem->madv_list, &gem_shrinker->lru_evicted); + else if (!shmem->pages) + list_del_init(&shmem->madv_list); + else + list_move_tail(&shmem->madv_list, &gem_shrinker->lru_pinned); + + mutex_unlock(&gem_shrinker->lock); +} + /** * drm_gem_shmem_free - Free resources associated with a shmem GEM object * @shmem: shmem GEM object to free @@ -142,6 +178,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) } else { dma_resv_lock(shmem->base.resv, NULL); + /* take out shmem GEM object from the memory shrinker */ + drm_gem_shmem_madvise(shmem, -1); + WARN_ON(shmem->vmap_use_count); if (shmem->sgt) { @@ -150,7 +189,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) sg_free_table(shmem->sgt); kfree(shmem->sgt); } - if (shmem->pages) + if (shmem->pages_use_count) drm_gem_shmem_put_pages(shmem); WARN_ON(shmem->pages_use_count); @@ -163,18 +202,82 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem) } EXPORT_SYMBOL_GPL(drm_gem_shmem_free); -static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) +/** + * drm_gem_shmem_set_evictable() - Make GEM evictable by memory shrinker + * @shmem: shmem GEM object + * + * Tell memory shrinker that this GEM can be evicted. Initially eviction is + * disabled for all GEMs. If GEM was purged, then -ENOMEM is returned. + * + * Returns: + * 0 on success or a negative error code on failure. + */ +int drm_gem_shmem_set_evictable(struct drm_gem_shmem_object *shmem) +{ + dma_resv_lock(shmem->base.resv, NULL); + + if (shmem->madv < 0) + return -ENOMEM; + + shmem->eviction_enabled = true; + + dma_resv_unlock(shmem->base.resv); + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_set_evictable); + +/** + * drm_gem_shmem_set_purgeable() - Make GEM purgeable by memory shrinker + * @shmem: shmem GEM object + * + * Tell memory shrinker that this GEM can be purged. Initially purging is + * disabled for all GEMs. 
If GEM was purged, then -ENOMEM is returned. + * + * Returns: + * 0 on success or a negative error code on failure. + */ +int drm_gem_shmem_set_purgeable(struct drm_gem_shmem_object *shmem) +{ + dma_resv_lock(shmem->base.resv, NULL); + + if (shmem->madv < 0) + return -ENOMEM; + + shmem->purge_enabled = true; + + drm_gem_shmem_update_pages_state(shmem); + + dma_resv_unlock(shmem->base.resv); + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_set_purgeable); + +static int +drm_gem_shmem_acquire_pages(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; struct page **pages; - if (shmem->pages_use_count++ > 0) + dma_resv_assert_held(shmem->base.resv); + + if (shmem->madv < 0) { + WARN_ON(shmem->pages); + return -ENOMEM; + } + + if (shmem->pages) { + WARN_ON(!shmem->evicted); return 0; + } + + if (WARN_ON(!shmem->pages_use_count)) + return -EINVAL; pages = drm_gem_get_pages(obj); if (IS_ERR(pages)) { DRM_DEBUG_KMS("Failed to get pages (%ld)\n", PTR_ERR(pages)); - shmem->pages_use_count = 0; return PTR_ERR(pages); } @@ -193,6 +296,58 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) return 0; } +static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) +{ + int err; + + dma_resv_assert_held(shmem->base.resv); + + if (shmem->madv < 0) + return -ENOMEM; + + if (shmem->pages_use_count++ > 0) { + err = drm_gem_shmem_swap_in(shmem); + if (err) + goto err_zero_use; + + return 0; + } + + err = drm_gem_shmem_acquire_pages(shmem); + if (err) + goto err_zero_use; + + drm_gem_shmem_update_pages_state(shmem); + + return 0; + +err_zero_use: + shmem->pages_use_count = 0; + + return err; +} + +static void +drm_gem_shmem_release_pages(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + + if (!shmem->pages) { + WARN_ON(!shmem->evicted && shmem->madv >= 0); + return; + } + +#ifdef CONFIG_X86 + if (shmem->map_wc) + set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT); +#endif + + drm_gem_put_pages(obj, shmem->pages, + shmem->pages_mark_dirty_on_put, + shmem->pages_mark_accessed_on_put); + shmem->pages = NULL; +} + /* * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object * @shmem: shmem GEM object @@ -201,8 +356,6 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) */ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) { - struct drm_gem_object *obj = &shmem->base; - dma_resv_assert_held(shmem->base.resv); if (WARN_ON_ONCE(!shmem->pages_use_count)) @@ -211,15 +364,9 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem) if (--shmem->pages_use_count > 0) return; -#ifdef CONFIG_X86 - if (shmem->map_wc) - set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT); -#endif + drm_gem_shmem_release_pages(shmem); - drm_gem_put_pages(obj, shmem->pages, - shmem->pages_mark_dirty_on_put, - shmem->pages_mark_accessed_on_put); - shmem->pages = NULL; + drm_gem_shmem_update_pages_state(shmem); } EXPORT_SYMBOL(drm_gem_shmem_put_pages); @@ -235,11 +382,17 @@ EXPORT_SYMBOL(drm_gem_shmem_put_pages); */ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem) { + int ret; + dma_resv_assert_held(shmem->base.resv); WARN_ON(shmem->base.import_attach); - return drm_gem_shmem_get_pages(shmem); + ret = drm_gem_shmem_get_pages(shmem); + if (!ret) + shmem->pages_pin_count++; + + return ret; } EXPORT_SYMBOL(drm_gem_shmem_pin); @@ -257,6 +410,8 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem) WARN_ON(shmem->base.import_attach); 
drm_gem_shmem_put_pages(shmem); + + shmem->pages_pin_count--; } EXPORT_SYMBOL(drm_gem_shmem_unpin); @@ -299,7 +454,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, return 0; } - ret = drm_gem_shmem_get_pages(shmem); + ret = drm_gem_shmem_pin(shmem); if (ret) goto err_zero_use; @@ -322,7 +477,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, err_put_pages: if (!obj->import_attach) - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_unpin(shmem); err_zero_use: shmem->vmap_use_count = 0; @@ -359,7 +514,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, return; vunmap(shmem->vaddr); - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_unpin(shmem); } shmem->vaddr = NULL; @@ -403,41 +558,77 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv) madv = shmem->madv; + drm_gem_shmem_update_pages_state(shmem); + return (madv >= 0); } EXPORT_SYMBOL(drm_gem_shmem_madvise); -void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem) +/** + * drm_gem_shmem_swap_in() - Moves shmem GEM back to memory and enables + * hardware access to the memory. + * @shmem: shmem GEM object + * + * This function moves shmem GEM back to memory if it was previously evicted + * by the memory shrinker. The GEM is ready to use on success. + * + * Returns: + * 0 on success or a negative error code on failure. + */ +int drm_gem_shmem_swap_in(struct drm_gem_shmem_object *shmem) { struct drm_gem_object *obj = &shmem->base; - struct drm_device *dev = obj->dev; + struct sg_table *sgt; + int err; dma_resv_assert_held(shmem->base.resv); - WARN_ON(!drm_gem_shmem_is_purgeable(shmem)); + if (shmem->evicted) { + err = drm_gem_shmem_acquire_pages(shmem); + if (err) + return err; + + sgt = drm_gem_shmem_get_sg_table(shmem); + if (IS_ERR(sgt)) + return PTR_ERR(sgt); + + err = dma_map_sgtable(obj->dev->dev, sgt, + DMA_BIDIRECTIONAL, 0); + if (err) { + sg_free_table(sgt); + kfree(sgt); + return err; + } - dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0); - sg_free_table(shmem->sgt); - kfree(shmem->sgt); - shmem->sgt = NULL; + shmem->sgt = sgt; + shmem->evicted = false; - drm_gem_shmem_put_pages(shmem); + drm_gem_shmem_update_pages_state(shmem); + } - shmem->madv = -1; + if (!shmem->pages) + return -ENOMEM; - drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); - drm_gem_free_mmap_offset(obj); + return 0; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_swap_in); - /* Our goal here is to return as much of the memory as - * is possible back to the system as we are called from OOM. - * To do this we must instruct the shmfs to drop all of its - * backing pages, *now*. 
- */ - shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1); +static void drm_gem_shmem_unpin_pages(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + struct drm_device *dev = obj->dev; - invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1); + if (shmem->evicted) + return; + + dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0); + drm_gem_shmem_release_pages(shmem); + drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); + + sg_free_table(shmem->sgt); + kfree(shmem->sgt); + shmem->sgt = NULL; } -EXPORT_SYMBOL(drm_gem_shmem_purge); /** * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object @@ -488,22 +679,33 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf) vm_fault_t ret; struct page *page; pgoff_t page_offset; + bool pages_unpinned; + int err; /* We don't use vmf->pgoff since that has the fake offset */ page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT; dma_resv_lock(shmem->base.resv, NULL); - if (page_offset >= num_pages || - WARN_ON_ONCE(!shmem->pages) || - shmem->madv < 0) { + /* Sanity-check that we have the pages pointer when it should present */ + pages_unpinned = (shmem->evicted || shmem->madv < 0 || !shmem->pages_use_count); + WARN_ON_ONCE(!shmem->pages ^ pages_unpinned); + + if (page_offset >= num_pages || (!shmem->pages && !shmem->evicted)) { ret = VM_FAULT_SIGBUS; } else { + err = drm_gem_shmem_swap_in(shmem); + if (err) { + ret = VM_FAULT_OOM; + goto unlock; + } + page = shmem->pages[page_offset]; ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page)); } +unlock: dma_resv_unlock(shmem->base.resv); return ret; @@ -513,13 +715,15 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma) { struct drm_gem_object *obj = vma->vm_private_data; struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); - int ret; WARN_ON(shmem->base.import_attach); dma_resv_lock(shmem->base.resv, NULL); - ret = drm_gem_shmem_get_pages(shmem); - WARN_ON_ONCE(ret != 0); + + if (drm_gem_shmem_get_pages(shmem)) + shmem->pages_use_count++; + + drm_gem_shmem_update_pages_state(shmem); dma_resv_unlock(shmem->base.resv); drm_gem_vm_open(vma); @@ -583,6 +787,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_mmap); void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem, struct drm_printer *p, unsigned int indent) { + drm_printf_indent(p, indent, "eviction_enabled=%d\n", shmem->eviction_enabled); + drm_printf_indent(p, indent, "purge_enabled=%d\n", shmem->purge_enabled); drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count); if (shmem->base.import_attach) @@ -592,7 +798,9 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem, drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count); + drm_printf_indent(p, indent, "evicted=%d\n", shmem->evicted); drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr); + drm_printf_indent(p, indent, "madv=%d\n", shmem->madv); } EXPORT_SYMBOL(drm_gem_shmem_print_info); @@ -667,6 +875,8 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem) shmem->sgt = sgt; + drm_gem_shmem_update_pages_state(shmem); + dma_resv_unlock(shmem->base.resv); return sgt; @@ -717,6 +927,250 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev, } EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table); +static struct drm_gem_shmem_shrinker * +to_drm_shrinker(struct shrinker *shrinker) +{ + return container_of(shrinker, struct drm_gem_shmem_shrinker, base); +} + +static unsigned 
long +drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker, + struct shrink_control *sc) +{ + struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker); + struct drm_gem_shmem_object *shmem; + unsigned long count = 0; + + if (!mutex_trylock(&gem_shrinker->lock)) + return 0; + + list_for_each_entry(shmem, &gem_shrinker->lru_evictable, madv_list) { + count += shmem->base.size; + + if (count >= SHRINK_EMPTY) + break; + } + + mutex_unlock(&gem_shrinker->lock); + + if (count >= SHRINK_EMPTY) + return SHRINK_EMPTY - 1; + + return count ?: SHRINK_EMPTY; +} + +int drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem) +{ + WARN_ON(!drm_gem_shmem_is_evictable(shmem)); + WARN_ON(shmem->evicted); + + drm_gem_shmem_unpin_pages(shmem); + + shmem->evicted = true; + drm_gem_shmem_update_pages_state(shmem); + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_evict); + +int drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem) +{ + struct drm_gem_object *obj = &shmem->base; + + WARN_ON(!drm_gem_shmem_is_purgeable(shmem)); + + drm_gem_shmem_unpin_pages(shmem); + drm_gem_free_mmap_offset(obj); + + /* Our goal here is to return as much of the memory as + * is possible back to the system as we are called from OOM. + * To do this we must instruct the shmfs to drop all of its + * backing pages, *now*. + */ + shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1); + + invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1); + + shmem->madv = -1; + shmem->evicted = false; + drm_gem_shmem_update_pages_state(shmem); + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_purge); + +static unsigned long +drm_gem_shmem_shrinker_run_objects_scan(struct shrinker *shrinker, + unsigned long nr_to_scan, + bool *lock_contention, + bool evict) +{ + struct drm_gem_shmem_shrinker *gem_shrinker = to_drm_shrinker(shrinker); + struct drm_gem_shmem_object *shmem; + struct list_head still_in_list; + struct drm_gem_object *obj; + unsigned long freed = 0; + size_t page_count; + int err; + + INIT_LIST_HEAD(&still_in_list); + + mutex_lock(&gem_shrinker->lock); + + while (freed < nr_to_scan) { + shmem = list_first_entry_or_null(&gem_shrinker->lru_evictable, + typeof(*shmem), madv_list); + if (!shmem) + break; + + obj = &shmem->base; + page_count = obj->size >> PAGE_SHIFT; + list_move_tail(&shmem->madv_list, &still_in_list); + + if (evict) { + if (!drm_gem_shmem_is_evictable(shmem) || + get_nr_swap_pages() < page_count) + continue; + } else { + if (!drm_gem_shmem_is_purgeable(shmem)) + continue; + } + + /* + * If it's in the process of being freed, gem_object->free() + * may be blocked on lock waiting to remove it. So just + * skip it. 
+ */ + if (!kref_get_unless_zero(&obj->refcount)) + continue; + + mutex_unlock(&gem_shrinker->lock); + + /* prevent racing with job-submission code paths */ + if (!dma_resv_trylock(obj->resv)) { + *lock_contention |= true; + goto shrinker_lock; + } + + /* prevent racing with the dma-buf importing/exporting */ + if (!mutex_trylock(&gem_shrinker->dev->object_name_lock)) { + *lock_contention |= true; + goto resv_unlock; + } + + /* check whether h/w uses this object */ + if (!dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_WRITE)) + goto object_name_unlock; + + /* re-check whether eviction status hasn't changed */ + if (!drm_gem_shmem_is_evictable(shmem) && + !drm_gem_shmem_is_purgeable(shmem)) + goto object_name_unlock; + + err = shmem->evict(shmem); + if (!err) + freed += obj->size >> PAGE_SHIFT; + +object_name_unlock: + mutex_unlock(&gem_shrinker->dev->object_name_lock); +resv_unlock: + dma_resv_unlock(obj->resv); +shrinker_lock: + drm_gem_object_put(&shmem->base); + mutex_lock(&gem_shrinker->lock); + } + + list_splice_tail(&still_in_list, &gem_shrinker->lru_evictable); + + mutex_unlock(&gem_shrinker->lock); + + return freed; +} + +static unsigned long +drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker, + struct shrink_control *sc) +{ + unsigned long nr_to_scan = sc->nr_to_scan; + bool lock_contention = false; + unsigned long freed; + + /* purge as many objects as we can */ + freed = drm_gem_shmem_shrinker_run_objects_scan(shrinker, nr_to_scan, + &lock_contention, false); + + /* evict as many objects as we can */ + if (freed < nr_to_scan) + freed += drm_gem_shmem_shrinker_run_objects_scan(shrinker, + nr_to_scan - freed, + &lock_contention, + true); + + return (!freed && !lock_contention) ? SHRINK_STOP : freed; +} + +/** + * drm_gem_shmem_shrinker_register() - Register shmem shrinker + * @dev: DRM device + * + * Returns: + * 0 on success or a negative error code on failure. 
+ */ +int drm_gem_shmem_shrinker_register(struct drm_device *dev) +{ + struct drm_gem_shmem_shrinker *gem_shrinker; + int err; + + if (WARN_ON(dev->shmem_shrinker)) + return -EBUSY; + + gem_shrinker = kzalloc(sizeof(*gem_shrinker), GFP_KERNEL); + if (!gem_shrinker) + return -ENOMEM; + + gem_shrinker->base.count_objects = drm_gem_shmem_shrinker_count_objects; + gem_shrinker->base.scan_objects = drm_gem_shmem_shrinker_scan_objects; + gem_shrinker->base.seeks = DEFAULT_SEEKS; + gem_shrinker->dev = dev; + + INIT_LIST_HEAD(&gem_shrinker->lru_evictable); + INIT_LIST_HEAD(&gem_shrinker->lru_evicted); + INIT_LIST_HEAD(&gem_shrinker->lru_pinned); + mutex_init(&gem_shrinker->lock); + + dev->shmem_shrinker = gem_shrinker; + + err = register_shrinker(&gem_shrinker->base); + if (err) { + dev->shmem_shrinker = NULL; + kfree(gem_shrinker); + return err; + } + + return 0; +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_shrinker_register); + +/** + * drm_gem_shmem_shrinker_unregister() - Unregister shmem shrinker + * @dev: DRM device + */ +void drm_gem_shmem_shrinker_unregister(struct drm_device *dev) +{ + struct drm_gem_shmem_shrinker *gem_shrinker = dev->shmem_shrinker; + + if (gem_shrinker) { + unregister_shrinker(&gem_shrinker->base); + WARN_ON(!list_empty(&gem_shrinker->lru_evictable)); + WARN_ON(!list_empty(&gem_shrinker->lru_evicted)); + WARN_ON(!list_empty(&gem_shrinker->lru_pinned)); + mutex_destroy(&gem_shrinker->lock); + dev->shmem_shrinker = NULL; + kfree(gem_shrinker); + } +} +EXPORT_SYMBOL_GPL(drm_gem_shmem_shrinker_unregister); + MODULE_DESCRIPTION("DRM SHMEM memory-management helpers"); MODULE_IMPORT_NS(DMA_BUF); MODULE_LICENSE("GPL v2"); diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c index a4bedfeb2ec4..7cc32556f908 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c @@ -15,6 +15,13 @@ #include "panfrost_gem.h" #include "panfrost_mmu.h" +static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem) +{ + return (shmem->madv > 0) && + !shmem->pages_pin_count && shmem->sgt && + !shmem->base.dma_buf && !shmem->base.import_attach; +} + static unsigned long panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) { @@ -27,7 +34,7 @@ panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc return 0; list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) { - if (drm_gem_shmem_is_purgeable(shmem)) + if (panfrost_gem_shmem_is_purgeable(shmem)) count += shmem->base.size >> PAGE_SHIFT; } diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h index b2d93cb12ebf..81bacc7e1873 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.h +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h @@ -89,6 +89,7 @@ struct virtio_gpu_object { uint32_t hw_res_handle; bool dumb; bool created; + bool detached; bool host3d_blob, guest_blob; uint32_t blob_mem, blob_flags; @@ -453,6 +454,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo); +int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo); + int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev, uint32_t *resid); /* virtgpu_prime.c */ diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h index 9923c7a6885e..929546cad894 100644 --- a/include/drm/drm_device.h +++ b/include/drm/drm_device.h @@ -16,6 +16,7 @@ struct drm_vblank_crtc; struct drm_vma_offset_manager; 
struct drm_vram_mm; struct drm_fb_helper; +struct drm_gem_shmem_shrinker; struct inode; @@ -277,6 +278,9 @@ struct drm_device { /** @vram_mm: VRAM MM memory manager */ struct drm_vram_mm *vram_mm; + /** @shmem_shrinker: SHMEM GEM memory shrinker */ + struct drm_gem_shmem_shrinker *shmem_shrinker; + /** * @switch_power_state: * diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index 9a8983ee8abe..62c640678a91 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -6,6 +6,7 @@ #include #include #include +#include #include #include @@ -15,6 +16,7 @@ struct dma_buf_attachment; struct drm_mode_create_dumb; struct drm_printer; +struct drm_device; struct sg_table; /** @@ -39,12 +41,21 @@ struct drm_gem_shmem_object { */ unsigned int pages_use_count; + /** + * @pages_pin_count: + * + * Reference count on the pinned pages table. + * The pages can be evicted by memory shrinker + * when the count reaches zero. + */ + unsigned int pages_pin_count; + /** * @madv: State for madvise * * 0 is active/inuse. + * 1 is not-needed/can-be-purged * A negative value is the object is purged. - * Positive values are driver specific and not used by the helpers. */ int madv; @@ -91,6 +102,39 @@ struct drm_gem_shmem_object { * @map_wc: map object write-combined (instead of using shmem defaults). */ bool map_wc; + + /** + * @eviction_enabled: + * + * The shmem pages can be evicted only if @eviction_enabled is set to true. + * Used internally by memory shrinker. + */ + bool eviction_enabled; + + /** + * @purge_enabled: + * + * The shmem pages can be purged only if @purge_enabled is set to true. + * Used internally by memory shrinker. + */ + bool purge_enabled; + + /** + * @evicted: True if shmem pages are evicted by the memory shrinker. + * Used internally by memory shrinker. + */ + bool evicted; + + /** + * @evict: + * + * Invoked by shmem shrinker before evicting shmem GEM from memory. + * GEM's DMA reservation is kept locked by the shrinker. This is + * optional callback that should be specified by drivers. + * + * Returns 0 on success, or -errno on error. 
+ */ + int (*evict)(struct drm_gem_shmem_object *shmem); }; #define to_drm_gem_shmem_obj(obj) \ @@ -110,14 +154,21 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv); +int drm_gem_shmem_set_purgeable(struct drm_gem_shmem_object *shmem); +int drm_gem_shmem_set_evictable(struct drm_gem_shmem_object *shmem); + static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem) { - return (shmem->madv > 0) && - !shmem->vmap_use_count && shmem->sgt && - !shmem->base.dma_buf && !shmem->base.import_attach; + return (shmem->madv > 0) && shmem->evict && + shmem->purge_enabled && shmem->pages_use_count && + !shmem->pages_pin_count && !shmem->base.dma_buf && + !shmem->base.import_attach && (shmem->sgt || shmem->evicted); } -void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem); +int drm_gem_shmem_swap_in(struct drm_gem_shmem_object *shmem); + +int drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem); +int drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem); struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem); @@ -260,6 +311,32 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v return drm_gem_shmem_mmap(shmem, vma); } +/** + * struct drm_gem_shmem_shrinker - Generic memory shrinker for shmem GEMs + */ +struct drm_gem_shmem_shrinker { + /** @base: Shrinker for purging shmem GEM objects */ + struct shrinker base; + + /** @lock: Protects @lru_* */ + struct mutex lock; + + /** @lru_pinned: List of pinned shmem GEM objects */ + struct list_head lru_pinned; + + /** @lru_evictable: List of shmem GEM objects to be evicted */ + struct list_head lru_evictable; + + /** @lru_evicted: List of evicted shmem GEM objects */ + struct list_head lru_evicted; + + /** @dev: DRM device that uses this shrinker */ + struct drm_device *dev; +}; + +int drm_gem_shmem_shrinker_register(struct drm_device *dev); +void drm_gem_shmem_shrinker_unregister(struct drm_device *dev); + /* * Driver ops */ From patchwork Thu May 26 23:50:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 577120 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6E0EAC433EF for ; Thu, 26 May 2022 23:56:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1348233AbiEZX4T (ORCPT ); Thu, 26 May 2022 19:56:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53014 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349663AbiEZXzu (ORCPT ); Thu, 26 May 2022 19:55:50 -0400 Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [46.235.227.227]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 12038EBE95; Thu, 26 May 2022 16:55:11 -0700 (PDT) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: dmitry.osipenko) with ESMTPSA id 52C691F459FA DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1653609310; bh=nPEuKswQlR9Y3lB1Q2WuSBCu+q4IQv1b2R4ojzwKwKQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; 
b=m8aIO9spexkXB7Y8oBGI0YPYaWs/o9KynNSMnYM+QCOBYGce+N6mI0toHIP/pevv4 PlxmoM6raMVUbKMbZU4yqFIrwNzX1L2fR6z8pjpPqjpWVyI1qQHkbwl826zk1RkvCd B5p8s8la4i7uXnsYcU3b5rqVkgspWV12oarc04KFQ/srmesKgwe5Sq7hNvA1SrrU4w laM5D27QHrw8jeXLouHaAsmKIwEEmHZ5fbqrcQBamm8jCQT3HtFNnC2ZoX358vRF+o 3PW+ug5rE3stsfRINXRuPTcGQtOQs2d7EniYZPtWRp8voclYkmCeOajhS9ARnK6w+9 3tk/7Z0yybMwg== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Herring , Steven Price , Alyssa Rosenzweig , Rob Clark , Emil Velikov , Robin Murphy , Qiang Yu , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, Dmitry Osipenko , Dmitry Osipenko , linux-tegra@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com Subject: [PATCH v6 18/22] drm/gem: Add drm_gem_pin_unlocked() Date: Fri, 27 May 2022 02:50:36 +0300 Message-Id: <20220526235040.678984-19-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com> References: <20220526235040.678984-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Add unlocked variants of drm_gem_un/pin() functions and make them public. These new helpers will take care of GEM dma-reservation locking for DRM drivers. We are going to add memory shrinking support to the VirtIO-GPU driver that will need to pin framebuffers explicitly to prevent eviction of the actively used buffers by the shrinker. VirtIO-GPU driver will use these new generic helpers to pin shmem framebuffers. 
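As a rough usage sketch (the foo_* plane hooks are hypothetical and simplified compared to the VirtIO-GPU changes later in this series), a driver that must keep a framebuffer's backing storage resident while it is scanned out could do:

static int foo_plane_prepare_fb(struct drm_plane *plane,
                                struct drm_plane_state *new_state)
{
        if (!new_state->fb)
                return 0;

        /* Takes the GEM dma-reservation lock internally before pinning. */
        return drm_gem_pin_unlocked(new_state->fb->obj[0]);
}

static void foo_plane_cleanup_fb(struct drm_plane *plane,
                                 struct drm_plane_state *state)
{
        if (!state->fb)
                return;

        /* Drops the pin; the shrinker may evict the buffer again afterwards. */
        drm_gem_unpin_unlocked(state->fb->obj[0]);
}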
Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/drm_gem.c | 29 +++++++++++++++++++++++++++++ include/drm/drm_gem.h | 3 +++ 2 files changed, 32 insertions(+) diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index c61674887582..c909c935cfda 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -1179,6 +1179,35 @@ void drm_gem_unpin(struct drm_gem_object *obj) obj->funcs->unpin(obj); } +int drm_gem_pin_unlocked(struct drm_gem_object *obj) +{ + int ret; + + if (!obj->funcs->pin) + return 0; + + ret = dma_resv_lock_interruptible(obj->resv, NULL); + if (ret) + return ret; + + ret = obj->funcs->pin(obj); + dma_resv_unlock(obj->resv); + + return ret; +} +EXPORT_SYMBOL(drm_gem_pin_unlocked); + +void drm_gem_unpin_unlocked(struct drm_gem_object *obj) +{ + if (!obj->funcs->unpin) + return; + + dma_resv_lock(obj->resv, NULL); + obj->funcs->unpin(obj); + dma_resv_unlock(obj->resv); +} +EXPORT_SYMBOL(drm_gem_unpin_unlocked); + int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map) { int ret; diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index 0b427939f466..870d81e7a104 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -413,4 +413,7 @@ int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map); void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map); +int drm_gem_pin_unlocked(struct drm_gem_object *obj); +void drm_gem_unpin_unlocked(struct drm_gem_object *obj); + #endif /* __DRM_GEM_H__ */ From patchwork Thu May 26 23:50:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 577119 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5682FC433EF for ; Thu, 26 May 2022 23:56:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1349776AbiEZX4Z (ORCPT ); Thu, 26 May 2022 19:56:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52896 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349822AbiEZXzw (ORCPT ); Thu, 26 May 2022 19:55:52 -0400 Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [46.235.227.227]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D10F7EAB8F; Thu, 26 May 2022 16:55:14 -0700 (PDT) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: dmitry.osipenko) with ESMTPSA id 628F81F459F1 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1653609313; bh=85TXlmrEL16nPSy0z9qeQV9GmylBOH57/JI7RTS4oZE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=U6CL5hilbtGR359+F9u18dTMEGE69Hb3z9NVkPH4IcNe+kLbos+XUddme3uu4DlEA 4RXJ5C0B/6R10cJzDLN2+7ABqMJRt8fsJ5VTiq+Tv4JNgntkiK58kPP9qvLxkDUyA7 EaGaL+QG7IffkwkPeKy65RFSiHsVqIeKBZllweMjKAINYc1UsXnxoS3xlNW7yhoQJg id+AIXMBRctGi1NPyNSK5mbfl6RIIKYtYjwDpKbOd/m9xp+k9c7TArKtyQdas2C+3t D1p1RU3HxUAmPWT8hKejm4CkplAbagBhlys60ohKVGBl5eswpAcwV7Bs2j/1m0pfh/ liX5Ge0qOsdTA== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Herring , Steven Price , Alyssa 
Rosenzweig , Rob Clark , Emil Velikov , Robin Murphy , Qiang Yu , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, Dmitry Osipenko , Dmitry Osipenko , linux-tegra@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com Subject: [PATCH v6 19/22] drm/virtio: Support memory shrinking Date: Fri, 27 May 2022 02:50:37 +0300 Message-Id: <20220526235040.678984-20-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com> References: <20220526235040.678984-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Support generic drm-shmem memory shrinker and add new madvise IOCTL to the VirtIO-GPU driver. BO cache manager of Mesa driver will mark BOs as "don't need" using the new IOCTL to let shrinker purge the marked BOs on OOM, the shrinker will also evict unpurgeable shmem BOs from memory if guest supports SWAP file or partition. Altogether this allows to prevent OOM kills of guest applications that use VirGL by lowering memory pressure. Signed-off-by: Daniel Almeida Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/virtio/virtgpu_drv.h | 15 ++- drivers/gpu/drm/virtio/virtgpu_gem.c | 55 ++++++++++ drivers/gpu/drm/virtio/virtgpu_ioctl.c | 37 +++++++ drivers/gpu/drm/virtio/virtgpu_kms.c | 9 ++ drivers/gpu/drm/virtio/virtgpu_object.c | 138 +++++++++++++++++++----- drivers/gpu/drm/virtio/virtgpu_plane.c | 22 +++- drivers/gpu/drm/virtio/virtgpu_vq.c | 40 +++++++ include/uapi/drm/virtgpu_drm.h | 14 +++ 8 files changed, 300 insertions(+), 30 deletions(-) diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h index 81bacc7e1873..26b570029940 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.h +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h @@ -275,7 +275,7 @@ struct virtio_gpu_fpriv { }; /* virtgpu_ioctl.c */ -#define DRM_VIRTIO_NUM_IOCTLS 12 +#define DRM_VIRTIO_NUM_IOCTLS 13 extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS]; void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file); @@ -311,6 +311,10 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs); void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev, struct virtio_gpu_object_array *objs); void virtio_gpu_array_put_free_work(struct work_struct *work); +int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object_array *objs); +int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo); +int virtio_gpu_gem_madvise(struct virtio_gpu_object *obj, int madv); /* virtgpu_vq.c */ int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev); @@ -322,6 +326,8 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev, struct virtio_gpu_fence *fence); void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *bo); +int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *bo); void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev, uint64_t offset, uint32_t width, uint32_t height, @@ -342,6 +348,9 @@ void 
virtio_gpu_object_attach(struct virtio_gpu_device *vgdev, struct virtio_gpu_object *obj, struct virtio_gpu_mem_entry *ents, unsigned int nents); +void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *obj, + struct virtio_gpu_fence *fence); int virtio_gpu_attach_status_page(struct virtio_gpu_device *vgdev); int virtio_gpu_detach_status_page(struct virtio_gpu_device *vgdev); void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev, @@ -486,4 +495,8 @@ void virtio_gpu_vram_unmap_dma_buf(struct device *dev, struct sg_table *sgt, enum dma_data_direction dir); +/* virtgpu_gem_shrinker.c */ +int virtio_gpu_gem_shrinker_init(struct virtio_gpu_device *vgdev); +void virtio_gpu_gem_shrinker_fini(struct virtio_gpu_device *vgdev); + #endif diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c index 7db48d17ee3a..6c5d98e0f071 100644 --- a/drivers/gpu/drm/virtio/virtgpu_gem.c +++ b/drivers/gpu/drm/virtio/virtgpu_gem.c @@ -294,3 +294,58 @@ void virtio_gpu_array_put_free_work(struct work_struct *work) } spin_unlock(&vgdev->obj_free_lock); } + +int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object_array *objs) +{ + struct virtio_gpu_object *bo; + int ret = 0; + u32 i; + + for (i = 0; i < objs->nents; i++) { + bo = gem_to_virtio_gpu_obj(objs->objs[i]); + + if (virtio_gpu_is_shmem(bo) && bo->detached) { + ret = virtio_gpu_reattach_shmem_object(bo); + if (ret) + break; + } + } + + return ret; +} + +int virtio_gpu_gem_madvise(struct virtio_gpu_object *bo, int madv) +{ + int ret; + + /* + * For now we support only purging BOs that are backed by guest's + * memory. + */ + if (!virtio_gpu_is_shmem(bo)) + return true; + + dma_resv_lock(bo->base.base.resv, NULL); + ret = drm_gem_shmem_madvise(&bo->base, madv); + dma_resv_unlock(bo->base.base.resv); + + return ret; +} + +int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo) +{ + struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private; + int err; + + if (bo->created) { + err = virtio_gpu_cmd_release_resource(vgdev, bo); + if (err) + return err; + + virtio_gpu_notify(vgdev); + bo->created = false; + } + + return 0; +} diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c index f8d83358d2a0..55ee9bd2098e 100644 --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c @@ -217,6 +217,10 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data, ret = virtio_gpu_array_lock_resv(buflist); if (ret) goto out_memdup; + + ret = virtio_gpu_array_prepare(vgdev, buflist); + if (ret) + goto out_unresv; } out_fence = virtio_gpu_fence_alloc(vgdev, fence_ctx, ring_idx); @@ -423,6 +427,10 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev, if (ret != 0) goto err_put_free; + ret = virtio_gpu_array_prepare(vgdev, objs); + if (ret) + goto err_unlock; + fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); if (!fence) { ret = -ENOMEM; @@ -482,6 +490,10 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data, if (ret != 0) goto err_put_free; + ret = virtio_gpu_array_prepare(vgdev, objs); + if (ret) + goto err_unlock; + ret = -ENOMEM; fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); @@ -836,6 +848,28 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev, return ret; } +static int virtio_gpu_madvise_ioctl(struct drm_device *dev, + void *data, + struct drm_file *file) +{ + struct 
drm_virtgpu_madvise *args = data; + struct virtio_gpu_object *bo; + struct drm_gem_object *obj; + + if (args->madv > VIRTGPU_MADV_DONTNEED) + return -EOPNOTSUPP; + + obj = drm_gem_object_lookup(file, args->bo_handle); + if (!obj) + return -ENOENT; + + bo = gem_to_virtio_gpu_obj(obj); + args->retained = virtio_gpu_gem_madvise(bo, args->madv); + drm_gem_object_put(obj); + + return 0; +} + struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = { DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl, DRM_RENDER_ALLOW), @@ -875,4 +909,7 @@ struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = { DRM_IOCTL_DEF_DRV(VIRTGPU_CONTEXT_INIT, virtio_gpu_context_init_ioctl, DRM_RENDER_ALLOW), + + DRM_IOCTL_DEF_DRV(VIRTGPU_MADVISE, virtio_gpu_madvise_ioctl, + DRM_RENDER_ALLOW), }; diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c index 0d1e3eb61bee..1175999acea1 100644 --- a/drivers/gpu/drm/virtio/virtgpu_kms.c +++ b/drivers/gpu/drm/virtio/virtgpu_kms.c @@ -238,6 +238,12 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev) goto err_scanouts; } + ret = drm_gem_shmem_shrinker_register(dev); + if (ret) { + DRM_ERROR("shrinker init failed\n"); + goto err_modeset; + } + virtio_device_ready(vgdev->vdev); if (num_capsets) @@ -250,6 +256,8 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev) 5 * HZ); return 0; +err_modeset: + virtio_gpu_modeset_fini(vgdev); err_scanouts: virtio_gpu_free_vbufs(vgdev); err_vbufs: @@ -289,6 +297,7 @@ void virtio_gpu_release(struct drm_device *dev) if (!vgdev) return; + drm_gem_shmem_shrinker_unregister(dev); virtio_gpu_modeset_fini(vgdev); virtio_gpu_free_vbufs(vgdev); virtio_gpu_cleanup_cap_cache(vgdev); diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c index 8d7728181de0..ddae4f9402f7 100644 --- a/drivers/gpu/drm/virtio/virtgpu_object.c +++ b/drivers/gpu/drm/virtio/virtgpu_object.c @@ -97,39 +97,54 @@ static void virtio_gpu_free_object(struct drm_gem_object *obj) virtio_gpu_cleanup_object(bo); } -static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = { - .free = virtio_gpu_free_object, - .open = virtio_gpu_gem_object_open, - .close = virtio_gpu_gem_object_close, - .print_info = drm_gem_shmem_object_print_info, - .export = virtgpu_gem_prime_export, - .pin = drm_gem_shmem_object_pin, - .unpin = drm_gem_shmem_object_unpin, - .get_sg_table = drm_gem_shmem_object_get_sg_table, - .vmap = drm_gem_shmem_object_vmap, - .vunmap = drm_gem_shmem_object_vunmap, - .mmap = drm_gem_shmem_object_mmap, - .vm_ops = &drm_gem_shmem_vm_ops, -}; - -bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo) +static int virtio_gpu_detach_object_fenced(struct virtio_gpu_object *bo) { - return bo->base.base.funcs == &virtio_gpu_shmem_funcs; + struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private; + struct virtio_gpu_fence *fence; + + fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); + if (!fence) + return -ENOMEM; + + virtio_gpu_object_detach(vgdev, bo, fence); + virtio_gpu_notify(vgdev); + + dma_fence_wait(&fence->f, false); + dma_fence_put(&fence->f); + + bo->detached = true; + + return 0; } -struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev, - size_t size) +static int virtio_gpu_shmem_evict(struct drm_gem_shmem_object *shmem) { - struct virtio_gpu_object_shmem *shmem; - struct drm_gem_shmem_object *dshmem; + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(&shmem->base); + int err; + + /* + * At 
first tell host to stop using guest's memory to ensure that + * host won't touch the released guest's memory once it's gone. + */ + if (!shmem->evicted) { + err = virtio_gpu_detach_object_fenced(bo); + if (err) + return err; + } - shmem = kzalloc(sizeof(*shmem), GFP_KERNEL); - if (!shmem) - return ERR_PTR(-ENOMEM); + if (drm_gem_shmem_is_purgeable(shmem)) { + err = virtio_gpu_gem_host_mem_release(bo); + if (err) { + virtio_gpu_reattach_shmem_object(bo); + return err; + } - dshmem = &shmem->base.base; - dshmem->base.funcs = &virtio_gpu_shmem_funcs; - return &dshmem->base; + drm_gem_shmem_purge(shmem); + } else { + drm_gem_shmem_evict(shmem); + } + + return 0; } static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, @@ -176,6 +191,64 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev, return 0; } +int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo) +{ + struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private; + struct virtio_gpu_mem_entry *ents; + unsigned int nents; + int err; + + err = drm_gem_shmem_swap_in(&bo->base); + if (err) + return err; + + err = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); + if (err) + return err; + + virtio_gpu_object_attach(vgdev, bo, ents, nents); + virtio_gpu_notify(vgdev); + + bo->detached = false; + + return 0; +} + +static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = { + .free = virtio_gpu_free_object, + .open = virtio_gpu_gem_object_open, + .close = virtio_gpu_gem_object_close, + .print_info = drm_gem_shmem_object_print_info, + .export = virtgpu_gem_prime_export, + .pin = drm_gem_shmem_object_pin, + .unpin = drm_gem_shmem_object_unpin, + .get_sg_table = drm_gem_shmem_object_get_sg_table, + .vmap = drm_gem_shmem_object_vmap, + .vunmap = drm_gem_shmem_object_vunmap, + .mmap = drm_gem_shmem_object_mmap, + .vm_ops = &drm_gem_shmem_vm_ops, +}; + +bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo) +{ + return bo->base.base.funcs == &virtio_gpu_shmem_funcs; +} + +struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev, + size_t size) +{ + struct virtio_gpu_object_shmem *shmem; + struct drm_gem_shmem_object *dshmem; + + shmem = kzalloc(sizeof(*shmem), GFP_KERNEL); + if (!shmem) + return ERR_PTR(-ENOMEM); + + dshmem = &shmem->base.base; + dshmem->base.funcs = &virtio_gpu_shmem_funcs; + return &dshmem->base; +} + int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, struct virtio_gpu_object_params *params, struct virtio_gpu_object **bo_ptr, @@ -201,6 +274,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, goto err_free_gem; bo->dumb = params->dumb; + bo->base.evict = virtio_gpu_shmem_evict; ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents); if (ret != 0) @@ -228,10 +302,20 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev, virtio_gpu_cmd_resource_create_3d(vgdev, bo, params, objs, fence); virtio_gpu_object_attach(vgdev, bo, ents, nents); + + shmem_obj->pages_mark_dirty_on_put = 1; + + drm_gem_shmem_set_evictable(shmem_obj); + drm_gem_shmem_set_purgeable(shmem_obj); } else { virtio_gpu_cmd_create_resource(vgdev, bo, params, objs, fence); virtio_gpu_object_attach(vgdev, bo, ents, nents); + + shmem_obj->pages_mark_dirty_on_put = 1; + + drm_gem_shmem_set_evictable(shmem_obj); + drm_gem_shmem_set_purgeable(shmem_obj); } *bo_ptr = bo; diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c index 7148f3813d8b..246bf0c54996 100644 --- a/drivers/gpu/drm/virtio/virtgpu_plane.c +++ 
b/drivers/gpu/drm/virtio/virtgpu_plane.c @@ -246,20 +246,32 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane, struct virtio_gpu_device *vgdev = dev->dev_private; struct virtio_gpu_framebuffer *vgfb; struct virtio_gpu_object *bo; + int err; if (!new_state->fb) return 0; vgfb = to_virtio_gpu_framebuffer(new_state->fb); bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); - if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob)) + + if (virtio_gpu_is_shmem(bo)) { + err = drm_gem_pin_unlocked(&bo->base.base); + if (err) + return err; + } + + if (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob) return 0; if (bo->dumb && (plane->state->fb != new_state->fb)) { vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0); - if (!vgfb->fence) + if (!vgfb->fence) { + if (virtio_gpu_is_shmem(bo)) + drm_gem_unpin_unlocked(&bo->base.base); + return -ENOMEM; + } } return 0; @@ -269,15 +281,21 @@ static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane, struct drm_plane_state *state) { struct virtio_gpu_framebuffer *vgfb; + struct virtio_gpu_object *bo; if (!state->fb) return; vgfb = to_virtio_gpu_framebuffer(state->fb); + bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]); + if (vgfb->fence) { dma_fence_put(&vgfb->fence->f); vgfb->fence = NULL; } + + if (virtio_gpu_is_shmem(bo)) + drm_gem_unpin_unlocked(&bo->base.base); } static void virtio_gpu_cursor_plane_update(struct drm_plane *plane, diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c index 06566e44307d..2a04dad1ae89 100644 --- a/drivers/gpu/drm/virtio/virtgpu_vq.c +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c @@ -536,6 +536,21 @@ void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev, virtio_gpu_cleanup_object(bo); } +int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *bo) +{ + struct virtio_gpu_resource_unref *cmd_p; + struct virtio_gpu_vbuffer *vbuf; + + cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); + memset(cmd_p, 0, sizeof(*cmd_p)); + + cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_UNREF); + cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); + + return virtio_gpu_queue_ctrl_buffer(vgdev, vbuf); +} + void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev, uint32_t scanout_id, uint32_t resource_id, uint32_t width, uint32_t height, @@ -636,6 +651,23 @@ virtio_gpu_cmd_resource_attach_backing(struct virtio_gpu_device *vgdev, virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence); } +static void +virtio_gpu_cmd_resource_detach_backing(struct virtio_gpu_device *vgdev, + u32 resource_id, + struct virtio_gpu_fence *fence) +{ + struct virtio_gpu_resource_attach_backing *cmd_p; + struct virtio_gpu_vbuffer *vbuf; + + cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p)); + memset(cmd_p, 0, sizeof(*cmd_p)); + + cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING); + cmd_p->resource_id = cpu_to_le32(resource_id); + + virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence); +} + static void virtio_gpu_cmd_get_display_info_cb(struct virtio_gpu_device *vgdev, struct virtio_gpu_vbuffer *vbuf) { @@ -1099,6 +1131,14 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev, ents, nents, NULL); } +void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev, + struct virtio_gpu_object *obj, + struct virtio_gpu_fence *fence) +{ + virtio_gpu_cmd_resource_detach_backing(vgdev, obj->hw_res_handle, + fence); +} + void virtio_gpu_cursor_ping(struct virtio_gpu_device 
*vgdev, struct virtio_gpu_output *output) { diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h index 0512fde5e697..12197d8e9759 100644 --- a/include/uapi/drm/virtgpu_drm.h +++ b/include/uapi/drm/virtgpu_drm.h @@ -48,6 +48,7 @@ extern "C" { #define DRM_VIRTGPU_GET_CAPS 0x09 #define DRM_VIRTGPU_RESOURCE_CREATE_BLOB 0x0a #define DRM_VIRTGPU_CONTEXT_INIT 0x0b +#define DRM_VIRTGPU_MADVISE 0x0c #define VIRTGPU_EXECBUF_FENCE_FD_IN 0x01 #define VIRTGPU_EXECBUF_FENCE_FD_OUT 0x02 @@ -196,6 +197,15 @@ struct drm_virtgpu_context_init { __u64 ctx_set_params; }; +#define VIRTGPU_MADV_WILLNEED 0 +#define VIRTGPU_MADV_DONTNEED 1 +struct drm_virtgpu_madvise { + __u32 bo_handle; + __u32 retained; /* out, non-zero if BO can be used */ + __u32 madv; + __u32 pad; +}; + /* * Event code that's given when VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK is in * effect. The event size is sizeof(drm_event), since there is no additional @@ -246,6 +256,10 @@ struct drm_virtgpu_context_init { DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_CONTEXT_INIT, \ struct drm_virtgpu_context_init) +#define DRM_IOCTL_VIRTGPU_MADVISE \ + DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MADVISE, \ + struct drm_virtgpu_madvise) + #if defined(__cplusplus) } #endif From patchwork Thu May 26 23:50:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 576374 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 40742C433F5 for ; Thu, 26 May 2022 23:56:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1349791AbiEZX41 (ORCPT ); Thu, 26 May 2022 19:56:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53198 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349834AbiEZXzw (ORCPT ); Thu, 26 May 2022 19:55:52 -0400 Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [46.235.227.227]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DC47AED704; Thu, 26 May 2022 16:55:17 -0700 (PDT) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: dmitry.osipenko) with ESMTPSA id 67E0F1F459F5 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1653609316; bh=nY0MryRm8EK3fifz2ya2uo9G85zwxQHFBXlNamou3uM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TPeKw0UeuJCFCD2HqILO4z07soz9OoaVsjzzQgbDfal9kRh9+jz7LvWATtBgQ/b/x kvsVGbXAIRgLSj/o3IsgG3e0m9ULZ09K0x1PLQHx2s1wX7O3I/nZ4aKNuIOeBTYCLM Jz14QKo4VICvqsgcicgS7cvaPCUF3AKEra860bEiriwvcZ5boXfPUq2UOT7/McCtXn 7aDOtQqFws0SBl2VPJtXQ7uWVDsUi1tK8X3qWklVQr01dlx2F+Y7Edi4uXRMSP3TXf y/NyD+h7NZMis9pT+P/8sCSKiMGjnY/ji3qklgJLLLPGW5QIO3uMNNLCXl2TLoAgoR 15ZWBWtRuh7HQ== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Herring , Steven Price , Alyssa Rosenzweig , Rob Clark , Emil Velikov , Robin Murphy , Qiang Yu , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, 
virtualization@lists.linux-foundation.org, Dmitry Osipenko , Dmitry Osipenko , linux-tegra@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com Subject: [PATCH v6 20/22] drm/virtio: Use dev_is_pci() Date: Fri, 27 May 2022 02:50:38 +0300 Message-Id: <20220526235040.678984-21-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com> References: <20220526235040.678984-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Use common dev_is_pci() helper to replace the strcmp("pci") used by driver. Suggested-by: Robin Murphy Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/virtio/virtgpu_drv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c index 0141b7df97ec..0035affc3e59 100644 --- a/drivers/gpu/drm/virtio/virtgpu_drv.c +++ b/drivers/gpu/drm/virtio/virtgpu_drv.c @@ -87,7 +87,7 @@ static int virtio_gpu_probe(struct virtio_device *vdev) return PTR_ERR(dev); vdev->priv = dev; - if (!strcmp(vdev->dev.parent->bus->name, "pci")) { + if (dev_is_pci(vdev->dev.parent)) { ret = virtio_gpu_pci_quirk(dev); if (ret) goto err_free; From patchwork Thu May 26 23:50:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 577118 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 721E4C433EF for ; Thu, 26 May 2022 23:56:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244892AbiEZX4d (ORCPT ); Thu, 26 May 2022 19:56:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53606 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349842AbiEZXzx (ORCPT ); Thu, 26 May 2022 19:55:53 -0400 Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [46.235.227.227]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 94484ED71A; Thu, 26 May 2022 16:55:20 -0700 (PDT) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: dmitry.osipenko) with ESMTPSA id 73A211F459FE DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1653609319; bh=e7WawsCreQZA6oq9juG2n98bdBThV2TqnbrdmbGR/iM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ZDjpXRH1m6FKUcq+BXCWaqIO/5RtOY9SMw67juPPt67p9qIENyjxZio3RG2eXtbIU EFxCTrqqkl2e/2bubVhWZ7DGgqpjrW4mi9eXtLce1ntgPuqWZslCqVdudNCN7C8SgX nkxR08+/Du4jh2tfvHMAuzjd1RKrErCYTuBd96vgRxyegNdSegbJmT54xlrS8kkj70 KDeTWHTi2CTu3cS6m+kc/AQaRyCFLKFZJYXeSOe1zhEnTdVzXqlHBNynf7tG6lANHw pYOb1K4aWEKDCzYQUNSSd06+CPx3hsEc4BhNLfjm4BSrZh+o+b7OVw+ICTgQ1bLHKn wDHwAPmaNjcBA== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Herring , Steven Price , Alyssa Rosenzweig , Rob Clark , Emil Velikov , Robin Murphy , Qiang Yu , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani 
Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, Dmitry Osipenko , Dmitry Osipenko , linux-tegra@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com Subject: [PATCH v6 21/22] drm/virtio: Return proper error codes instead of -1 Date: Fri, 27 May 2022 02:50:39 +0300 Message-Id: <20220526235040.678984-22-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com> References: <20220526235040.678984-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Don't return -1 in error cases, return proper error code. The returned error codes propagate to error messages and to userspace and it's always good to have a meaningful error number for debugging purposes. Signed-off-by: Dmitry Osipenko --- drivers/gpu/drm/virtio/virtgpu_vq.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c index 2a04dad1ae89..40402367d593 100644 --- a/drivers/gpu/drm/virtio/virtgpu_vq.c +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c @@ -320,7 +320,7 @@ static int virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev, if (fence && vbuf->objs) virtio_gpu_array_unlock_resv(vbuf->objs); free_vbuf(vgdev, vbuf); - return -1; + return -ENODEV; } if (vgdev->has_indirect) @@ -384,7 +384,7 @@ static int virtio_gpu_queue_fenced_ctrl_buffer(struct virtio_gpu_device *vgdev, if (!sgt) { if (fence && vbuf->objs) virtio_gpu_array_unlock_resv(vbuf->objs); - return -1; + return -ENOMEM; } elemcnt += sg_ents; @@ -750,7 +750,7 @@ static int virtio_get_edid_block(void *data, u8 *buf, size_t start = block * EDID_LENGTH; if (start + len > le32_to_cpu(resp->size)) - return -1; + return -EINVAL; memcpy(buf, resp->edid + start, len); return 0; } From patchwork Thu May 26 23:50:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dmitry Osipenko X-Patchwork-Id: 576373 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0F64BC433EF for ; Thu, 26 May 2022 23:57:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235269AbiEZX4u (ORCPT ); Thu, 26 May 2022 19:56:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54062 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346078AbiEZX4D (ORCPT ); Thu, 26 May 2022 19:56:03 -0400 Received: from bhuna.collabora.co.uk (bhuna.collabora.co.uk [46.235.227.227]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2D211EBABA; Thu, 26 May 2022 16:55:23 -0700 (PDT) Received: from [127.0.0.1] (localhost [127.0.0.1]) (Authenticated sender: dmitry.osipenko) with ESMTPSA id 83EE31F459E1 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=collabora.com; s=mail; t=1653609322; bh=GcJU3CrMCSiWrlIoKBMfmzEMw/gmugBtIngM2iKfxh0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=F7FDWhmfFfDjmJG7K78f3P9f33Tl24zN83fZluYsppV0F1wtX5BlP8lVfeIgNqgMl RfYVslSEBGuHzHGBvAGl6svt6kjflBT/kacYZt1dWozChgDPAv2aGS31tGqRWbg26M 
vDl15WW9Xz6PnHEJdEdGhW4lVQ4MRQ0s1ejzXEszBhwk3jdy0Rbp/nFLrtNWxbMRqX KD9XTJkju6ld9602GAXy/4Yx06atqIO+7nLfvZJbAfj3kBjPUQu+f9E75DzHSXaIMe 2FuQNT/pj0BnXG0GTzn7q/J+l+kNBXzt0hXtvFWVP096r5a+SA2NTRyMWjneipKEa6 +qUhKVEBVKugg== From: Dmitry Osipenko To: David Airlie , Gerd Hoffmann , Gurchetan Singh , Chia-I Wu , Daniel Vetter , Daniel Almeida , Gert Wollny , Gustavo Padovan , Daniel Stone , Tomeu Vizoso , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Rob Herring , Steven Price , Alyssa Rosenzweig , Rob Clark , Emil Velikov , Robin Murphy , Qiang Yu , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , "Pan, Xinhui" , Thierry Reding , Tomasz Figa , Marek Szyprowski , Mauro Carvalho Chehab , Alex Deucher , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Tvrtko Ursulin Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, Dmitry Osipenko , Dmitry Osipenko , linux-tegra@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org, kernel@collabora.com Subject: [PATCH v6 22/22] drm/panfrost: Switch to generic memory shrinker Date: Fri, 27 May 2022 02:50:40 +0300 Message-Id: <20220526235040.678984-23-dmitry.osipenko@collabora.com> X-Mailer: git-send-email 2.35.3 In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com> References: <20220526235040.678984-1-dmitry.osipenko@collabora.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Replace Panfrost's memory shrinker with a generic drm-shmem memory shrinker. Tested-by: Steven Price Signed-off-by: Dmitry Osipenko Acked-by: Alyssa Rosenzweig --- drivers/gpu/drm/panfrost/Makefile | 1 - drivers/gpu/drm/panfrost/panfrost_device.h | 4 - drivers/gpu/drm/panfrost/panfrost_drv.c | 19 +-- drivers/gpu/drm/panfrost/panfrost_gem.c | 33 +++-- drivers/gpu/drm/panfrost/panfrost_gem.h | 9 -- .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 129 ------------------ drivers/gpu/drm/panfrost/panfrost_job.c | 18 ++- 7 files changed, 42 insertions(+), 171 deletions(-) delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c diff --git a/drivers/gpu/drm/panfrost/Makefile b/drivers/gpu/drm/panfrost/Makefile index b71935862417..ecf0864cb515 100644 --- a/drivers/gpu/drm/panfrost/Makefile +++ b/drivers/gpu/drm/panfrost/Makefile @@ -5,7 +5,6 @@ panfrost-y := \ panfrost_device.o \ panfrost_devfreq.o \ panfrost_gem.o \ - panfrost_gem_shrinker.o \ panfrost_gpu.o \ panfrost_job.o \ panfrost_mmu.o \ diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h index 8b25278f34c8..fe04b21fc044 100644 --- a/drivers/gpu/drm/panfrost/panfrost_device.h +++ b/drivers/gpu/drm/panfrost/panfrost_device.h @@ -115,10 +115,6 @@ struct panfrost_device { atomic_t pending; } reset; - struct mutex shrinker_lock; - struct list_head shrinker_list; - struct shrinker shrinker; - struct panfrost_devfreq pfdevfreq; }; diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c index 859e240161d1..b77c99ba2475 100644 --- a/drivers/gpu/drm/panfrost/panfrost_drv.c +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c @@ -160,7 +160,6 @@ panfrost_lookup_bos(struct drm_device *dev, break; } - atomic_inc(&bo->gpu_usecount); job->mappings[i] = mapping; } @@ -392,7 +391,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data, { struct panfrost_file_priv *priv = file_priv->driver_priv; struct drm_panfrost_madvise *args = data; - struct 
panfrost_device *pfdev = dev->dev_private; struct drm_gem_object *gem_obj; struct panfrost_gem_object *bo; int ret = 0; @@ -409,7 +407,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data, if (ret) goto out_put_object; - mutex_lock(&pfdev->shrinker_lock); mutex_lock(&bo->mappings.lock); if (args->madv == PANFROST_MADV_DONTNEED) { struct panfrost_gem_mapping *first; @@ -435,17 +432,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data, args->retained = drm_gem_shmem_madvise(&bo->base, args->madv); - if (args->retained) { - if (args->madv == PANFROST_MADV_DONTNEED) - list_move_tail(&bo->base.madv_list, - &pfdev->shrinker_list); - else if (args->madv == PANFROST_MADV_WILLNEED) - list_del_init(&bo->base.madv_list); - } - out_unlock_mappings: mutex_unlock(&bo->mappings.lock); - mutex_unlock(&pfdev->shrinker_lock); dma_resv_unlock(bo->base.base.resv); out_put_object: drm_gem_object_put(gem_obj); @@ -577,9 +565,6 @@ static int panfrost_probe(struct platform_device *pdev) ddev->dev_private = pfdev; pfdev->ddev = ddev; - mutex_init(&pfdev->shrinker_lock); - INIT_LIST_HEAD(&pfdev->shrinker_list); - err = panfrost_device_init(pfdev); if (err) { if (err != -EPROBE_DEFER) @@ -601,7 +586,7 @@ static int panfrost_probe(struct platform_device *pdev) if (err < 0) goto err_out1; - panfrost_gem_shrinker_init(ddev); + drm_gem_shmem_shrinker_register(ddev); return 0; @@ -619,8 +604,8 @@ static int panfrost_remove(struct platform_device *pdev) struct panfrost_device *pfdev = platform_get_drvdata(pdev); struct drm_device *ddev = pfdev->ddev; + drm_gem_shmem_shrinker_unregister(ddev); drm_dev_unregister(ddev); - panfrost_gem_shrinker_cleanup(ddev); pm_runtime_get_sync(pfdev->dev); pm_runtime_disable(pfdev->dev); diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c index 293e799e2fe8..f1436405e3a0 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.c +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c @@ -19,16 +19,6 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj) struct panfrost_gem_object *bo = to_panfrost_bo(obj); struct panfrost_device *pfdev = obj->dev->dev_private; - /* - * Make sure the BO is no longer inserted in the shrinker list before - * taking care of the destruction itself. If we don't do that we have a - * race condition between this function and what's done in - * panfrost_gem_shrinker_scan(). - */ - mutex_lock(&pfdev->shrinker_lock); - list_del_init(&bo->base.madv_list); - mutex_unlock(&pfdev->shrinker_lock); - /* * If we still have mappings attached to the BO, there's a problem in * our refcounting. @@ -209,6 +199,25 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = { .vm_ops = &drm_gem_shmem_vm_ops, }; +static int panfrost_shmem_evict(struct drm_gem_shmem_object *shmem) +{ + struct panfrost_gem_object *bo = to_panfrost_bo(&shmem->base); + + if (!drm_gem_shmem_is_purgeable(shmem)) + return -EOPNOTSUPP; + + if (!mutex_trylock(&bo->mappings.lock)) + return -EBUSY; + + panfrost_gem_teardown_mappings_locked(bo); + + drm_gem_shmem_purge(shmem); + + mutex_unlock(&bo->mappings.lock); + + return 0; +} + /** * panfrost_gem_create_object - Implementation of driver->gem_create_object. 
* @dev: DRM device @@ -230,6 +239,7 @@ struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t mutex_init(&obj->mappings.lock); obj->base.base.funcs = &panfrost_gem_funcs; obj->base.map_wc = !pfdev->coherent; + obj->base.evict = panfrost_shmem_evict; return &obj->base.base; } @@ -266,6 +276,9 @@ panfrost_gem_create_with_handle(struct drm_file *file_priv, if (ret) return ERR_PTR(ret); + if (!bo->is_heap) + drm_gem_shmem_set_purgeable(shmem); + return bo; } diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h index 8088d5fd8480..09da064f1c07 100644 --- a/drivers/gpu/drm/panfrost/panfrost_gem.h +++ b/drivers/gpu/drm/panfrost/panfrost_gem.h @@ -30,12 +30,6 @@ struct panfrost_gem_object { struct mutex lock; } mappings; - /* - * Count the number of jobs referencing this BO so we don't let the - * shrinker reclaim this object prematurely. - */ - atomic_t gpu_usecount; - bool noexec :1; bool is_heap :1; }; @@ -84,7 +78,4 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo, void panfrost_gem_mapping_put(struct panfrost_gem_mapping *mapping); void panfrost_gem_teardown_mappings_locked(struct panfrost_gem_object *bo); -void panfrost_gem_shrinker_init(struct drm_device *dev); -void panfrost_gem_shrinker_cleanup(struct drm_device *dev); - #endif /* __PANFROST_GEM_H__ */ diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c deleted file mode 100644 index 7cc32556f908..000000000000 --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c +++ /dev/null @@ -1,129 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* Copyright (C) 2019 Arm Ltd. - * - * Based on msm_gem_freedreno.c: - * Copyright (C) 2016 Red Hat - * Author: Rob Clark - */ - -#include - -#include -#include - -#include "panfrost_device.h" -#include "panfrost_gem.h" -#include "panfrost_mmu.h" - -static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem) -{ - return (shmem->madv > 0) && - !shmem->pages_pin_count && shmem->sgt && - !shmem->base.dma_buf && !shmem->base.import_attach; -} - -static unsigned long -panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) -{ - struct panfrost_device *pfdev = - container_of(shrinker, struct panfrost_device, shrinker); - struct drm_gem_shmem_object *shmem; - unsigned long count = 0; - - if (!mutex_trylock(&pfdev->shrinker_lock)) - return 0; - - list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) { - if (panfrost_gem_shmem_is_purgeable(shmem)) - count += shmem->base.size >> PAGE_SHIFT; - } - - mutex_unlock(&pfdev->shrinker_lock); - - return count; -} - -static bool panfrost_gem_purge(struct drm_gem_object *obj) -{ - struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); - struct panfrost_gem_object *bo = to_panfrost_bo(obj); - bool ret = false; - - if (atomic_read(&bo->gpu_usecount)) - return false; - - if (!mutex_trylock(&bo->mappings.lock)) - return false; - - if (!dma_resv_trylock(shmem->base.resv)) - goto unlock_mappings; - - panfrost_gem_teardown_mappings_locked(bo); - drm_gem_shmem_purge(&bo->base); - ret = true; - - dma_resv_unlock(shmem->base.resv); - -unlock_mappings: - mutex_unlock(&bo->mappings.lock); - return ret; -} - -static unsigned long -panfrost_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) -{ - struct panfrost_device *pfdev = - container_of(shrinker, struct panfrost_device, shrinker); - struct drm_gem_shmem_object *shmem, *tmp; - unsigned long freed = 0; - - if 
(!mutex_trylock(&pfdev->shrinker_lock)) - return SHRINK_STOP; - - list_for_each_entry_safe(shmem, tmp, &pfdev->shrinker_list, madv_list) { - if (freed >= sc->nr_to_scan) - break; - if (drm_gem_shmem_is_purgeable(shmem) && - panfrost_gem_purge(&shmem->base)) { - freed += shmem->base.size >> PAGE_SHIFT; - list_del_init(&shmem->madv_list); - } - } - - mutex_unlock(&pfdev->shrinker_lock); - - if (freed > 0) - pr_info_ratelimited("Purging %lu bytes\n", freed << PAGE_SHIFT); - - return freed; -} - -/** - * panfrost_gem_shrinker_init - Initialize panfrost shrinker - * @dev: DRM device - * - * This function registers and sets up the panfrost shrinker. - */ -void panfrost_gem_shrinker_init(struct drm_device *dev) -{ - struct panfrost_device *pfdev = dev->dev_private; - pfdev->shrinker.count_objects = panfrost_gem_shrinker_count; - pfdev->shrinker.scan_objects = panfrost_gem_shrinker_scan; - pfdev->shrinker.seeks = DEFAULT_SEEKS; - WARN_ON(register_shrinker(&pfdev->shrinker)); -} - -/** - * panfrost_gem_shrinker_cleanup - Clean up panfrost shrinker - * @dev: DRM device - * - * This function unregisters the panfrost shrinker. - */ -void panfrost_gem_shrinker_cleanup(struct drm_device *dev) -{ - struct panfrost_device *pfdev = dev->dev_private; - - if (pfdev->shrinker.nr_deferred) { - unregister_shrinker(&pfdev->shrinker); - } -} diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c index 7c4208476fbd..5c327a79455f 100644 --- a/drivers/gpu/drm/panfrost/panfrost_job.c +++ b/drivers/gpu/drm/panfrost/panfrost_job.c @@ -271,6 +271,19 @@ static void panfrost_attach_object_fences(struct drm_gem_object **bos, dma_resv_add_fence(bos[i]->resv, fence, DMA_RESV_USAGE_WRITE); } +static int panfrost_objects_prepare(struct drm_gem_object **bos, int bo_count) +{ + struct panfrost_gem_object *bo; + int ret = 0; + + while (!ret && bo_count--) { + bo = to_panfrost_bo(bos[bo_count]); + ret = bo->base.madv ? -ENOMEM : 0; + } + + return ret; +} + int panfrost_job_push(struct panfrost_job *job) { struct panfrost_device *pfdev = job->pfdev; @@ -282,6 +295,10 @@ int panfrost_job_push(struct panfrost_job *job) if (ret) return ret; + ret = panfrost_objects_prepare(job->bos, job->bo_count); + if (ret) + goto unlock; + mutex_lock(&pfdev->sched_lock); drm_sched_job_arm(&job->base); @@ -323,7 +340,6 @@ static void panfrost_job_cleanup(struct kref *ref) if (!job->mappings[i]) break; - atomic_dec(&job->mappings[i]->obj->gpu_usecount); panfrost_gem_mapping_put(job->mappings[i]); } kvfree(job->mappings);
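For readers following the series, the two sketches below illustrate how the pieces quoted above fit together. They are illustrative only: the foo_* names are hypothetical, and drm_gem_shmem_set_purgeable()/drm_gem_shmem_set_evictable(), drm_gem_shmem_purge(), the shmem ->evict hook and drm_gem_shmem_shrinker_register()/unregister() are helpers introduced earlier in this series rather than mainline API; the calls simply mirror how the virtio-gpu and Panfrost hunks above use them.

First, the driver-side pattern that patch 22/22 applies to Panfrost: set an evict callback on the shmem object, mark BOs evictable/purgeable once they are fully set up, and register the per-device generic shrinker:

/* Hypothetical "foo" driver wiring, mirroring the Panfrost conversion. */
#include <linux/err.h>
#include <linux/slab.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem_shmem_helper.h>

/* Called by the generic shrinker with the BO's reservation lock held. */
static int foo_shmem_evict(struct drm_gem_shmem_object *shmem)
{
	if (!drm_gem_shmem_is_purgeable(shmem))
		return -EOPNOTSUPP;

	/* Driver-specific teardown (GPU mappings, TLB maintenance) goes here. */

	drm_gem_shmem_purge(shmem);

	return 0;
}

static const struct drm_gem_object_funcs foo_gem_funcs = {
	.free = drm_gem_shmem_object_free,
	/* ...plus the usual drm_gem_shmem_object_* callbacks, as in the
	 * virtio-gpu funcs table above, and .vm_ops = &drm_gem_shmem_vm_ops. */
};

struct drm_gem_object *foo_gem_create_object(struct drm_device *dev, size_t size)
{
	struct drm_gem_shmem_object *shmem;

	shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
	if (!shmem)
		return ERR_PTR(-ENOMEM);

	shmem->base.funcs = &foo_gem_funcs;
	shmem->evict = foo_shmem_evict;		/* opt in to reclaim */

	return &shmem->base;
}

/* Once a BO is fully created (cf. virtio_gpu_object_create() above), the
 * driver marks it reclaimable:
 *	drm_gem_shmem_set_evictable(shmem);
 *	drm_gem_shmem_set_purgeable(shmem);
 */

static int foo_drm_load(struct drm_device *ddev)
{
	int err = drm_dev_register(ddev, 0);

	if (err)
		return err;

	/* One generic per-device shrinker replaces the driver-private one. */
	drm_gem_shmem_shrinker_register(ddev);

	return 0;
}

static void foo_drm_unload(struct drm_device *ddev)
{
	drm_gem_shmem_shrinker_unregister(ddev);
	drm_dev_unregister(ddev);
}

Second, the userspace side for virtio-gpu: the new DRM_VIRTGPU_MADVISE ioctl from the uapi header above lets a process volunteer a BO for purging and later check whether it was retained:

/* Userspace sketch; assumes an open virtio-gpu DRM fd and a GEM handle. */
#include <sys/ioctl.h>
#include <drm/virtgpu_drm.h>	/* installed uapi header carrying the ioctl above */

static int bo_madvise(int drm_fd, unsigned int bo_handle, unsigned int madv)
{
	struct drm_virtgpu_madvise args = {
		.bo_handle = bo_handle,
		.madv = madv,		/* VIRTGPU_MADV_WILLNEED or VIRTGPU_MADV_DONTNEED */
	};

	if (ioctl(drm_fd, DRM_IOCTL_VIRTGPU_MADVISE, &args))
		return -1;

	/* retained == 0 after WILLNEED means the BO was purged in the
	 * meantime and its contents must be recreated before reuse. */
	return args.retained;
}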