
[v4,00/33] drm/msm: sparse / "VM_BIND" support

Message ID 20250502165831.44850-1-robdclark@gmail.com

Message

Rob Clark May 2, 2025, 4:56 p.m. UTC
From: Rob Clark <robdclark@chromium.org>

This series converts the driver to the DRM GPU VA Manager[1] and adds
support for Vulkan Sparse Memory[2] in the form of:

1. A new VM_BIND submitqueue type for executing VM MSM_SUBMIT_BO_OP_MAP/
   MAP_NULL/UNMAP commands

2. A new VM_BIND ioctl to allow submitting batches of one or more
   MAP/MAP_NULL/UNMAP commands to a VM_BIND submitqueue

I did not implement support for synchronous VM_BIND commands.  Since
userspace could just immediately wait for the `SUBMIT` to complete, I don't
think we need this extra complexity in the kernel.  Synchronous/immediate
VM_BIND operations could be implemented with a 2nd VM_BIND submitqueue.

The corresponding mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32533

Changes in v4:
- Replace selftests_running flag with IO_PGTABLE_QUIRK_NO_WARN_ON [Robin
  Murphy]
- Rework msm_gem_vm_sm_step_remap() for cases where orig_vma is evicted,
  to fix some crashes
- Block when drm_file is closed until pending VM_BIND ops complete, before
  tearing down the VM's scheduler, to solve some memory leaks.
- Link to v3: https://lore.kernel.org/all/20250428205619.227835-1-robdclark@gmail.com/

Changes in v3:
- Switched to separate VM_BIND ioctl.  This makes the UABI a bit
  cleaner, but OTOH the userspace code was cleaner when the end result
  of either type of VkQueue led to the same ioctl.  So I'm a bit on
  the fence.
- Switched to doing the gpuvm bookkeeping synchronously, and only
  deferring the pgtable updates.  This avoids needing to hold any resv
  locks in the fence signaling path, resolving the last shrinker related
  lockdep complaints.  OTOH it means userspace can trigger invalid
  pgtable updates with multiple VM_BIND queues.  In this case, we ensure
  that unmaps happen completely (to prevent userspace from using this to
  access freed pages), mark the context as unusable, and move on with
  life.
- Link to v2: https://lore.kernel.org/all/20250319145425.51935-1-robdclark@gmail.com/

Changes in v2:
- Dropped Bibek Kumar Patro's arm-smmu patches[3], which have since been
  merged.
- Pre-allocate all the things, and drop HACK patch which disabled shrinker.
  This includes ensuring that vm_bo objects are allocated up front, pre-
  allocating VMA objects, and pre-allocating pages used for pgtable updates.
  The latter utilizes io_pgtable_cfg callbacks for pgtable alloc/free, that
  were initially added for panthor.
- Add back support for BO dumping for devcoredump.
- Link to v1 (RFC): https://lore.kernel.org/dri-devel/20241207161651.410556-1-robdclark@gmail.com/T/#t

[1] https://www.kernel.org/doc/html/next/gpu/drm-mm.html#drm-gpuvm
[2] https://docs.vulkan.org/spec/latest/chapters/sparsemem.html
[3] https://patchwork.kernel.org/project/linux-arm-kernel/list/?series=909700

Rob Clark (33):
  drm/gpuvm: Don't require obj lock in destructor path
  drm/gpuvm: Allow VAs to hold soft reference to BOs
  iommu/io-pgtable-arm: Add quirk to quiet WARN_ON()
  drm/msm: Rename msm_file_private -> msm_context
  drm/msm: Improve msm_context comments
  drm/msm: Rename msm_gem_address_space -> msm_gem_vm
  drm/msm: Remove vram carveout support
  drm/msm: Collapse vma allocation and initialization
  drm/msm: Collapse vma close and delete
  drm/msm: Don't close VMAs on purge
  drm/msm: drm_gpuvm conversion
  drm/msm: Convert vm locking
  drm/msm: Use drm_gpuvm types more
  drm/msm: Split out helper to get iommu prot flags
  drm/msm: Add mmu support for non-zero offset
  drm/msm: Add PRR support
  drm/msm: Rename msm_gem_vma_purge() -> _unmap()
  drm/msm: Lazily create context VM
  drm/msm: Add opt-in for VM_BIND
  drm/msm: Mark VM as unusable on GPU hangs
  drm/msm: Add _NO_SHARE flag
  drm/msm: Crashdump prep for sparse mappings
  drm/msm: rd dumping prep for sparse mappings
  drm/msm: Crashdec support for sparse
  drm/msm: rd dumping support for sparse
  drm/msm: Extract out syncobj helpers
  drm/msm: Use DMA_RESV_USAGE_BOOKKEEP/KERNEL
  drm/msm: Add VM_BIND submitqueue
  drm/msm: Support IO_PGTABLE_QUIRK_NO_WARN_ON
  drm/msm: Support pgtable preallocation
  drm/msm: Split out map/unmap ops
  drm/msm: Add VM_BIND ioctl
  drm/msm: Bump UAPI version

 drivers/gpu/drm/drm_gpuvm.c                   |   15 +-
 drivers/gpu/drm/msm/Kconfig                   |    1 +
 drivers/gpu/drm/msm/Makefile                  |    1 +
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c         |   25 +-
 drivers/gpu/drm/msm/adreno/a2xx_gpummu.c      |    5 +-
 drivers/gpu/drm/msm/adreno/a3xx_gpu.c         |   17 +-
 drivers/gpu/drm/msm/adreno/a4xx_gpu.c         |   17 +-
 drivers/gpu/drm/msm/adreno/a5xx_debugfs.c     |    4 +-
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c         |   22 +-
 drivers/gpu/drm/msm/adreno/a5xx_power.c       |    2 +-
 drivers/gpu/drm/msm/adreno/a5xx_preempt.c     |   10 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c         |   32 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.h         |    2 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c         |   49 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c   |    6 +-
 drivers/gpu/drm/msm/adreno/a6xx_preempt.c     |   10 +-
 drivers/gpu/drm/msm/adreno/adreno_device.c    |    4 -
 drivers/gpu/drm/msm/adreno/adreno_gpu.c       |   88 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.h       |   23 +-
 .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c   |   14 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c   |   18 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h   |    2 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c       |   18 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c     |   14 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h     |    4 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c     |    6 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c      |   28 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c    |   12 +-
 drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c     |    4 +-
 drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c      |   19 +-
 drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c    |   12 +-
 drivers/gpu/drm/msm/dsi/dsi_host.c            |   14 +-
 drivers/gpu/drm/msm/msm_drv.c                 |  183 +--
 drivers/gpu/drm/msm/msm_drv.h                 |   35 +-
 drivers/gpu/drm/msm/msm_fb.c                  |   18 +-
 drivers/gpu/drm/msm/msm_fbdev.c               |    2 +-
 drivers/gpu/drm/msm/msm_gem.c                 |  489 +++----
 drivers/gpu/drm/msm/msm_gem.h                 |  217 ++-
 drivers/gpu/drm/msm/msm_gem_prime.c           |   15 +
 drivers/gpu/drm/msm/msm_gem_shrinker.c        |    4 +-
 drivers/gpu/drm/msm/msm_gem_submit.c          |  295 ++--
 drivers/gpu/drm/msm/msm_gem_vma.c             | 1288 +++++++++++++++--
 drivers/gpu/drm/msm/msm_gpu.c                 |  171 ++-
 drivers/gpu/drm/msm/msm_gpu.h                 |  132 +-
 drivers/gpu/drm/msm/msm_iommu.c               |  298 +++-
 drivers/gpu/drm/msm/msm_kms.c                 |   18 +-
 drivers/gpu/drm/msm/msm_kms.h                 |    2 +-
 drivers/gpu/drm/msm/msm_mmu.h                 |   38 +-
 drivers/gpu/drm/msm/msm_rd.c                  |   62 +-
 drivers/gpu/drm/msm/msm_ringbuffer.c          |    4 +-
 drivers/gpu/drm/msm/msm_submitqueue.c         |   96 +-
 drivers/gpu/drm/msm/msm_syncobj.c             |  172 +++
 drivers/gpu/drm/msm/msm_syncobj.h             |   37 +
 drivers/iommu/io-pgtable-arm.c                |   27 +-
 include/drm/drm_gpuvm.h                       |   12 +-
 include/linux/io-pgtable.h                    |    8 +
 include/uapi/drm/msm_drm.h                    |  149 +-
 57 files changed, 3043 insertions(+), 1227 deletions(-)
 create mode 100644 drivers/gpu/drm/msm/msm_syncobj.c
 create mode 100644 drivers/gpu/drm/msm/msm_syncobj.h

Comments

Christian König May 5, 2025, 7:54 a.m. UTC | #1
On 5/2/25 18:56, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
> 
> Buffers that are not shared between contexts can share a single resv
> object.  This way drm_gpuvm will not track them as external objects, and
> submit-time validating overhead will be O(1) for all N non-shared BOs,
> instead of O(n).
> 
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
>  drivers/gpu/drm/msm/msm_drv.h       |  1 +
>  drivers/gpu/drm/msm/msm_gem.c       | 23 +++++++++++++++++++++++
>  drivers/gpu/drm/msm/msm_gem_prime.c | 15 +++++++++++++++
>  include/uapi/drm/msm_drm.h          | 14 ++++++++++++++
>  4 files changed, 53 insertions(+)
> 
> diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
> index b77fd2c531c3..b0add236cbb3 100644
> --- a/drivers/gpu/drm/msm/msm_drv.h
> +++ b/drivers/gpu/drm/msm/msm_drv.h
> @@ -246,6 +246,7 @@ int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map);
>  void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
>  struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
>  		struct dma_buf_attachment *attach, struct sg_table *sg);
> +struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags);
>  int msm_gem_prime_pin(struct drm_gem_object *obj);
>  void msm_gem_prime_unpin(struct drm_gem_object *obj);
>  
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index 3708d4579203..d0f44c981351 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -532,6 +532,9 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
>  
>  	msm_gem_assert_locked(obj);
>  
> +	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
> +		return -EINVAL;
> +
>  	vma = get_vma_locked(obj, vm, range_start, range_end);
>  	if (IS_ERR(vma))
>  		return PTR_ERR(vma);
> @@ -1060,6 +1063,16 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
>  		put_pages(obj);
>  	}
>  
> +	if (msm_obj->flags & MSM_BO_NO_SHARE) {
> +		struct drm_gem_object *r_obj =
> +			container_of(obj->resv, struct drm_gem_object, _resv);
> +
> +		BUG_ON(obj->resv == &obj->_resv);
> +
> +		/* Drop reference we hold to shared resv obj: */
> +		drm_gem_object_put(r_obj);
> +	}
> +
>  	drm_gem_object_release(obj);
>  
>  	kfree(msm_obj->metadata);
> @@ -1092,6 +1105,15 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
>  	if (name)
>  		msm_gem_object_set_name(obj, "%s", name);
>  
> +	if (flags & MSM_BO_NO_SHARE) {
> +		struct msm_context *ctx = file->driver_priv;
> +		struct drm_gem_object *r_obj = drm_gpuvm_resv_obj(ctx->vm);
> +
> +		drm_gem_object_get(r_obj);
> +
> +		obj->resv = r_obj->resv;
> +	}
> +
>  	ret = drm_gem_handle_create(file, obj, handle);
>  
>  	/* drop reference from allocate - handle holds it now */
> @@ -1124,6 +1146,7 @@ static const struct drm_gem_object_funcs msm_gem_object_funcs = {
>  	.free = msm_gem_free_object,
>  	.open = msm_gem_open,
>  	.close = msm_gem_close,
> +	.export = msm_gem_prime_export,
>  	.pin = msm_gem_prime_pin,
>  	.unpin = msm_gem_prime_unpin,
>  	.get_sg_table = msm_gem_prime_get_sg_table,
> diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
> index ee267490c935..1a6d8099196a 100644
> --- a/drivers/gpu/drm/msm/msm_gem_prime.c
> +++ b/drivers/gpu/drm/msm/msm_gem_prime.c
> @@ -16,6 +16,9 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
>  	struct msm_gem_object *msm_obj = to_msm_bo(obj);
>  	int npages = obj->size >> PAGE_SHIFT;
>  
> +	if (msm_obj->flags & MSM_BO_NO_SHARE)
> +		return ERR_PTR(-EINVAL);
> +
>  	if (WARN_ON(!msm_obj->pages))  /* should have already pinned! */
>  		return ERR_PTR(-ENOMEM);
>  
> @@ -45,6 +48,15 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
>  	return msm_gem_import(dev, attach->dmabuf, sg);
>  }
>  
> +
> +struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags)
> +{
> +	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
> +		return ERR_PTR(-EPERM);
> +
> +	return drm_gem_prime_export(obj, flags);
> +}
> +
>  int msm_gem_prime_pin(struct drm_gem_object *obj)
>  {
>  	struct page **pages;
> @@ -53,6 +65,9 @@ int msm_gem_prime_pin(struct drm_gem_object *obj)
>  	if (obj->import_attach)
>  		return 0;
>  
> +	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
> +		return -EINVAL;
> +
>  	pages = msm_gem_pin_pages_locked(obj);
>  	if (IS_ERR(pages))
>  		ret = PTR_ERR(pages);
> diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
> index b974f5a24dbc..1bccc347945c 100644
> --- a/include/uapi/drm/msm_drm.h
> +++ b/include/uapi/drm/msm_drm.h
> @@ -140,6 +140,19 @@ struct drm_msm_param {
>  
>  #define MSM_BO_SCANOUT       0x00000001     /* scanout capable */
>  #define MSM_BO_GPU_READONLY  0x00000002
> +/* Private buffers do not need to be explicitly listed in the SUBMIT
> + * ioctl, unless referenced by a drm_msm_gem_submit_cmd.  Private
> + * buffers may NOT be imported/exported or used for scanout (or any
> + * other situation where buffers can be indefinitely pinned, but
> + * cases other than scanout are all kernel owned BOs which are not
> + * visible to userspace).

Why is pinning for scanout a problem with those?

Maybe I missed something but for other drivers that doesn't seem to be a problem.

Regards,
Christian.


> + *
> + * In exchange for those constraints, all private BOs associated with
> + * a single context (drm_file) share a single dma_resv, and if there
> + * has been no eviction since the last submit, there is no per-BO
> + * bookkeeping to do, significantly cutting the SUBMIT overhead.
> + */
> +#define MSM_BO_NO_SHARE      0x00000004
>  #define MSM_BO_CACHE_MASK    0x000f0000
>  /* cache modes */
>  #define MSM_BO_CACHED        0x00010000
> @@ -149,6 +162,7 @@ struct drm_msm_param {
>  
>  #define MSM_BO_FLAGS         (MSM_BO_SCANOUT | \
>                                MSM_BO_GPU_READONLY | \
> +                              MSM_BO_NO_SHARE | \
>                                MSM_BO_CACHE_MASK)
>  
>  struct drm_msm_gem_new {
Rob Clark May 5, 2025, 2:15 p.m. UTC | #2
On Mon, May 5, 2025 at 12:54 AM Christian König
<christian.koenig@amd.com> wrote:
>
> On 5/2/25 18:56, Rob Clark wrote:
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Buffers that are not shared between contexts can share a single resv
> > object.  This way drm_gpuvm will not track them as external objects, and
> > submit-time validating overhead will be O(1) for all N non-shared BOs,
> > instead of O(n).
> >
> > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > ---
> > [snip]
> >
> >  #define MSM_BO_SCANOUT       0x00000001     /* scanout capable */
> >  #define MSM_BO_GPU_READONLY  0x00000002
> > +/* Private buffers do not need to be explicitly listed in the SUBMIT
> > + * ioctl, unless referenced by a drm_msm_gem_submit_cmd.  Private
> > + * buffers may NOT be imported/exported or used for scanout (or any
> > + * other situation where buffers can be indefinitely pinned, but
> > + * cases other than scanout are all kernel owned BOs which are not
> > + * visible to userspace).
>
> Why is pinning for scanout a problem with those?
>
> Maybe I missed something but for other drivers that doesn't seem to be a problem.

I guess _technically_ it could be ok because we track pin-count
separately from dma_resv.  But the motivation for that statement was
simply that _NO_SHARE buffers share a resv obj with the VM, so they
should not be used in a different VM (in this case, the display, which
has it's own VM).

BR,
-R

Christian König May 5, 2025, 3:17 p.m. UTC | #3
On 5/5/25 16:15, Rob Clark wrote:
> On Mon, May 5, 2025 at 12:54 AM Christian König
> <christian.koenig@amd.com> wrote:
>>
>> On 5/2/25 18:56, Rob Clark wrote:
>>> From: Rob Clark <robdclark@chromium.org>
>>>
>>> Buffers that are not shared between contexts can share a single resv
>>> object.  This way drm_gpuvm will not track them as external objects, and
>>> submit-time validating overhead will be O(1) for all N non-shared BOs,
>>> instead of O(n).
>>>
>>> Signed-off-by: Rob Clark <robdclark@chromium.org>
>>> ---
>>> [snip]
>>>
>>>  #define MSM_BO_SCANOUT       0x00000001     /* scanout capable */
>>>  #define MSM_BO_GPU_READONLY  0x00000002
>>> +/* Private buffers do not need to be explicitly listed in the SUBMIT
>>> + * ioctl, unless referenced by a drm_msm_gem_submit_cmd.  Private
>>> + * buffers may NOT be imported/exported or used for scanout (or any
>>> + * other situation where buffers can be indefinitely pinned, but
>>> + * cases other than scanout are all kernel owned BOs which are not
>>> + * visible to userspace).
>>
>> Why is pinning for scanout a problem with those?
>>
>> Maybe I missed something but for other drivers that doesn't seem to be a problem.
> 
> I guess _technically_ it could be ok because we track pin-count
> separately from dma_resv.  But the motivation for that statement was
> simply that _NO_SHARE buffers share a resv obj with the VM, so they
> should not be used in a different VM (in this case, the display, which
> has its own VM).

Ah, yes that makes perfect sense.

You should indeed avoid importing the BO into a different VM when it shares the reservation object with it. That will only cause trouble.

But at least amdgpu/radeon and I think i915 as well don't need to do that. Scanout is just separate from all VMs.

Regards,
Christian.


> 
> BR,
> -R
> 
>> Regards,
>> Christian.
>>
>>
>>> + *
>>> + * In exchange for those constraints, all private BOs associated with
>>> + * a single context (drm_file) share a single dma_resv, and if there
>>> + * has been no eviction since the last submit, there are no per-BO
>>> + * bookeeping to do, significantly cutting the SUBMIT overhead.
>>> + */
>>> +#define MSM_BO_NO_SHARE      0x00000004
>>>  #define MSM_BO_CACHE_MASK    0x000f0000
>>>  /* cache modes */
>>>  #define MSM_BO_CACHED        0x00010000
>>> @@ -149,6 +162,7 @@ struct drm_msm_param {
>>>
>>>  #define MSM_BO_FLAGS         (MSM_BO_SCANOUT | \
>>>                                MSM_BO_GPU_READONLY | \
>>> +                              MSM_BO_NO_SHARE | \
>>>                                MSM_BO_CACHE_MASK)
>>>
>>>  struct drm_msm_gem_new {
>>