From patchwork Mon Apr 28 20:54:19 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 885524
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Rob Clark, Abhinav Kumar, Dmitry Baryshkov,
 Sean Paul, Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v3 12/33] drm/msm: Convert vm locking
Date: Mon, 28 Apr 2025 13:54:19 -0700
Message-ID: <20250428205619.227835-13-robdclark@gmail.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250428205619.227835-1-robdclark@gmail.com>
References: <20250428205619.227835-1-robdclark@gmail.com>
MIME-Version: 1.0

From: Rob Clark

Convert to using the gpuvm's r_obj for serializing access to the VM.
This way we can use the drm_exec helper for dealing with deadlock
detection and backoff.

This will let us deal with upcoming locking order conflicts with the
VM_BIND implementation (i.e. in some scenarios we need to acquire the
obj lock first, for example to iterate all the VMs an obj is bound in,
and in other scenarios we need to acquire the VM lock first).

Signed-off-by: Rob Clark
---
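For reviewers new to drm_exec, the core pattern being adopted looks like
the sketch below.  This is illustrative only, not part of the diff: it
restates the msm_gem_lock_vm_and_obj() helper added to msm_gem.h in this
patch, the function name is made up, and it assumes <drm/drm_exec.h> and
<drm/drm_gpuvm.h>:

static int lock_vm_and_obj_sketch(struct drm_exec *exec,
				  struct drm_gem_object *obj,
				  struct drm_gpuvm *gpuvm)
{
	int ret = 0;

	/* Expect to lock two objects: the VM's resv obj and the GEM obj */
	drm_exec_init(exec, 0, 2);
	drm_exec_until_all_locked (exec) {
		/* The VM lock (the resv of the gpuvm's r_obj) is taken first... */
		ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(gpuvm));
		/* ...then the obj's lock, unless it shares the VM's resv: */
		if (!ret && (obj->resv != drm_gpuvm_resv(gpuvm)))
			ret = drm_exec_lock_obj(exec, obj);
		/* On contention, all held locks are dropped and the loop restarts */
		drm_exec_retry_on_contention(exec);
		if (ret)
			break;
	}

	/* The caller eventually drops both locks with drm_exec_fini() */
	return ret;
}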
 drivers/gpu/drm/msm/msm_gem.c          | 35 ++++++++++++++++++------
 drivers/gpu/drm/msm/msm_gem.h          | 37 +++++++++++++++++++++-----
 drivers/gpu/drm/msm/msm_gem_shrinker.c |  2 +-
 drivers/gpu/drm/msm/msm_gem_submit.c   |  9 ++++++-
 drivers/gpu/drm/msm/msm_gem_vma.c      | 23 +++++-----------
 5 files changed, 74 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 523e6dd3ad06..f767452f168d 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -52,6 +52,7 @@ static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bo
 static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 {
 	struct msm_context *ctx = file->driver_priv;
+	struct drm_exec exec;
 
 	update_ctx_mem(file, -obj->size);
 
@@ -70,9 +71,9 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 	dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false,
 			      msecs_to_jiffies(1000));
 
-	msm_gem_lock(obj);
+	msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm);
 	put_iova_spaces(obj, &ctx->vm->base, true);
-	msm_gem_unlock(obj);
+	drm_exec_fini(&exec); /* drop locks */
 }
 
 /*
@@ -538,11 +539,12 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
 		struct msm_gem_vm *vm, uint64_t *iova,
 		u64 range_start, u64 range_end)
 {
+	struct drm_exec exec;
 	int ret;
 
-	msm_gem_lock(obj);
+	msm_gem_lock_vm_and_obj(&exec, obj, vm);
 	ret = get_and_pin_iova_range_locked(obj, vm, iova, range_start, range_end);
-	msm_gem_unlock(obj);
+	drm_exec_fini(&exec); /* drop locks */
 
 	return ret;
 }
@@ -562,16 +564,17 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
 		struct msm_gem_vm *vm, uint64_t *iova)
 {
 	struct msm_gem_vma *vma;
+	struct drm_exec exec;
 	int ret = 0;
 
-	msm_gem_lock(obj);
+	msm_gem_lock_vm_and_obj(&exec, obj, vm);
 	vma = get_vma_locked(obj, vm, 0, U64_MAX);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
 	} else {
 		*iova = vma->base.va.addr;
 	}
-	msm_gem_unlock(obj);
+	drm_exec_fini(&exec); /* drop locks */
 
 	return ret;
 }
@@ -600,9 +603,10 @@ static int clear_iova(struct drm_gem_object *obj,
 int msm_gem_set_iova(struct drm_gem_object *obj,
 		struct msm_gem_vm *vm, uint64_t iova)
 {
+	struct drm_exec exec;
 	int ret = 0;
 
-	msm_gem_lock(obj);
+	msm_gem_lock_vm_and_obj(&exec, obj, vm);
 	if (!iova) {
 		ret = clear_iova(obj, vm);
 	} else {
@@ -615,7 +619,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
 			ret = -EBUSY;
 		}
 	}
-	msm_gem_unlock(obj);
+	drm_exec_fini(&exec); /* drop locks */
 
 	return ret;
 }
@@ -1007,12 +1011,27 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct drm_device *dev = obj->dev;
 	struct msm_drm_private *priv = dev->dev_private;
+	struct drm_exec exec;
 
 	mutex_lock(&priv->obj_lock);
 	list_del(&msm_obj->node);
 	mutex_unlock(&priv->obj_lock);
 
+	/*
+	 * We need to lock any VMs the object is still attached to, but not
+	 * the object itself (see explanation in msm_gem_assert_locked()),
+	 * so just open-code this special case:
+	 */
+	drm_exec_init(&exec, 0, 0);
+	drm_exec_until_all_locked (&exec) {
+		struct drm_gpuvm_bo *vm_bo;
+		drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+			drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(vm_bo->vm));
+			drm_exec_retry_on_contention(&exec);
+		}
+	}
 	put_iova_spaces(obj, NULL, true);
+	drm_exec_fini(&exec); /* drop locks */
 
 	if (obj->import_attach) {
 		GEM_WARN_ON(msm_obj->vaddr);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index f7f7e7910754..36a846e9b943 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -62,12 +62,6 @@ struct msm_gem_vm {
 	 */
 	struct drm_mm mm;
 
-	/** @mm_lock: protects @mm node allocation/removal */
-	struct spinlock mm_lock;
-
-	/** @vm_lock: protects gpuvm insert/remove/traverse */
-	struct mutex vm_lock;
-
 	/** @mmu: The mmu object which manages the pgtables */
 	struct msm_mmu *mmu;
 
@@ -246,6 +240,37 @@ msm_gem_unlock(struct drm_gem_object *obj)
 	dma_resv_unlock(obj->resv);
 }
 
+/**
+ * msm_gem_lock_vm_and_obj() - Helper to lock an obj + VM
+ * @exec: the exec context helper which will be initialized
+ * @obj: the GEM object to lock
+ * @vm: the VM to lock
+ *
+ * Operations which modify a VM frequently need to lock both the VM and
+ * the object being mapped/unmapped/etc.  This helper uses drm_exec to
+ * acquire both locks, dealing with potential deadlock/backoff scenarios
+ * which arise when multiple locks are involved.
+ */
+static inline int
+msm_gem_lock_vm_and_obj(struct drm_exec *exec,
+			struct drm_gem_object *obj,
+			struct msm_gem_vm *vm)
+{
+	int ret = 0;
+
+	drm_exec_init(exec, 0, 2);
+	drm_exec_until_all_locked (exec) {
+		ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(&vm->base));
+		if (!ret && (obj->resv != drm_gpuvm_resv(&vm->base)))
+			ret = drm_exec_lock_obj(exec, obj);
+		drm_exec_retry_on_contention(exec);
+		if (GEM_WARN_ON(ret))
+			break;
+	}
+
+	return ret;
+}
+
 static inline void
 msm_gem_assert_locked(struct drm_gem_object *obj)
 {
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 07ca4ddfe4e3..4cd75001aca8 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -123,7 +123,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 		stages[i].freed =
 			drm_gem_lru_scan(stages[i].lru, nr,
 					&stages[i].remaining,
-					 stages[i].shrink);
+					stages[i].shrink);
 		nr -= stages[i].freed;
 		freed += stages[i].freed;
 		remaining += stages[i].remaining;
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index e8a670566147..71fe43825f2c 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -247,11 +247,18 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
 /* This is where we make sure all the bo's are reserved and pin'd: */
 static int submit_lock_objects(struct msm_gem_submit *submit)
 {
+	unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT;
 	int ret;
 
-	drm_exec_init(&submit->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, submit->nr_bos);
+// TODO need to add vm_bind path which locks vm resv + external objs
+	drm_exec_init(&submit->exec, flags, submit->nr_bos);
 
 	drm_exec_until_all_locked (&submit->exec) {
+		ret = drm_exec_lock_obj(&submit->exec,
+					drm_gpuvm_resv_obj(&submit->vm->base));
+		drm_exec_retry_on_contention(&submit->exec);
+		if (ret)
+			goto error;
 		for (unsigned i = 0; i < submit->nr_bos; i++) {
 			struct drm_gem_object *obj = submit->bos[i].obj;
 			ret = drm_exec_prepare_obj(&submit->exec, obj, 1);
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index a8c98490edaa..223fb50aaf26 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -92,15 +92,13 @@ void msm_gem_vma_close(struct msm_gem_vma *vma)
 
 	GEM_WARN_ON(vma->mapped);
 
-	spin_lock(&vm->mm_lock);
+	drm_gpuvm_resv_assert_held(&vm->base);
+
 	if (vma->base.va.addr)
 		drm_mm_remove_node(&vma->node);
-	spin_unlock(&vm->mm_lock);
 
-	mutex_lock(&vm->vm_lock);
 	drm_gpuva_remove(&vma->base);
 	drm_gpuva_unlink(&vma->base);
-	mutex_unlock(&vm->vm_lock);
 
 	kfree(vma);
 }
 
@@ -114,16 +112,16 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 	struct msm_gem_vma *vma;
 	int ret;
 
+	drm_gpuvm_resv_assert_held(&vm->base);
+
 	vma = kzalloc(sizeof(*vma), GFP_KERNEL);
 	if (!vma)
 		return ERR_PTR(-ENOMEM);
 
 	if (vm->managed) {
-		spin_lock(&vm->mm_lock);
 		ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, obj->size,
 						PAGE_SIZE, 0, range_start, range_end, 0);
-		spin_unlock(&vm->mm_lock);
 
 		if (ret)
 			goto err_free_vma;
@@ -137,9 +135,7 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
 	vma->mapped = false;
 
-	mutex_lock(&vm->vm_lock);
 	ret = drm_gpuva_insert(&vm->base, &vma->base);
-	mutex_unlock(&vm->vm_lock);
 	if (ret)
 		goto err_free_range;
@@ -149,17 +145,13 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 		goto err_va_remove;
 	}
 
-	mutex_lock(&vm->vm_lock);
 	drm_gpuva_link(&vma->base, vm_bo);
-	mutex_unlock(&vm->vm_lock);
 
 	GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo));
 
 	return vma;
 
 err_va_remove:
-	mutex_lock(&vm->vm_lock);
 	drm_gpuva_remove(&vma->base);
-	mutex_unlock(&vm->vm_lock);
 err_free_range:
 	if (vm->managed)
 		drm_mm_remove_node(&vma->node);
@@ -190,7 +182,9 @@ struct msm_gem_vm *
 msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 		u64 va_start, u64 va_size, bool managed)
 {
-	enum drm_gpuvm_flags flags = managed ? DRM_GPUVM_VA_WEAK_REF : 0;
+	enum drm_gpuvm_flags flags =
+		DRM_GPUVM_RESV_PROTECTED |
+		(managed ? DRM_GPUVM_VA_WEAK_REF : 0);
 	struct msm_gem_vm *vm;
 	struct drm_gem_object *dummy_gem;
 	int ret = 0;
@@ -212,9 +206,6 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 		       va_start, va_size, 0, 0, &msm_gpuvm_ops);
 	drm_gem_object_put(dummy_gem);
 
-	spin_lock_init(&vm->mm_lock);
-	mutex_init(&vm->vm_lock);
-
 	vm->mmu = mmu;
 	vm->managed = managed;
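
For completeness, a caller after this conversion takes both resvs via
the new helper, does the VM operation, and drops everything with
drm_exec_fini().  A minimal usage sketch, not from this series:
example_vm_op() and some_vm_op_locked() are hypothetical stand-ins for
the converted paths in msm_gem.c above:

static int example_vm_op(struct drm_gem_object *obj, struct msm_gem_vm *vm)
{
	struct drm_exec exec;
	int ret;

	/* Locks the VM resv, then the obj resv if distinct, with backoff: */
	ret = msm_gem_lock_vm_and_obj(&exec, obj, vm);
	if (!ret)
		ret = some_vm_op_locked(obj, vm); /* hypothetical locked op */

	/* Safe even if locking failed; drops whatever is still held: */
	drm_exec_fini(&exec); /* drop locks */

	return ret;
}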