From patchwork Mon May 19 17:51:25 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891140
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Danilo Krummrich, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 02/40] drm/gpuvm: Allow VAs to hold soft reference to BOs
Date: Mon, 19 May 2025 10:51:25 -0700
Message-ID: <20250519175348.11924-3-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>

From: Rob Clark

Eases migration for drivers where VAs don't hold hard references to their
associated BO, avoiding reference loops.  In particular, msm uses soft
references to optimistically keep mappings around until the BO is
destroyed, which obviously won't work if the VA (the mapping) holds a
reference to the BO.  By making this a per-VM flag, we can use normal
hard references for mappings in a "VM_BIND"-managed VM, but soft
references in other cases, such as kernel-internal VMs (for display
scanout, etc).
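As a rough userspace sketch of the idea (not the kernel implementation; the `toy_*` names and the plain-integer refcount are simplifications of kref/drm_gem_object_get/put), the per-VM flag works like this:

```c
#include <assert.h>

/* Toy model of DRM_GPUVM_VA_WEAK_REF: a VM flagged weak never takes a
 * hard reference on the BO, so the mapping cannot pin the object. */

#define GPUVM_VA_WEAK_REF (1u << 1)   /* mirrors DRM_GPUVM_VA_WEAK_REF */

struct toy_bo { int refcount; };
struct toy_vm { unsigned int flags; };
struct toy_vm_bo { struct toy_vm *vm; struct toy_bo *obj; };

/* mirrors drm_gpuvm_bo_create(): only take a hard reference when the
 * VM is not in weak-reference mode */
void toy_vm_bo_init(struct toy_vm_bo *vm_bo, struct toy_vm *vm,
		    struct toy_bo *obj)
{
	vm_bo->vm = vm;
	vm_bo->obj = obj;
	if (!(vm->flags & GPUVM_VA_WEAK_REF))
		obj->refcount++;
}

/* mirrors drm_gpuvm_bo_destroy(): drop the reference only if one was taken */
void toy_vm_bo_fini(struct toy_vm_bo *vm_bo)
{
	if (!(vm_bo->vm->flags & GPUVM_VA_WEAK_REF))
		vm_bo->obj->refcount--;
}
```

A VM_BIND-managed VM would leave flags at 0 (hard references), while a kernel-internal scanout VM could set the weak flag so the mapping's lifetime no longer keeps the BO alive, breaking the BO→VA→BO loop described above.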
Cc: Danilo Krummrich
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gpuvm.c | 37 ++++++++++++++++++++++++++++++++-----
 include/drm/drm_gpuvm.h     | 19 +++++++++++++++++--
 2 files changed, 49 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 1e89a98caad4..892b62130ff8 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1125,6 +1125,8 @@ __drm_gpuvm_prepare_objects(struct drm_gpuvm *gpuvm,
 	LIST_HEAD(extobjs);
 	int ret = 0;
 
+	WARN_ON(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF);
+
 	for_each_vm_bo_in_list(gpuvm, extobj, &extobjs, vm_bo) {
 		ret = exec_prepare_obj(exec, vm_bo->obj, num_fences);
 		if (ret)
@@ -1145,6 +1147,8 @@ drm_gpuvm_prepare_objects_locked(struct drm_gpuvm *gpuvm,
 	struct drm_gpuvm_bo *vm_bo;
 	int ret = 0;
 
+	WARN_ON(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF);
+
 	drm_gpuvm_resv_assert_held(gpuvm);
 	list_for_each_entry(vm_bo, &gpuvm->extobj.list, list.entry.extobj) {
 		ret = exec_prepare_obj(exec, vm_bo->obj, num_fences);
@@ -1386,6 +1390,7 @@ drm_gpuvm_validate_locked(struct drm_gpuvm *gpuvm, struct drm_exec *exec)
 	struct drm_gpuvm_bo *vm_bo, *next;
 	int ret = 0;
 
+	WARN_ON(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF);
 	drm_gpuvm_resv_assert_held(gpuvm);
 
 	list_for_each_entry_safe(vm_bo, next, &gpuvm->evict.list,
@@ -1482,7 +1487,9 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
 
 	vm_bo->vm = drm_gpuvm_get(gpuvm);
 	vm_bo->obj = obj;
-	drm_gem_object_get(obj);
+
+	if (!(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF))
+		drm_gem_object_get(obj);
 
 	kref_init(&vm_bo->kref);
 	INIT_LIST_HEAD(&vm_bo->list.gpuva);
@@ -1504,16 +1511,22 @@ drm_gpuvm_bo_destroy(struct kref *kref)
 	const struct drm_gpuvm_ops *ops = gpuvm->ops;
 	struct drm_gem_object *obj = vm_bo->obj;
 	bool lock = !drm_gpuvm_resv_protected(gpuvm);
+	bool unref = !(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF);
 
 	if (!lock)
 		drm_gpuvm_resv_assert_held(gpuvm);
 
+	if (kref_read(&obj->refcount) > 0) {
+		drm_gem_gpuva_assert_lock_held(obj);
+	} else {
+		WARN_ON(!(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF));
+		WARN_ON(!list_empty(&vm_bo->list.entry.evict));
+		WARN_ON(!list_empty(&vm_bo->list.entry.extobj));
+	}
+
 	drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
 	drm_gpuvm_bo_list_del(vm_bo, evict, lock);
 
-	if (kref_read(&obj->refcount) > 0)
-		drm_gem_gpuva_assert_lock_held(obj);
-
 	list_del(&vm_bo->list.entry.gem);
 
 	if (ops && ops->vm_bo_free)
@@ -1522,7 +1535,8 @@ drm_gpuvm_bo_destroy(struct kref *kref)
 		kfree(vm_bo);
 
 	drm_gpuvm_put(gpuvm);
-	drm_gem_object_put(obj);
+	if (unref)
+		drm_gem_object_put(obj);
 }
 
 /**
@@ -1678,6 +1692,12 @@ drm_gpuvm_bo_extobj_add(struct drm_gpuvm_bo *vm_bo)
 	if (!lock)
 		drm_gpuvm_resv_assert_held(gpuvm);
 
+	/* If the vm_bo doesn't hold a hard reference to the obj, then the
+	 * driver is responsible for object tracking.
+	 */
+	if (gpuvm->flags & DRM_GPUVM_VA_WEAK_REF)
+		return;
+
 	if (drm_gpuvm_is_extobj(gpuvm, vm_bo->obj))
 		drm_gpuvm_bo_list_add(vm_bo, extobj, lock);
 }
@@ -1699,6 +1719,13 @@ drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, bool evict)
 	bool lock = !drm_gpuvm_resv_protected(gpuvm);
 
 	dma_resv_assert_held(obj->resv);
+
+	/* If the vm_bo doesn't hold a hard reference to the obj, then the
+	 * driver must track evictions on its own.
+	 */
+	if (gpuvm->flags & DRM_GPUVM_VA_WEAK_REF)
+		return;
+
 	vm_bo->evicted = evict;
 
 	/* Can't add external objects to the evicted list directly if not using
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 2a9629377633..652e0fb66413 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -205,10 +205,25 @@ enum drm_gpuvm_flags {
 	 */
 	DRM_GPUVM_RESV_PROTECTED = BIT(0),
 
+	/**
+	 * @DRM_GPUVM_VA_WEAK_REF:
+	 *
+	 * Flag indicating that the &drm_gpuva (or more correctly, the
+	 * &drm_gpuvm_bo) only holds a weak reference to the &drm_gem_object.
+	 * This mode is intended to ease migration to drm_gpuvm for drivers
+	 * where the GEM object holds a reference to the VA, rather than the
+	 * other way around.
+	 *
+	 * In this mode, drm_gpuvm does not track evicted or external objects.
+	 * It is intended for legacy mode, where the needed objects are attached
+	 * to the command submission ioctl, therefore this tracking is unneeded.
+	 */
+	DRM_GPUVM_VA_WEAK_REF = BIT(1),
+
 	/**
 	 * @DRM_GPUVM_USERBITS: user defined bits
 	 */
-	DRM_GPUVM_USERBITS = BIT(1),
+	DRM_GPUVM_USERBITS = BIT(2),
 };
 
 /**
@@ -651,7 +666,7 @@ struct drm_gpuvm_bo {
 
 	/**
 	 * @obj: The &drm_gem_object being mapped in @vm. This is a reference
-	 * counted pointer.
+	 * counted pointer, unless the &DRM_GPUVM_VA_WEAK_REF flag is set.
 	 */
 	struct drm_gem_object *obj;

From patchwork Mon May 19 17:51:27 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891139
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Philipp Stanner, Danilo Krummrich, Matthew Brost, Christian König, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 04/40] drm/sched: Add enqueue credit limit
Date: Mon, 19 May 2025 10:51:27 -0700
Message-ID: <20250519175348.11924-5-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>
From: Rob Clark

Similar to the existing credit limit mechanism, but applying to jobs
enqueued to the scheduler but not yet run.  The use case is to put an
upper bound on preallocated, and potentially unneeded, pgtable pages.
When this limit is exceeded, pushing new jobs will block until the
count drops below the limit.

Cc: Philipp Stanner
Cc: Danilo Krummrich
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/scheduler/sched_entity.c | 19 +++++++++++++++++--
 drivers/gpu/drm/scheduler/sched_main.c   |  3 +++
 include/drm/gpu_scheduler.h              | 24 +++++++++++++++++++++++-
 3 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index bd39db7bb240..8e6b12563348 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -579,12 +579,25 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
  * fence sequence number this function should be called with drm_sched_job_arm()
  * under common lock for the struct drm_sched_entity that was set up for
  * @sched_job in drm_sched_job_init().
+ *
+ * If enqueue_credit_limit is used, this can return -ERESTARTSYS if the system
+ * call is interrupted.
  */
-void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
+int drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 {
 	struct drm_sched_entity *entity = sched_job->entity;
+	struct drm_gpu_scheduler *sched = sched_job->sched;
 	bool first;
 	ktime_t submit_ts;
+	int ret;
+
+	ret = wait_event_interruptible(
+			sched->job_scheduled,
+			atomic_read(&sched->enqueue_credit_count) <=
+				sched->enqueue_credit_limit);
+	if (ret)
+		return ret;
+	atomic_add(sched_job->enqueue_credits, &sched->enqueue_credit_count);
 
 	trace_drm_sched_job(sched_job, entity);
 	atomic_inc(entity->rq->sched->score);
@@ -609,7 +622,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 		spin_unlock(&entity->lock);
 
 		DRM_ERROR("Trying to push to a killed entity\n");
-		return;
+		return -EINVAL;
 	}
 
 	rq = entity->rq;
@@ -626,5 +639,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 
 		drm_sched_wakeup(sched);
 	}
+
+	return 0;
 }
 EXPORT_SYMBOL(drm_sched_entity_push_job);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index cda1216adfa4..5f812253656a 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1221,6 +1221,7 @@ static void drm_sched_run_job_work(struct work_struct *w)
 
 	trace_drm_run_job(sched_job, entity);
 	fence = sched->ops->run_job(sched_job);
+	atomic_sub(sched_job->enqueue_credits, &sched->enqueue_credit_count);
 	complete_all(&entity->entity_idle);
 	drm_sched_fence_scheduled(s_fence, fence);
 
@@ -1257,6 +1258,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
 
 	sched->ops = args->ops;
 	sched->credit_limit = args->credit_limit;
+	sched->enqueue_credit_limit = args->enqueue_credit_limit;
 	sched->name = args->name;
 	sched->timeout = args->timeout;
 	sched->hang_limit = args->hang_limit;
@@ -1312,6 +1314,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
 	INIT_LIST_HEAD(&sched->pending_list);
 	spin_lock_init(&sched->job_list_lock);
 	atomic_set(&sched->credit_count, 0);
+	atomic_set(&sched->enqueue_credit_count, 0);
 	INIT_DELAYED_WORK(&sched->work_tdr, drm_sched_job_timedout);
 	INIT_WORK(&sched->work_run_job, drm_sched_run_job_work);
 	INIT_WORK(&sched->work_free_job, drm_sched_free_job_work);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index da64232c989d..8ec5000f81e1 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -366,6 +366,19 @@ struct drm_sched_job {
 	enum drm_sched_priority		s_priority;
 	u32				credits;
 
+	/**
+	 * @enqueue_credits: the number of enqueue credits this job
+	 * contributes to the drm_gpu_scheduler.enqueue_credit_count.
+	 *
+	 * The (optional) @enqueue_credits should be set before calling
+	 * drm_sched_entity_push_job().  When the sum of all the jobs pushed
+	 * to the entity, but not yet having their run_job() callback
+	 * called, exceeds @drm_gpu_scheduler.enqueue_credit_limit,
+	 * drm_sched_entity_push_job() will block until the count drops
+	 * back below the limit, providing a way to throttle the number
+	 * of queued, but not yet run, jobs.
+	 */
+	u32				enqueue_credits;
 	/** @last_dependency: tracks @dependencies as they signal */
 	unsigned int			last_dependency;
 	atomic_t			karma;
@@ -485,6 +498,10 @@ struct drm_sched_backend_ops {
 * @ops: backend operations provided by the driver.
 * @credit_limit: the credit limit of this scheduler
 * @credit_count: the current credit count of this scheduler
+ * @enqueue_credit_limit: the credit limit of jobs pushed to the scheduler and
+ *                        not yet run
+ * @enqueue_credit_count: the current credit count of jobs pushed to the
+ *                        scheduler but not yet run
 * @timeout: the time after which a job is removed from the scheduler.
 * @name: name of the ring for which this scheduler is being used.
 * @num_rqs: Number of run-queues. This is at most DRM_SCHED_PRIORITY_COUNT,
@@ -518,6 +535,8 @@ struct drm_gpu_scheduler {
 	const struct drm_sched_backend_ops	*ops;
 	u32				credit_limit;
 	atomic_t			credit_count;
+	u32				enqueue_credit_limit;
+	atomic_t			enqueue_credit_count;
 	long				timeout;
 	const char			*name;
 	u32				num_rqs;
@@ -550,6 +569,8 @@ struct drm_gpu_scheduler {
 * @num_rqs: Number of run-queues. This may be at most DRM_SCHED_PRIORITY_COUNT,
 *	as there's usually one run-queue per priority, but may be less.
 * @credit_limit: the number of credits this scheduler can hold from all jobs
+ * @enqueue_credit_limit: the number of credits that can be enqueued before
+ *	drm_sched_entity_push_job() blocks
 * @hang_limit: number of times to allow a job to hang before dropping it.
 *	This mechanism is DEPRECATED. Set it to 0.
 * @timeout: timeout value in jiffies for submitted jobs.
@@ -564,6 +585,7 @@ struct drm_sched_init_args {
 	struct workqueue_struct *timeout_wq;
 	u32 num_rqs;
 	u32 credit_limit;
+	u32 enqueue_credit_limit;
 	unsigned int hang_limit;
 	long timeout;
 	atomic_t *score;
@@ -600,7 +622,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
 		       struct drm_sched_entity *entity,
 		       u32 credits, void *owner);
 void drm_sched_job_arm(struct drm_sched_job *job);
-void drm_sched_entity_push_job(struct drm_sched_job *sched_job);
+int drm_sched_entity_push_job(struct drm_sched_job *sched_job);
 int drm_sched_job_add_dependency(struct drm_sched_job *job,
 				 struct dma_fence *fence);
 int drm_sched_job_add_syncobj_dependency(struct drm_sched_job *job,

From patchwork Mon May 19 17:51:29 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891138
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Dmitry Baryshkov, Sean Paul, Konrad Dybcio, Abhinav Kumar, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 06/40] drm/msm: Rename msm_file_private -> msm_context
Date: Mon, 19 May 2025 10:51:29 -0700
Message-ID: <20250519175348.11924-7-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>

From: Rob Clark

This is a more descriptive name.
Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  2 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c |  6 ++--
 drivers/gpu/drm/msm/adreno/adreno_gpu.h |  4 +--
 drivers/gpu/drm/msm/msm_drv.c           | 14 ++++-----
 drivers/gpu/drm/msm/msm_gem.c           |  2 +-
 drivers/gpu/drm/msm/msm_gem_submit.c    |  2 +-
 drivers/gpu/drm/msm/msm_gpu.c           |  4 +--
 drivers/gpu/drm/msm/msm_gpu.h           | 39 ++++++++++++-------------
 drivers/gpu/drm/msm/msm_submitqueue.c   | 27 +++++++++--------
 9 files changed, 49 insertions(+), 51 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index fd64af6d0440..620a26638535 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -111,7 +111,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 		struct msm_ringbuffer *ring, struct msm_gem_submit *submit)
 {
 	bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
-	struct msm_file_private *ctx = submit->queue->ctx;
+	struct msm_context *ctx = submit->queue->ctx;
 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
 	phys_addr_t ttbr;
 	u32 asid;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index d04657b77857..93fe26009511 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -356,7 +356,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	return 0;
 }
 
-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -444,7 +444,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 	}
 }
 
-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t value, uint32_t len)
 {
 	struct drm_device *drm = gpu->dev;
@@ -490,7 +490,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 	case MSM_PARAM_SYSPROF:
 		if (!capable(CAP_SYS_ADMIN))
 			return UERR(EPERM, drm, "invalid permissions");
-		return msm_file_private_set_sysprof(ctx, gpu, value);
+		return msm_context_set_sysprof(ctx, gpu, value);
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 2366a57b280f..fed9516da365 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -603,9 +603,9 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu)
 /* Put vm_start above 32b to catch issues with not setting xyz_BASE_HI */
 #define ADRENO_VM_START 0x100000000ULL
 u64 adreno_private_address_space_size(struct msm_gpu *gpu);
-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len);
-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t value, uint32_t len);
 const struct firmware *adreno_request_fw(struct adreno_gpu *adreno_gpu,
 					 const char *fwname);
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index c3588dc9e537..29ca24548c67 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -333,7 +333,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
 {
 	static atomic_t ident = ATOMIC_INIT(0);
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx;
+	struct msm_context *ctx;
 
 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
 	if (!ctx)
@@ -363,23 +363,23 @@ static int msm_open(struct drm_device *dev, struct drm_file *file)
 	return context_init(dev, file);
 }
 
-static void context_close(struct msm_file_private *ctx)
+static void context_close(struct msm_context *ctx)
 {
 	msm_submitqueue_close(ctx);
-	msm_file_private_put(ctx);
+	msm_context_put(ctx);
 }
 
 static void msm_postclose(struct drm_device *dev, struct drm_file *file)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 
 	/*
 	 * It is not possible to set sysprof param to non-zero if gpu
 	 * is not initialized:
 	 */
 	if (priv->gpu)
-		msm_file_private_set_sysprof(ctx, priv->gpu, 0);
+		msm_context_set_sysprof(ctx, priv->gpu, 0);
 
 	context_close(ctx);
 }
@@ -511,7 +511,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 		uint64_t *iova)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 
 	if (!priv->gpu)
 		return -EINVAL;
@@ -531,7 +531,7 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 		uint64_t iova)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 
 	if (!priv->gpu)
 		return -EINVAL;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index d2f38e1df510..fdeb6cf7eeb5 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -48,7 +48,7 @@ static void update_device_mem(struct msm_drm_private *priv, ssize_t size)
 
 static void update_ctx_mem(struct drm_file *file, ssize_t size)
 {
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 	uint64_t ctx_mem = atomic64_add_return(size, &ctx->ctx_mem);
 
 	rcu_read_lock(); /* Locks file->pid! */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index d4f71bb54e84..3aabf7f1da6d 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -651,7 +651,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 {
 	struct msm_drm_private *priv = dev->dev_private;
 	struct drm_msm_gem_submit *args = data;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 	struct msm_gem_submit *submit = NULL;
 	struct msm_gpu *gpu = priv->gpu;
 	struct msm_gpu_submitqueue *queue;
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index c380d9d9f5af..d786fcfad62f 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -148,7 +148,7 @@ int msm_gpu_pm_suspend(struct msm_gpu *gpu)
 	return 0;
 }
 
-void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx,
+void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx,
 			 struct drm_printer *p)
 {
 	drm_printf(p, "drm-engine-gpu:\t%llu ns\n", ctx->elapsed_ns);
@@ -339,7 +339,7 @@ static void retire_submits(struct msm_gpu *gpu);
 
 static void get_comm_cmdline(struct msm_gem_submit *submit, char **comm, char **cmd)
 {
-	struct msm_file_private *ctx = submit->queue->ctx;
+	struct msm_context *ctx = submit->queue->ctx;
 	struct task_struct *task;
 
 	WARN_ON(!mutex_is_locked(&submit->gpu->lock));
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index e25009150579..957d6fb3469d 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -22,7 +22,7 @@ struct msm_gem_submit;
 struct msm_gpu_perfcntr;
 struct msm_gpu_state;
-struct msm_file_private;
+struct msm_context;
 
 struct msm_gpu_config {
 	const char *ioname;
@@ -44,9 +44,9 @@ struct msm_gpu_config {
 *    + z180_gpu
 */
struct msm_gpu_funcs {
-	int (*get_param)(struct msm_gpu *gpu, struct msm_file_private *ctx,
+	int (*get_param)(struct msm_gpu *gpu, struct msm_context
*ctx, uint32_t param, uint64_t *value, uint32_t *len); - int (*set_param)(struct msm_gpu *gpu, struct msm_file_private *ctx, + int (*set_param)(struct msm_gpu *gpu, struct msm_context *ctx, uint32_t param, uint64_t value, uint32_t len); int (*hw_init)(struct msm_gpu *gpu); @@ -347,7 +347,7 @@ struct msm_gpu_perfcntr { #define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH) /** - * struct msm_file_private - per-drm_file context + * struct msm_context - per-drm_file context * * @queuelock: synchronizes access to submitqueues list * @submitqueues: list of &msm_gpu_submitqueue created by userspace @@ -357,7 +357,7 @@ struct msm_gpu_perfcntr { * @ref: reference count * @seqno: unique per process seqno */ -struct msm_file_private { +struct msm_context { rwlock_t queuelock; struct list_head submitqueues; int queueid; @@ -512,7 +512,7 @@ struct msm_gpu_submitqueue { u32 ring_nr; int faults; uint32_t last_fence; - struct msm_file_private *ctx; + struct msm_context *ctx; struct list_head node; struct idr fence_idr; struct spinlock idr_lock; @@ -608,33 +608,32 @@ static inline void gpu_write64(struct msm_gpu *gpu, u32 reg, u64 val) int msm_gpu_pm_suspend(struct msm_gpu *gpu); int msm_gpu_pm_resume(struct msm_gpu *gpu); -void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx, +void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx, struct drm_printer *p); -int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx); -struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx, +int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx); +struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx, u32 id); int msm_submitqueue_create(struct drm_device *drm, - struct msm_file_private *ctx, + struct msm_context *ctx, u32 prio, u32 flags, u32 *id); -int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx, +int 
msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx, struct drm_msm_submitqueue_query *args); -int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id); -void msm_submitqueue_close(struct msm_file_private *ctx); +int msm_submitqueue_remove(struct msm_context *ctx, u32 id); +void msm_submitqueue_close(struct msm_context *ctx); void msm_submitqueue_destroy(struct kref *kref); -int msm_file_private_set_sysprof(struct msm_file_private *ctx, - struct msm_gpu *gpu, int sysprof); -void __msm_file_private_destroy(struct kref *kref); +int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof); +void __msm_context_destroy(struct kref *kref); -static inline void msm_file_private_put(struct msm_file_private *ctx) +static inline void msm_context_put(struct msm_context *ctx) { - kref_put(&ctx->ref, __msm_file_private_destroy); + kref_put(&ctx->ref, __msm_context_destroy); } -static inline struct msm_file_private *msm_file_private_get( - struct msm_file_private *ctx) +static inline struct msm_context *msm_context_get( + struct msm_context *ctx) { kref_get(&ctx->ref); return ctx; diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c index 7fed1de63b5d..1acc0fe36353 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -7,8 +7,7 @@ #include "msm_gpu.h" -int msm_file_private_set_sysprof(struct msm_file_private *ctx, - struct msm_gpu *gpu, int sysprof) +int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof) { /* * Since pm_runtime and sysprof_active are both refcounts, we @@ -46,10 +45,10 @@ int msm_file_private_set_sysprof(struct msm_file_private *ctx, return 0; } -void __msm_file_private_destroy(struct kref *kref) +void __msm_context_destroy(struct kref *kref) { - struct msm_file_private *ctx = container_of(kref, - struct msm_file_private, ref); + struct msm_context *ctx = container_of(kref, + struct msm_context, 
ref); int i; for (i = 0; i < ARRAY_SIZE(ctx->entities); i++) { @@ -73,12 +72,12 @@ void msm_submitqueue_destroy(struct kref *kref) idr_destroy(&queue->fence_idr); - msm_file_private_put(queue->ctx); + msm_context_put(queue->ctx); kfree(queue); } -struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx, +struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx, u32 id) { struct msm_gpu_submitqueue *entry; @@ -101,7 +100,7 @@ struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx, return NULL; } -void msm_submitqueue_close(struct msm_file_private *ctx) +void msm_submitqueue_close(struct msm_context *ctx) { struct msm_gpu_submitqueue *entry, *tmp; @@ -119,7 +118,7 @@ void msm_submitqueue_close(struct msm_file_private *ctx) } static struct drm_sched_entity * -get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring, +get_sched_entity(struct msm_context *ctx, struct msm_ringbuffer *ring, unsigned ring_nr, enum drm_sched_priority sched_prio) { static DEFINE_MUTEX(entity_lock); @@ -155,7 +154,7 @@ get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring, return ctx->entities[idx]; } -int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx, +int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx, u32 prio, u32 flags, u32 *id) { struct msm_drm_private *priv = drm->dev_private; @@ -200,7 +199,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx, write_lock(&ctx->queuelock); - queue->ctx = msm_file_private_get(ctx); + queue->ctx = msm_context_get(ctx); queue->id = ctx->queueid++; if (id) @@ -221,7 +220,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx, * Create the default submit-queue (id==0), used for backwards compatibility * for userspace that pre-dates the introduction of submitqueues. 
*/ -int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx) +int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx) { struct msm_drm_private *priv = drm->dev_private; int default_prio, max_priority; @@ -261,7 +260,7 @@ static int msm_submitqueue_query_faults(struct msm_gpu_submitqueue *queue, return ret ? -EFAULT : 0; } -int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx, +int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx, struct drm_msm_submitqueue_query *args) { struct msm_gpu_submitqueue *queue; @@ -282,7 +281,7 @@ int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx, return ret; } -int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id) +int msm_submitqueue_remove(struct msm_context *ctx, u32 id) { struct msm_gpu_submitqueue *entry; From patchwork Mon May 19 17:51:32 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 891137 Received: from mail-pf1-f177.google.com (mail-pf1-f177.google.com [209.85.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0EF9728A701; Mon, 19 May 2025 17:54:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.177 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747677288; cv=none; b=j2ufONuylP7e0Vsn9GFVoesjeOEgYQfJuyZu68UX8UbJDAPmEw9ZsNLr+0lnYtG1XTyYek8/ZbQUuyaKtmr3EYqnj6Ki6vnEbBFcv9gs8HHOPXvdcHEBKc2eeFONIZbp8r13tB+l/JafuA6LIdEMIHkU5ZN9k14JDlyVXa0acyk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747677288; c=relaxed/simple; bh=H3c5gOKGDJs0tTFYfyC6NctvT5hMVMGR0OoAtgWxTdY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
b=Kprowuc7iELYib7K94a5LG5TCAw2Xv+lsJ86naMxLf2d2qoqYYDs1+m21iZLDC6u5CzWk20tuZmGDbC/f8S0EFEft+jAr6zCGZMESAeCK2zizkSVsRb2Hyrx+NdcQw5QYn/vqRpNBYVD2zVbEn72Iud3np97wOk3X+/8mn0b45w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=MXlepzMn; arc=none smtp.client-ip=209.85.210.177 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="MXlepzMn" Received: by mail-pf1-f177.google.com with SMTP id d2e1a72fcca58-742c27df0daso1689263b3a.1; Mon, 19 May 2025 10:54:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1747677285; x=1748282085; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=VBeAL4P3/3FskyCROaf1mNKnMOdCBxO36TKmoza4VKQ=; b=MXlepzMnWg6k4nosWDxcpejflIqZnYkHiqGmtTt9tWdJtlrYaYNBc1Ne/E+QO5+Pf/ teDjHd1DFmj4oo9vRtuKzSbrv8SdGA4gpZttaZvN5ZdL4UrdDmV7EdRh4ORLYPWpGCZc OqhUrMgcQWp6GFE4jeYFUScIcUbFw3GOR2Cr1LxowqUEe1NIkCoxZC/oi1GfqwBhme/S AX45v5pQlBzCmRcwBM62FcA5aOcfYK9SUkYpJfblXw24tHIkuBh4lpr+50lWyA5WgtLD 5DvDAyhZxoGQdx9Bv/nh3ieWqfWVKAQzA7EHrfEUX5FbGjTGo3c/HWcq92fIdPq4VdMr uqfw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1747677285; x=1748282085; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=VBeAL4P3/3FskyCROaf1mNKnMOdCBxO36TKmoza4VKQ=; b=rNLbK4p4MK0ehgjeOkh2kAAgIn6c88OeIH2x81y4bR0xgqgOcrFK6sexVpTYOQ3w4Y 
dvaDt/g11I8RIjBXQUSST5v36MxCsgFTJyV4h6xsACnzfk4TyYI+dYscNPOt8sBHqnyo dh8SP6Q2lafNYDiNqOGkWa4F0fCFsrwuWKEX3o5xhm2cXj3sqc2jjdJDxJcTbZp4a/TV IKGnYou+Jio5c1rnaMVRudJ+9ecC/YJ4SVtUPzTuDsKOT1lrLNGLDGBdsjcvYS7C4ROQ Lc19NcO5zevpUM5bwftpnoPaWcAbol/rEWKxru+6DbJff977UCwwcgUN9ruQ6c0P2sCp XmqA== X-Forwarded-Encrypted: i=1; AJvYcCV+FMOPHCRzl/Xb+CNC8/8gbTnn7Ah0TkQV1+WjQ++hVRDGewu6DZ310qcdSLbLcqlCs1FXVyEIW0FkoWqY@vger.kernel.org, AJvYcCVYfudpcKJGnARWgO4J9k0k4vax2ZgtkkOalIePo3XG5rmyqD7XyIaqZO1oWBL411+LW6W+4rctC5CJKJXu@vger.kernel.org X-Gm-Message-State: AOJu0Yxq1lep04quRInIQeLF8j11B4/nEtd7tokuZQqJ1oegFF41zdQ6 bb5Zi9DqrU/RGyXacCzj0JlVG7qcqaW6Se9dBmQ4MYuLTYYuLWLCaVOt X-Gm-Gg: ASbGncvmx2ZKp4lFuGbYStPsdxYoy6N56tSHAJjXTAj+T0mREteE7V5OwTZK8RPkW3F GJ1KLh4xRliGDext22aMpRL7ShcmBqeqkZPljFL1lXZSOxd50fwZy5uS6WzlnPFbOtvuLsdmL0T zfOrTnxs/7+3sTEkxeFMTfSJHIntqJeJSN0UBuYtqVL0hsCIQrp71fMGjo4cfxVQiawoN0HPJiB owNn8ZoblB69qpatPUCGMmrFr6ESIdrYxdtXuxRGLzJJRYRDn2Az86YgAEFNinCJIwmYVh5THrS 7U3IdV1AjIyh7XtbORjG8JD/Ab1/Fjp/Mr4AF5M03V3RSNBefwbPNq8zfUxHY1UksVfztYtzau1 vMJx0W1o23pn+RttzYgUDDGdcZg== X-Google-Smtp-Source: AGHT+IEn98jwZRSCDfZV273eB6aHlunvfL9TH1DDeqeNEPJuf4tzZGmwuQl0aGZRP04h0W4TVUSjbA== X-Received: by 2002:a05:6a20:a114:b0:1f5:7366:2a01 with SMTP id adf61e73a8af0-216219ece2fmr22911075637.37.1747677285283; Mon, 19 May 2025 10:54:45 -0700 (PDT) Received: from localhost ([2a00:79e0:3e00:2601:3afc:446b:f0df:eadc]) by smtp.gmail.com with ESMTPSA id d2e1a72fcca58-742a9829cc7sm6755784b3a.106.2025.05.19.10.54.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 May 2025 10:54:44 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Sean Paul , Konrad Dybcio , Abhinav Kumar , Dmitry Baryshkov , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v5 09/40] drm/msm: Remove vram carveout support Date: Mon, 19 May 2025 
10:51:32 -0700 Message-ID: <20250519175348.11924-10-robdclark@gmail.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Rob Clark It is standing in the way of drm_gpuvm / VM_BIND support. Not to mention frequently broken and rarely tested. And I think only needed for a 10yr old not quite upstream SoC (msm8974). Maybe we can add support back in later, but I'm doubtful. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 8 -- drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 15 --- drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 15 --- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/adreno_device.c | 4 - drivers/gpu/drm/msm/adreno/adreno_gpu.c | 4 +- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 1 - drivers/gpu/drm/msm/msm_drv.c | 117 +----------------- drivers/gpu/drm/msm/msm_drv.h | 11 -- drivers/gpu/drm/msm/msm_gem.c | 131 ++------------------- drivers/gpu/drm/msm/msm_gem.h | 5 - drivers/gpu/drm/msm/msm_gem_submit.c | 5 - drivers/gpu/drm/msm/msm_gpu.c | 6 +- 14 files changed, 19 insertions(+), 309 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index 5eb063ed0b46..095bae92e3e8 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -551,14 +551,6 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev) else adreno_gpu->registers = a220_registers; - if (!gpu->vm) { - dev_err(dev->dev, "No memory protection without MMU\n"); - if (!allow_vram_carveout) { - ret = -ENXIO; - goto fail; - } - } - return gpu; fail: diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c index 434e6ededf83..a956cd79195e 100644 --- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c 
+++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
@@ -581,21 +581,6 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
 		goto fail;
 	}
 
-	if (!gpu->vm) {
-		/* TODO we think it is possible to configure the GPU to
-		 * restrict access to VRAM carveout.  But the required
-		 * registers are unknown.  For now just bail out and
-		 * limp along with just modesetting.  If it turns out
-		 * to not be possible to restrict access, then we must
-		 * implement a cmdstream validator.
-		 */
-		DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n");
-		if (!allow_vram_carveout) {
-			ret = -ENXIO;
-			goto fail;
-		}
-	}
-
 	icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
 	if (IS_ERR(icc_path)) {
 		ret = PTR_ERR(icc_path);
diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
index 2c75debcfd84..83f6329accba 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -695,21 +695,6 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull;
 
-	if (!gpu->vm) {
-		/* TODO we think it is possible to configure the GPU to
-		 * restrict access to VRAM carveout.  But the required
-		 * registers are unknown.  For now just bail out and
-		 * limp along with just modesetting.  If it turns out
-		 * to not be possible to restrict access, then we must
-		 * implement a cmdstream validator.
-		 */
-		DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n");
-		if (!allow_vram_carveout) {
-			ret = -ENXIO;
-			goto fail;
-		}
-	}
-
 	icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
 	if (IS_ERR(icc_path)) {
 		ret = PTR_ERR(icc_path);
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index dc31bc0afca4..04138a06724b 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -1786,8 +1786,7 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
 		return ERR_PTR(ret);
 	}
 
-	if (gpu->vm)
-		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
+	msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
 
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
 	a5xx_preempt_init(gpu);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index d05c00624f74..f4d9cdbc5602 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2547,8 +2547,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0x1fffffffff000ull;
 
-	if (gpu->vm)
-		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
+	msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
 
 	a6xx_calc_ubwc_config(adreno_gpu);
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
index f4552b8c6767..6b0390c38bff 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
@@ -16,10 +16,6 @@ bool snapshot_debugbus = false;
 MODULE_PARM_DESC(snapshot_debugbus, "Include debugbus sections in GPU devcoredump (if not fused off)");
 module_param_named(snapshot_debugbus, snapshot_debugbus, bool, 0600);
 
-bool allow_vram_carveout = false;
-MODULE_PARM_DESC(allow_vram_carveout, "Allow using VRAM Carveout, in place of IOMMU");
-module_param_named(allow_vram_carveout, allow_vram_carveout, bool, 0600);
-
 int enable_preemption = -1;
 MODULE_PARM_DESC(enable_preemption, "Enable preemption (A7xx only) (1=on , 0=disable, -1=auto (default))");
 module_param(enable_preemption, int, 0600);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index b01d9efb8663..35a99c81f7e0 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -209,7 +209,9 @@ adreno_iommu_create_vm(struct msm_gpu *gpu,
 	u64 start, size;
 
 	mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks);
-	if (IS_ERR_OR_NULL(mmu))
+	if (!mmu)
+		return ERR_PTR(-ENODEV);
+	else if (IS_ERR_OR_NULL(mmu))
 		return ERR_CAST(mmu);
 
 	geometry = msm_iommu_get_geometry(mmu);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 258c5c6dde2e..bbd7e664286e 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -18,7 +18,6 @@
 #include "adreno_pm4.xml.h"
 
 extern bool snapshot_debugbus;
-extern bool allow_vram_carveout;
 
 enum {
 	ADRENO_FW_PM4 = 0,
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 903abf3532e0..978f1d355b42 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -46,12 +46,6 @@
 #define MSM_VERSION_MINOR	12
 #define MSM_VERSION_PATCHLEVEL	0
 
-static void msm_deinit_vram(struct drm_device *ddev);
-
-static char *vram = "16m";
-MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without IOMMU/GPUMMU)");
-module_param(vram, charp, 0);
-
 bool dumpstate;
 MODULE_PARM_DESC(dumpstate, "Dump KMS state on errors");
 module_param(dumpstate, bool, 0600);
@@ -97,8 +91,6 @@ static int msm_drm_uninit(struct device *dev)
 	if (priv->kms)
 		msm_drm_kms_uninit(dev);
 
-	msm_deinit_vram(ddev);
-
 	component_unbind_all(dev, ddev);
 
 	ddev->dev_private = NULL;
@@ -109,107 +101,6 @@ static int msm_drm_uninit(struct device *dev)
 	return 0;
 }
 
-bool msm_use_mmu(struct drm_device *dev)
-{
-	struct msm_drm_private *priv = dev->dev_private;
-
-	/*
-	 * a2xx comes with its own MMU
-	 * On other platforms IOMMU can be declared specified either for the
-	 * MDP/DPU device or for its parent, MDSS device.
-	 */
-	return priv->is_a2xx ||
-		device_iommu_mapped(dev->dev) ||
-		device_iommu_mapped(dev->dev->parent);
-}
-
-static int msm_init_vram(struct drm_device *dev)
-{
-	struct msm_drm_private *priv = dev->dev_private;
-	struct device_node *node;
-	unsigned long size = 0;
-	int ret = 0;
-
-	/* In the device-tree world, we could have a 'memory-region'
-	 * phandle, which gives us a link to our "vram".  Allocating
-	 * is all nicely abstracted behind the dma api, but we need
-	 * to know the entire size to allocate it all in one go. There
-	 * are two cases:
-	 *  1) device with no IOMMU, in which case we need exclusive
-	 *     access to a VRAM carveout big enough for all gpu
-	 *     buffers
-	 *  2) device with IOMMU, but where the bootloader puts up
-	 *     a splash screen.  In this case, the VRAM carveout
-	 *     need only be large enough for fbdev fb.  But we need
-	 *     exclusive access to the buffer to avoid the kernel
-	 *     using those pages for other purposes (which appears
-	 *     as corruption on screen before we have a chance to
-	 *     load and do initial modeset)
-	 */
-
-	node = of_parse_phandle(dev->dev->of_node, "memory-region", 0);
-	if (node) {
-		struct resource r;
-		ret = of_address_to_resource(node, 0, &r);
-		of_node_put(node);
-		if (ret)
-			return ret;
-		size = r.end - r.start + 1;
-		DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start);
-
-		/* if we have no IOMMU, then we need to use carveout allocator.
-		 * Grab the entire DMA chunk carved out in early startup in
-		 * mach-msm:
-		 */
-	} else if (!msm_use_mmu(dev)) {
-		DRM_INFO("using %s VRAM carveout\n", vram);
-		size = memparse(vram, NULL);
-	}
-
-	if (size) {
-		unsigned long attrs = 0;
-		void *p;
-
-		priv->vram.size = size;
-
-		drm_mm_init(&priv->vram.mm, 0, (size >> PAGE_SHIFT) - 1);
-		spin_lock_init(&priv->vram.lock);
-
-		attrs |= DMA_ATTR_NO_KERNEL_MAPPING;
-		attrs |= DMA_ATTR_WRITE_COMBINE;
-
-		/* note that for no-kernel-mapping, the vaddr returned
-		 * is bogus, but non-null if allocation succeeded:
-		 */
-		p = dma_alloc_attrs(dev->dev, size,
-				&priv->vram.paddr, GFP_KERNEL, attrs);
-		if (!p) {
-			DRM_DEV_ERROR(dev->dev, "failed to allocate VRAM\n");
-			priv->vram.paddr = 0;
-			return -ENOMEM;
-		}
-
-		DRM_DEV_INFO(dev->dev, "VRAM: %08x->%08x\n",
-				(uint32_t)priv->vram.paddr,
-				(uint32_t)(priv->vram.paddr + size));
-	}
-
-	return ret;
-}
-
-static void msm_deinit_vram(struct drm_device *ddev)
-{
-	struct msm_drm_private *priv = ddev->dev_private;
-	unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING;
-
-	if (!priv->vram.paddr)
-		return;
-
-	drm_mm_takedown(&priv->vram.mm);
-	dma_free_attrs(ddev->dev, priv->vram.size, NULL, priv->vram.paddr,
-			attrs);
-}
-
 static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
 {
 	struct msm_drm_private *priv = dev_get_drvdata(dev);
@@ -256,16 +147,12 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
 		goto err_destroy_wq;
 	}
 
-	ret = msm_init_vram(ddev);
-	if (ret)
-		goto err_destroy_wq;
-
 	dma_set_max_seg_size(dev, UINT_MAX);
 
 	/* Bind all our sub-components: */
 	ret = component_bind_all(dev, ddev);
 	if (ret)
-		goto err_deinit_vram;
+		goto err_destroy_wq;
 
 	ret = msm_gem_shrinker_init(ddev);
 	if (ret)
@@ -302,8 +189,6 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
 
 	return ret;
 
-err_deinit_vram:
-	msm_deinit_vram(ddev);
 err_destroy_wq:
 	destroy_workqueue(priv->wq);
 err_put_dev:
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 0e675c9a7f83..ad509403f072 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -183,17 +183,6 @@ struct msm_drm_private {
 
 	struct msm_drm_thread event_thread[MAX_CRTCS];
 
-	/* VRAM carveout, used when no IOMMU: */
-	struct {
-		unsigned long size;
-		dma_addr_t paddr;
-		/* NOTE: mm managed at the page level, size is in # of pages
-		 * and position mm_node->start is in # of pages:
-		 */
-		struct drm_mm mm;
-		spinlock_t lock; /* Protects drm_mm node allocation/removal */
-	} vram;
-
 	struct notifier_block vmap_notifier;
 	struct shrinker *shrinker;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 07a30d29248c..621fb4e17a2e 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -17,24 +17,8 @@
 #include 
 
 #include "msm_drv.h"
-#include "msm_fence.h"
 #include "msm_gem.h"
 #include "msm_gpu.h"
-#include "msm_mmu.h"
-
-static dma_addr_t physaddr(struct drm_gem_object *obj)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_drm_private *priv = obj->dev->dev_private;
-	return (((dma_addr_t)msm_obj->vram_node->start) << PAGE_SHIFT) +
-			priv->vram.paddr;
-}
-
-static bool use_pages(struct drm_gem_object *obj)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	return !msm_obj->vram_node;
-}
 
 static int pgprot = 0;
 module_param(pgprot, int, 0600);
@@ -139,36 +123,6 @@ static void update_lru(struct drm_gem_object *obj)
 	mutex_unlock(&priv->lru.lock);
 }
 
-/* allocate pages from VRAM carveout, used when no IOMMU: */
-static struct page **get_pages_vram(struct drm_gem_object *obj, int npages)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_drm_private *priv = obj->dev->dev_private;
-	dma_addr_t paddr;
-	struct page **p;
-	int ret, i;
-
-	p = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
-	if (!p)
-		return ERR_PTR(-ENOMEM);
-
-	spin_lock(&priv->vram.lock);
-	ret = drm_mm_insert_node(&priv->vram.mm, msm_obj->vram_node, npages);
-	spin_unlock(&priv->vram.lock);
-	if (ret) {
-		kvfree(p);
-		return ERR_PTR(ret);
-	}
-
-	paddr = physaddr(obj);
-	for (i = 0; i < npages; i++) {
-		p[i] = pfn_to_page(__phys_to_pfn(paddr));
-		paddr += PAGE_SIZE;
-	}
-
-	return p;
-}
-
 static struct page **get_pages(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
@@ -180,10 +134,7 @@ static struct page **get_pages(struct drm_gem_object *obj)
 		struct page **p;
 		int npages = obj->size >> PAGE_SHIFT;
 
-		if (use_pages(obj))
-			p = drm_gem_get_pages(obj);
-		else
-			p = get_pages_vram(obj, npages);
+		p = drm_gem_get_pages(obj);
 
 		if (IS_ERR(p)) {
 			DRM_DEV_ERROR(dev->dev, "could not get pages: %ld\n",
@@ -216,18 +167,6 @@ static struct page **get_pages(struct drm_gem_object *obj)
 	return msm_obj->pages;
 }
 
-static void put_pages_vram(struct drm_gem_object *obj)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_drm_private *priv = obj->dev->dev_private;
-
-	spin_lock(&priv->vram.lock);
-	drm_mm_remove_node(msm_obj->vram_node);
-	spin_unlock(&priv->vram.lock);
-
-	kvfree(msm_obj->pages);
-}
-
 static void put_pages(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
@@ -248,10 +187,7 @@ static void put_pages(struct drm_gem_object *obj)
 
 		update_device_mem(obj->dev->dev_private, -obj->size);
 
-		if (use_pages(obj))
-			drm_gem_put_pages(obj, msm_obj->pages, true, false);
-		else
-			put_pages_vram(obj);
+		drm_gem_put_pages(obj, msm_obj->pages, true, false);
 
 		msm_obj->pages = NULL;
 		update_lru(obj);
@@ -1215,19 +1151,10 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32
 	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_gem_object *msm_obj;
 	struct drm_gem_object *obj = NULL;
-	bool use_vram = false;
 	int ret;
 
 	size = PAGE_ALIGN(size);
 
-	if (!msm_use_mmu(dev))
-		use_vram = true;
-	else if ((flags & (MSM_BO_STOLEN | MSM_BO_SCANOUT)) && priv->vram.size)
-		use_vram = true;
-
-	if (GEM_WARN_ON(use_vram && !priv->vram.size))
-		return ERR_PTR(-EINVAL);
-
 	/* Disallow zero sized objects as they make the underlying
 	 * infrastructure grumpy
 	 */
@@ -1240,44 +1167,16 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32
 
 	msm_obj = to_msm_bo(obj);
 
-	if (use_vram) {
-		struct msm_gem_vma *vma;
-		struct page **pages;
-
-		drm_gem_private_object_init(dev, obj, size);
-
-		msm_gem_lock(obj);
-
-		vma = add_vma(obj, NULL);
-		msm_gem_unlock(obj);
-		if (IS_ERR(vma)) {
-			ret = PTR_ERR(vma);
-			goto fail;
-		}
-
-		to_msm_bo(obj)->vram_node = &vma->node;
-
-		msm_gem_lock(obj);
-		pages = get_pages(obj);
-		msm_gem_unlock(obj);
-		if (IS_ERR(pages)) {
-			ret = PTR_ERR(pages);
-			goto fail;
-		}
-
-		vma->iova = physaddr(obj);
-	} else {
-		ret = drm_gem_object_init(dev, obj, size);
-		if (ret)
-			goto fail;
-		/*
-		 * Our buffers are kept pinned, so allocating them from the
-		 * MOVABLE zone is a really bad idea, and conflicts with CMA.
-		 * See comments above new_inode() why this is required _and_
-		 * expected if you're going to pin these pages.
-		 */
-		mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER);
-	}
+	ret = drm_gem_object_init(dev, obj, size);
+	if (ret)
+		goto fail;
+	/*
+	 * Our buffers are kept pinned, so allocating them from the
+	 * MOVABLE zone is a really bad idea, and conflicts with CMA.
+	 * See comments above new_inode() why this is required _and_
+	 * expected if you're going to pin these pages.
+	 */
+	mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER);
 
 	drm_gem_lru_move_tail(&priv->lru.unbacked, obj);
@@ -1305,12 +1204,6 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 	uint32_t size;
 	int ret, npages;
 
-	/* if we don't have IOMMU, don't bother pretending we can import: */
-	if (!msm_use_mmu(dev)) {
-		DRM_DEV_ERROR(dev->dev, "cannot import without IOMMU\n");
-		return ERR_PTR(-EINVAL);
-	}
-
 	size = PAGE_ALIGN(dmabuf->size);
 
 	ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index d2f39a371373..c16b11182831 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -102,11 +102,6 @@ struct msm_gem_object {
 
 	struct list_head vmas;    /* list of msm_gem_vma */
 
-	/* For physically contiguous buffers.  Used when we don't have
-	 * an IOMMU.  Also used for stolen/splashscreen buffer.
-	 */
-	struct drm_mm_node *vram_node;
-
 	char name[32]; /* Identifier to print for the debugfs files */
 
 	/* userspace metadata backchannel */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index a59816b6b6de..c184b1a1f522 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -669,11 +669,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;
 
-	if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) {
-		DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n");
-		return -EPERM;
-	}
-
 	/* for now, we just have 3d pipe..
eventually this would need to * be more clever to dispatch to appropriate gpu module: */ diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 0d466a2e9b32..b30800f80120 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -944,12 +944,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, msm_devfreq_init(gpu); - gpu->vm = gpu->funcs->create_vm(gpu, pdev); - - if (gpu->vm == NULL) - DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", name); - else if (IS_ERR(gpu->vm)) { + if (IS_ERR(gpu->vm)) { ret = PTR_ERR(gpu->vm); goto fail; } From patchwork Mon May 19 17:51:33 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 891136 Received: from mail-pf1-f178.google.com (mail-pf1-f178.google.com [209.85.210.178]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 96F3E28A704; Mon, 19 May 2025 17:54:47 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.178 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747677289; cv=none; b=IjBuHVB1rYdHpHmHym23XHUrRNXZkPDomHiKLw5UXoeyCzhVlHPPlsT2EJIr09VufUJyDhOvcAmlMiX6Bxs80nX3K3iDtCoB3bDCrXqZyWD4YUldrrXXMtVF0spgyYdfr7DwZWqWAimnE3nf3kXjOprOFpezrj3YOf1R2Cul0vw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747677289; c=relaxed/simple; bh=2WpPjdOnE7lRX5ZCDjp17CsoLPIlDS7/qvt4/yxrQdY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=nbqZEIRP8Qf5CgGBBXWtnlFamqY/o4EkoLpiEaXWmZPKcRyKrpxNYYDqicC3SyP5fsqaSCgRgQsxtEG4RiV8MZiO09/ykx7tqYdRa7U4Wq09MvU+Ny1uV0cDn2/YlYyHpR0fd2k8cggP+KO4GhpxmE3VC7K/EuahGdceX67oHwM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 10/40] drm/msm: Collapse vma allocation and initialization
Date: Mon, 19 May 2025 10:51:33 -0700
Message-ID: <20250519175348.11924-11-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>
From: Rob Clark

Now that we've dropped vram carveout support, we can collapse vma
allocation and initialization.  This better matches how things work
with drm_gpuvm.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c     | 30 +++----------------------
 drivers/gpu/drm/msm/msm_gem.h     |  4 ++--
 drivers/gpu/drm/msm/msm_gem_vma.c | 36 +++++++++++++------------------
 3 files changed, 20 insertions(+), 50 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 621fb4e17a2e..29247911f048 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -337,23 +337,6 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
 	return offset;
 }
 
-static struct msm_gem_vma *add_vma(struct drm_gem_object *obj,
-		struct msm_gem_vm *vm)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_gem_vma *vma;
-
-	msm_gem_assert_locked(obj);
-
-	vma = msm_gem_vma_new(vm);
-	if (!vma)
-		return ERR_PTR(-ENOMEM);
-
-	list_add_tail(&vma->list, &msm_obj->vmas);
-
-	return vma;
-}
-
 static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
 		struct msm_gem_vm *vm)
 {
@@ -420,6 +403,7 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
 		struct msm_gem_vm *vm,
 		u64 range_start, u64 range_end)
 {
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_gem_vma *vma;
 
 	msm_gem_assert_locked(obj);
@@ -427,18 +411,10 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
 
 	vma = lookup_vma(obj, vm);
 	if (!vma) {
-		int ret;
-
-		vma = add_vma(obj, vm);
+		vma = msm_gem_vma_new(vm, obj, range_start, range_end);
 		if (IS_ERR(vma))
 			return vma;
-
-		ret = msm_gem_vma_init(vma, obj->size,
-			range_start, range_end);
-		if (ret) {
-			del_vma(vma);
-			return ERR_PTR(ret);
-		}
+		list_add_tail(&vma->list, &msm_obj->vmas);
 	} else {
 		GEM_WARN_ON(vma->iova < range_start);
 		GEM_WARN_ON((vma->iova + obj->size) > range_end);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index c16b11182831..9bd78642671c 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -66,8 +66,8 @@ struct msm_gem_vma {
 	bool mapped;
 };
 
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm);
-int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
+struct msm_gem_vma *
+msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 		u64 range_start, u64 range_end);
 void msm_gem_vma_purge(struct msm_gem_vma *vma);
 int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size);
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 9419692f0cc8..6d18364f321c 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -106,47 +106,41 @@ void msm_gem_vma_close(struct msm_gem_vma *vma)
 	msm_gem_vm_put(vm);
 }
 
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm)
+/* Create a new vma and allocate an iova for it */
+struct msm_gem_vma *
+msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
+		u64 range_start, u64 range_end)
 {
 	struct msm_gem_vma *vma;
+	int ret;
 
 	vma = kzalloc(sizeof(*vma), GFP_KERNEL);
 	if (!vma)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	vma->vm = vm;
 
-	return vma;
-}
-
-/* Initialize a new vma and allocate an iova for it */
-int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
-		u64 range_start, u64 range_end)
-{
-	struct msm_gem_vm *vm = vma->vm;
-	int ret;
-
-	if (GEM_WARN_ON(!vm))
-		return -EINVAL;
-
-	if (GEM_WARN_ON(vma->iova))
-		return -EBUSY;
-
 	spin_lock(&vm->lock);
 	ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
-					  size, PAGE_SIZE, 0,
+					  obj->size, PAGE_SIZE, 0,
 					  range_start, range_end, 0);
 	spin_unlock(&vm->lock);
 
 	if (ret)
-		return ret;
+		goto err_free_vma;
 
 	vma->iova = vma->node.start;
 	vma->mapped = false;
 
+	INIT_LIST_HEAD(&vma->list);
+
 	kref_get(&vm->kref);
 
-	return 0;
+	return vma;
+
+err_free_vma:
+	kfree(vma);
+	return ERR_PTR(ret);
 }
 
 struct msm_gem_vm *

From patchwork Mon May 19 17:51:35 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891135
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 12/40] drm/msm: Don't close VMAs on purge
Date: Mon, 19 May 2025 10:51:35 -0700
Message-ID: <20250519175348.11924-13-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>

From: Rob Clark

Previously we'd also tear down the VMA, making the address space
available again.  But with the drm_gpuvm conversion, this would require
holding the locks of all VMs the GEM object is mapped in, which is
problematic for the shrinker.  Instead just let the VMA hang around
until the GEM object is freed.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 4c10eca404e0..50b866dcf439 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -763,7 +763,7 @@ void msm_gem_purge(struct drm_gem_object *obj)
 	GEM_WARN_ON(!is_purgeable(msm_obj));
 
 	/* Get rid of any iommu mapping(s): */
-	put_iova_spaces(obj, true);
+	put_iova_spaces(obj, false);
 
 	msm_gem_vunmap(obj);
 

From patchwork Mon May 19 17:57:11 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891134
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Sumit Semwal, Christian König, linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v5 14/40] drm/msm: Convert vm locking
Date: Mon, 19 May 2025 10:57:11 -0700
Message-ID: <20250519175755.13037-2-robdclark@gmail.com>
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

Convert to using the gpuvm's r_obj for serializing access to the VM.
This way we can use the drm_exec helper for dealing with deadlock
detection and backoff.

This will let us deal with upcoming locking order conflicts with the
VM_BIND implementation (ie. in some scenarios we need to acquire the obj
lock first, for ex. to iterate all the VMs an obj is bound in, and in
other scenarios we need to acquire the VM lock first).

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c          | 35 ++++++++---
 drivers/gpu/drm/msm/msm_gem.h          | 37 ++++++++++--
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 80 +++++++++++++++++++++++---
 drivers/gpu/drm/msm/msm_gem_submit.c   |  9 ++-
 drivers/gpu/drm/msm/msm_gem_vma.c      | 27 ++++-----
 5 files changed, 150 insertions(+), 38 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 3b7db3b3f763..b7055805a5dd 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -52,6 +52,7 @@ static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bo
 static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 {
 	struct msm_context *ctx = file->driver_priv;
+	struct drm_exec exec;
 
 	update_ctx_mem(file, -obj->size);
 
@@ -70,9 +71,9 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 	dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false,
 			      msecs_to_jiffies(1000));
 
-	msm_gem_lock(obj);
+	msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm);
 	put_iova_spaces(obj, &ctx->vm->base, true);
-	msm_gem_unlock(obj);
+	drm_exec_fini(&exec);     /* drop locks */
 }
 
 /*
@@ -538,11 +539,12 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
 		struct msm_gem_vm *vm, uint64_t *iova,
 		u64 range_start, u64 range_end)
 {
+	struct drm_exec exec;
 	int ret;
 
-	msm_gem_lock(obj);
+	msm_gem_lock_vm_and_obj(&exec, obj, vm);
 	ret = get_and_pin_iova_range_locked(obj, vm, iova, range_start, range_end);
-	msm_gem_unlock(obj);
+	drm_exec_fini(&exec);     /* drop locks */
 
 	return ret;
 }
@@ -562,16 +564,17 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
 		struct msm_gem_vm *vm, uint64_t *iova)
 {
 	struct msm_gem_vma *vma;
+	struct drm_exec exec;
 	int ret = 0;
 
-	msm_gem_lock(obj);
+	msm_gem_lock_vm_and_obj(&exec, obj, vm);
 	vma = get_vma_locked(obj, vm, 0, U64_MAX);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
 	} else {
 		*iova = vma->base.va.addr;
 	}
-	msm_gem_unlock(obj);
+	drm_exec_fini(&exec);     /* drop locks */
 
 	return ret;
 }
@@ -600,9 +603,10 @@ static int clear_iova(struct drm_gem_object *obj,
 int msm_gem_set_iova(struct drm_gem_object *obj,
 		struct msm_gem_vm *vm, uint64_t iova)
 {
+	struct drm_exec exec;
 	int ret = 0;
 
-	msm_gem_lock(obj);
+	msm_gem_lock_vm_and_obj(&exec, obj, vm);
 	if (!iova) {
 		ret = clear_iova(obj, vm);
 	} else {
@@ -615,7 +619,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
 			ret = -EBUSY;
 		}
 	}
-	msm_gem_unlock(obj);
+	drm_exec_fini(&exec);     /* drop locks */
 
 	return ret;
 }
@@ -1007,12 +1011,27 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct drm_device *dev = obj->dev;
 	struct msm_drm_private *priv = dev->dev_private;
+	struct drm_exec exec;
 
 	mutex_lock(&priv->obj_lock);
 	list_del(&msm_obj->node);
 	mutex_unlock(&priv->obj_lock);
 
+	/*
+	 * We need to lock any VMs the object is still attached to, but not
+	 * the object itself (see explanation in msm_gem_assert_locked()),
+	 * so just open-code this special case:
+	 */
+	drm_exec_init(&exec, 0, 0);
+	drm_exec_until_all_locked (&exec) {
+		struct drm_gpuvm_bo *vm_bo;
+		drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+			drm_exec_lock_obj(&exec,
+					  drm_gpuvm_resv_obj(vm_bo->vm));
+			drm_exec_retry_on_contention(&exec);
+		}
+	}
 	put_iova_spaces(obj, NULL, true);
+	drm_exec_fini(&exec);     /* drop locks */
 
 	if (obj->import_attach) {
 		GEM_WARN_ON(msm_obj->vaddr);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index f7f7e7910754..36a846e9b943 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -62,12 +62,6 @@ struct msm_gem_vm {
 	 */
 	struct drm_mm mm;
 
-	/** @mm_lock: protects @mm node allocation/removal */
-	struct spinlock mm_lock;
-
-	/** @vm_lock: protects gpuvm insert/remove/traverse */
-	struct mutex vm_lock;
-
 	/** @mmu: The mmu object which manages the pgtables */
 	struct msm_mmu *mmu;
 
@@ -246,6 +240,37 @@ msm_gem_unlock(struct drm_gem_object *obj)
 	dma_resv_unlock(obj->resv);
 }
 
+/**
+ * msm_gem_lock_vm_and_obj() - Helper to lock an obj + VM
+ * @exec: the exec context helper which will be initialized
+ * @obj: the GEM object to lock
+ * @vm: the VM to lock
+ *
+ * Operations which modify a VM frequently need to lock both the VM and
+ * the object being mapped/unmapped/etc.  This helper uses drm_exec to
+ * acquire both locks, dealing with potential deadlock/backoff scenarios
+ * which arise when multiple locks are involved.
+ */
+static inline int
+msm_gem_lock_vm_and_obj(struct drm_exec *exec,
+			struct drm_gem_object *obj,
+			struct msm_gem_vm *vm)
+{
+	int ret = 0;
+
+	drm_exec_init(exec, 0, 2);
+	drm_exec_until_all_locked (exec) {
+		ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(&vm->base));
+		if (!ret && (obj->resv != drm_gpuvm_resv(&vm->base)))
+			ret = drm_exec_lock_obj(exec, obj);
+		drm_exec_retry_on_contention(exec);
+		if (GEM_WARN_ON(ret))
+			break;
+	}
+
+	return ret;
+}
+
 static inline void
 msm_gem_assert_locked(struct drm_gem_object *obj)
 {
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index de185fc34084..5faf6227584a 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -43,6 +43,75 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 	return count;
 }
 
+static bool
+with_vm_locks(struct ww_acquire_ctx *ticket,
+	      void (*fn)(struct drm_gem_object *obj),
+	      struct drm_gem_object *obj)
+{
+	/*
+	 * Track last locked entry for unwinding locks in error and
+	 * success paths
+	 */
+	struct drm_gpuvm_bo *vm_bo, *last_locked = NULL;
+	int ret = 0;
+
+	drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+		struct dma_resv *resv = drm_gpuvm_resv(vm_bo->vm);
+
+		if (resv == obj->resv)
+			continue;
+
+		ret = dma_resv_lock(resv, ticket);
+
+		/*
+		 * Since we already skip the case when the VM and obj
+		 * share a resv (ie. _NO_SHARE objs), we don't expect
+		 * to hit a double-locking scenario... which the lock
+		 * unwinding cannot really cope with.
+		 */
+		WARN_ON(ret == -EALREADY);
+
+		/*
+		 * Don't bother with slow-lock / backoff / retry sequence,
+		 * if we can't get the lock just give up and move on to
+		 * the next object.
+		 */
+		if (ret)
+			goto out_unlock;
+
+		/*
+		 * Hold a ref to prevent the vm_bo from being freed
+		 * and removed from the obj's gpuva list, as that would
+		 * result in missing the unlock below
+		 */
+		drm_gpuvm_bo_get(vm_bo);
+
+		last_locked = vm_bo;
+	}
+
+	fn(obj);
+
+out_unlock:
+	if (last_locked) {
+		drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+			struct dma_resv *resv = drm_gpuvm_resv(vm_bo->vm);
+
+			if (resv == obj->resv)
+				continue;
+
+			dma_resv_unlock(resv);
+
+			/* Drop the ref taken while locking: */
+			drm_gpuvm_bo_put(vm_bo);
+
+			if (last_locked == vm_bo)
+				break;
+		}
+	}
+
+	return ret == 0;
+}
+
 static bool
 purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
@@ -52,9 +121,7 @@ purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 	if (msm_gem_active(obj))
 		return false;
 
-	msm_gem_purge(obj);
-
-	return true;
+	return with_vm_locks(ticket, msm_gem_purge, obj);
 }
 
 static bool
@@ -66,9 +133,7 @@ evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 	if (msm_gem_active(obj))
 		return false;
 
-	msm_gem_evict(obj);
-
-	return true;
+	return with_vm_locks(ticket, msm_gem_evict, obj);
 }
 
 static bool
@@ -100,6 +165,7 @@ static unsigned long
 msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 {
 	struct msm_drm_private *priv = shrinker->private_data;
+	struct ww_acquire_ctx ticket;
 	struct {
 		struct drm_gem_lru *lru;
 		bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket);
@@ -124,7 +190,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 		drm_gem_lru_scan(stages[i].lru, nr,
 				 &stages[i].remaining,
 				 stages[i].shrink,
-				 NULL);
+				 &ticket);
 		nr -= stages[i].freed;
 		freed += stages[i].freed;
 		remaining += stages[i].remaining;
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 86791a854c42..6924d03026ba 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -256,11 +256,18 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
 /* This is where we make sure all the bo's are reserved and pin'd: */
 static int submit_lock_objects(struct msm_gem_submit *submit)
 {
+	unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT;
 	int ret;
 
-	drm_exec_init(&submit->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, submit->nr_bos);
+// TODO need to add vm_bind path which locks vm resv + external objs
+	drm_exec_init(&submit->exec, flags, submit->nr_bos);
 
 	drm_exec_until_all_locked (&submit->exec) {
+		ret = drm_exec_lock_obj(&submit->exec,
+					drm_gpuvm_resv_obj(&submit->vm->base));
+		drm_exec_retry_on_contention(&submit->exec);
+		if (ret)
+			goto error;
 		for (unsigned i = 0; i < submit->nr_bos; i++) {
 			struct drm_gem_object *obj = submit->bos[i].obj;
 			ret = drm_exec_prepare_obj(&submit->exec, obj, 1);
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index d1621761ef36..e294e7f6e723 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -92,15 +92,13 @@ void msm_gem_vma_close(struct msm_gem_vma *vma)
 
 	GEM_WARN_ON(vma->mapped);
 
-	spin_lock(&vm->mm_lock);
+	drm_gpuvm_resv_assert_held(&vm->base);
+
 	if (vma->base.va.addr)
 		drm_mm_remove_node(&vma->node);
-	spin_unlock(&vm->mm_lock);
 
-	mutex_lock(&vm->vm_lock);
 	drm_gpuva_remove(&vma->base);
 	drm_gpuva_unlink(&vma->base);
-	mutex_unlock(&vm->vm_lock);
 
 	kfree(vma);
 }
@@ -114,16 +112,16 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 	struct msm_gem_vma *vma;
 	int ret;
 
+	drm_gpuvm_resv_assert_held(&vm->base);
+
 	vma = kzalloc(sizeof(*vma), GFP_KERNEL);
 	if (!vma)
 		return ERR_PTR(-ENOMEM);
 
 	if (vm->managed) {
-		spin_lock(&vm->mm_lock);
 		ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 						  obj->size, PAGE_SIZE, 0,
 						  range_start, range_end, 0);
-		spin_unlock(&vm->mm_lock);
 
 		if (ret)
 			goto err_free_vma;
@@ -137,9 +135,7 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
 	vma->mapped = false;
 
-	mutex_lock(&vm->vm_lock);
 	ret = drm_gpuva_insert(&vm->base, &vma->base);
-	mutex_unlock(&vm->vm_lock);
 	if (ret)
 		goto err_free_range;
 
@@ -149,18 +145,14 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 		goto err_va_remove;
 	}
 
-	mutex_lock(&vm->vm_lock);
 	drm_gpuvm_bo_extobj_add(vm_bo);
 	drm_gpuva_link(&vma->base, vm_bo);
-	mutex_unlock(&vm->vm_lock);
 
 	GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo));
 
 	return vma;
 
 err_va_remove:
-	mutex_lock(&vm->vm_lock);
 	drm_gpuva_remove(&vma->base);
-	mutex_unlock(&vm->vm_lock);
 err_free_range:
 	if (vm->managed)
 		drm_mm_remove_node(&vma->node);
@@ -191,7 +183,13 @@ struct msm_gem_vm *
 msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 		  u64 va_start, u64 va_size, bool managed)
 {
-	enum drm_gpuvm_flags flags = managed ? DRM_GPUVM_VA_WEAK_REF : 0;
+	/*
+	 * We mostly want to use DRM_GPUVM_RESV_PROTECTED, except that
+	 * makes drm_gpuvm_bo_evict() a no-op for extobjs (ie. we lose
+	 * tracking that an extobj is evicted) :facepalm:
+	 */
+	enum drm_gpuvm_flags flags =
+		(managed ? DRM_GPUVM_VA_WEAK_REF : 0);
 	struct msm_gem_vm *vm;
 	struct drm_gem_object *dummy_gem;
 	int ret = 0;
@@ -213,9 +211,6 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 			    va_start, va_size, 0, 0, &msm_gpuvm_ops);
 	drm_gem_object_put(dummy_gem);
 
-	spin_lock_init(&vm->mm_lock);
-	mutex_init(&vm->vm_lock);
-
 	vm->mmu = mmu;
 	vm->managed = managed;

From patchwork Mon May 19 17:57:13 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891133
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 16/40] drm/msm: Split out helper to get iommu prot flags
Date: Mon, 19 May 2025 10:57:13 -0700
Message-ID: <20250519175755.13037-4-robdclark@gmail.com>

We'll re-use this in the vm_bind path.
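[Editorial note: the helper split out in this patch is a small flags translation. As a rough self-contained model of that logic (the flag names and bit values below are illustrative stand-ins, not the kernel's actual definitions), it amounts to:]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for kernel flag bits (values hypothetical). */
#define IOMMU_READ             (1u << 0)
#define IOMMU_WRITE            (1u << 1)
#define IOMMU_CACHE            (1u << 2)
#define MSM_BO_GPU_READONLY    (1u << 0)
#define MSM_BO_CACHED_COHERENT (1u << 1)

/* Rough model of msm_gem_prot(): buffer-object flags in,
 * IOMMU mapping permissions out. */
static unsigned gem_prot(uint32_t bo_flags)
{
	unsigned prot = IOMMU_READ;             /* GPU can always read */

	if (!(bo_flags & MSM_BO_GPU_READONLY))
		prot |= IOMMU_WRITE;            /* writable unless read-only */

	if (bo_flags & MSM_BO_CACHED_COHERENT)
		prot |= IOMMU_CACHE;            /* cacheable, coherent mapping */

	return prot;
}
```

[The real msm_gem_prot() additionally handles the LLC cache-hint flags visible in the diff below; the sketch keeps only the read/write/cache bits.]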
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c | 12 ++++++++++--
 drivers/gpu/drm/msm/msm_gem.h |  1 +
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 81500066369f..5b8b9c1d6c74 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -444,10 +444,9 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj,
	return vma;
}

-int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
+int msm_gem_prot(struct drm_gem_object *obj)
{
	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct page **pages;
	int prot = IOMMU_READ;

	if (!(msm_obj->flags & MSM_BO_GPU_READONLY))
@@ -463,6 +462,15 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
	else if (prot == 2)
		prot |= IOMMU_USE_LLC_NWA;

+	return prot;
+}
+
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
+{
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
+	struct page **pages;
+	int prot = msm_gem_prot(obj);
+
	msm_gem_assert_locked(obj);

	pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 813e886bc43f..3a853fcb8944 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -158,6 +158,7 @@ struct msm_gem_object {
#define to_msm_bo(x) container_of(x, struct msm_gem_object, base)

uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
+int msm_gem_prot(struct drm_gem_object *obj);
int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma);
void msm_gem_unpin_locked(struct drm_gem_object *obj);
void msm_gem_unpin_active(struct drm_gem_object *obj);

From patchwork Mon May 19 17:57:15 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
 Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
 Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 18/40] drm/msm: Add PRR support
Date: Mon, 19 May 2025 10:57:15 -0700
Message-ID: <20250519175755.13037-6-robdclark@gmail.com>

Add PRR (Partial Resident Region) support. PRR is a bypass address
which makes GPU writes go to /dev/null and reads return zero. This is
used to implement vulkan sparse residency.

To support PRR/NULL mappings, we allocate a page to reserve a physical
address which we know will not be used as part of a GEM object, and
configure the SMMU to use this address for PRR/NULL mappings.
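[Editorial note: the core of the implementation below is a loop that points every page of the requested range at the single PRR page, and unwinds any partial mapping on failure. A self-contained sketch of that control flow, with stub map/unmap functions standing in for the io-pgtable ops (all names and the failure-injection hook are hypothetical):]

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Toy stand-ins for io-pgtable's map_pages()/unmap; we just count
 * how many pages are currently mapped. */
static size_t mapped_pages;
static int fail_after;          /* inject a failure for testing */

static int map_one(uint64_t iova, uint64_t phys, size_t *mapped)
{
	(void)iova; (void)phys;
	if (fail_after-- == 0)
		return -1;          /* simulate map_pages() failing */
	*mapped = PAGE_SIZE;
	mapped_pages++;
	return 0;
}

static void unmap_range(uint64_t iova, size_t len)
{
	(void)iova;
	mapped_pages -= len / PAGE_SIZE;
}

/* Same control flow as msm_iommu_pagetable_map_prr(): every page of
 * the range maps to the one PRR page; on error, unwind what was
 * already mapped so the caller sees all-or-nothing. */
static int map_prr(uint64_t iova, size_t len, uint64_t prr_phys)
{
	uint64_t addr = iova;

	while (len) {
		size_t mapped = 0;
		int ret = map_one(addr, prr_phys, &mapped);

		/* map_pages() may partially succeed, so advance first */
		addr += mapped;
		len  -= mapped;

		if (ret) {
			unmap_range(iova, addr - iova);
			return -1;
		}
	}
	return 0;
}
```

[Updating the counters before checking `ret` mirrors the comment in the patch: a partial success must still be accounted for so the unwind covers exactly the pages that were mapped.]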
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 10 ++++
 drivers/gpu/drm/msm/msm_iommu.c         | 62 ++++++++++++++++++++++++-
 include/uapi/drm/msm_drm.h              |  2 +
 3 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index f6624a246694..e24f627daf37 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -361,6 +361,13 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
	return 0;
}

+static bool
+adreno_smmu_has_prr(struct msm_gpu *gpu)
+{
+	struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(&gpu->pdev->dev);
+	return adreno_smmu && adreno_smmu->set_prr_addr;
+}
+
int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
		     uint32_t param, uint64_t *value, uint32_t *len)
{
@@ -444,6 +451,9 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
	case MSM_PARAM_UCHE_TRAP_BASE:
		*value = adreno_gpu->uche_trap_base;
		return 0;
+	case MSM_PARAM_HAS_PRR:
+		*value = adreno_smmu_has_prr(gpu);
+		return 0;
	default:
		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
	}
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 2fd48e66bc98..756bd55ee94f 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -13,6 +13,7 @@ struct msm_iommu {
	struct msm_mmu base;
	struct iommu_domain *domain;
	atomic_t pagetables;
+	struct page *prr_page;
};

#define to_msm_iommu(x) container_of(x, struct msm_iommu, base)
@@ -112,6 +113,36 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
	return (size == 0) ?
		0 : -EINVAL;
}

+static int msm_iommu_pagetable_map_prr(struct msm_mmu *mmu, u64 iova, size_t len, int prot)
+{
+	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
+	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
+	struct msm_iommu *iommu = to_msm_iommu(pagetable->parent);
+	phys_addr_t phys = page_to_phys(iommu->prr_page);
+	u64 addr = iova;
+
+	while (len) {
+		size_t mapped = 0;
+		size_t size = PAGE_SIZE;
+		int ret;
+
+		ret = ops->map_pages(ops, addr, phys, size, 1, prot, GFP_KERNEL, &mapped);
+
+		/* map_pages could fail after mapping some of the pages,
+		 * so update the counters before error handling.
+		 */
+		addr += mapped;
+		len -= mapped;
+
+		if (ret) {
+			msm_iommu_pagetable_unmap(mmu, iova, addr - iova);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
				   struct sg_table *sgt, size_t off, size_t len,
				   int prot)
@@ -122,6 +153,9 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
	u64 addr = iova;
	unsigned int i;

+	if (!sgt)
+		return msm_iommu_pagetable_map_prr(mmu, iova, len, prot);
+
	for_each_sgtable_sg(sgt, sg, i) {
		size_t size = sg->length;
		phys_addr_t phys = sg_phys(sg);
@@ -177,9 +211,16 @@ static void msm_iommu_pagetable_destroy(struct msm_mmu *mmu)
	 * If this is the last attached pagetable for the parent,
	 * disable TTBR0 in the arm-smmu driver
	 */
-	if (atomic_dec_return(&iommu->pagetables) == 0)
+	if (atomic_dec_return(&iommu->pagetables) == 0) {
		adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, NULL);

+		if (adreno_smmu->set_prr_bit) {
+			adreno_smmu->set_prr_bit(adreno_smmu->cookie, false);
+			__free_page(iommu->prr_page);
+			iommu->prr_page = NULL;
+		}
+	}
+
	free_io_pgtable_ops(pagetable->pgtbl_ops);
	kfree(pagetable);
}
@@ -336,6 +377,25 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
		kfree(pagetable);
		return ERR_PTR(ret);
	}
+
+	BUG_ON(iommu->prr_page);
+	if (adreno_smmu->set_prr_bit) {
+		/*
+		 * We need a zero'd page for two reasons:
+		 *
+		 * 1)
Reserve a known physical address to use when
+		 *    mapping NULL / sparsely resident regions
+		 * 2) Read back zero
+		 *
+		 * It appears the hw drops writes to the PRR region
+		 * on the floor, but reads actually return whatever
+		 * is in the PRR page.
+		 */
+		iommu->prr_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+
+		adreno_smmu->set_prr_addr(adreno_smmu->cookie,
+					  page_to_phys(iommu->prr_page));
+		adreno_smmu->set_prr_bit(adreno_smmu->cookie, true);
+	}
}

/* Needed later for TLB flush */
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 2342cb90857e..5bc5e4526ccf 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -91,6 +91,8 @@ struct drm_msm_timespec {
#define MSM_PARAM_UBWC_SWIZZLE 0x12 /* RO */
#define MSM_PARAM_MACROTILE_MODE 0x13 /* RO */
#define MSM_PARAM_UCHE_TRAP_BASE 0x14 /* RO */
+/* PRR (Partially Resident Region) is required for sparse residency: */
+#define MSM_PARAM_HAS_PRR 0x15 /* RO */

/* For backwards compat. The original support for preemption was based on
 * a single ring per priority level so # of priority levels equals the #

From patchwork Mon May 19 17:57:17 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 20/40] drm/msm: Drop queued submits on lastclose()
Date: Mon, 19 May 2025 10:57:17 -0700
Message-ID: <20250519175755.13037-8-robdclark@gmail.com>

If we haven't written the submit into the ringbuffer yet, then drop it.
The submit still retires through the normal path, to preserve fence
signalling order, but we can skip the IBs to userspace cmdstream.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.c        | 1 +
 drivers/gpu/drm/msm/msm_gpu.h        | 8 ++++++++
 drivers/gpu/drm/msm/msm_ringbuffer.c | 6 ++++++
 3 files changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 6ef29bc48bb0..5909720be48d 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -250,6 +250,7 @@ static int msm_open(struct drm_device *dev, struct drm_file *file)

static void context_close(struct msm_context *ctx)
{
+	ctx->closed = true;
	msm_submitqueue_close(ctx);
	msm_context_put(ctx);
}
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index d8425e6d7f5a..bfaec80e5f2d 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -362,6 +362,14 @@ struct msm_context {
	 */
	int queueid;

+	/**
+	 * @closed: The device file associated with this context has been closed.
+	 *
+	 * Once the device is closed, any submits that have not been written
+	 * to the ring buffer are no-op'd.
+	 */
+	bool closed;
+
	/** @vm: the per-process GPU address-space */
	struct drm_gpuvm *vm;
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
index bbf8503f6bb5..b8bcd5d9690d 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
@@ -17,6 +17,7 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
	struct msm_fence_context *fctx = submit->ring->fctx;
	struct msm_gpu *gpu = submit->gpu;
	struct msm_drm_private *priv = gpu->dev->dev_private;
+	unsigned nr_cmds = submit->nr_cmds;
	int i;

	msm_fence_init(submit->hw_fence, fctx);
@@ -36,8 +37,13 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
	/* TODO move submit path over to using a per-ring lock.. */
	mutex_lock(&gpu->lock);

+	if (submit->queue->ctx->closed)
+		submit->nr_cmds = 0;
+
	msm_gpu_submit(gpu, submit);

+	submit->nr_cmds = nr_cmds;
+
	mutex_unlock(&gpu->lock);

	return dma_fence_get(submit->hw_fence);

From patchwork Mon May 19 17:57:19 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
 Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
 Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 22/40] drm/msm: Add opt-in for VM_BIND
Date: Mon, 19 May 2025 10:57:19 -0700
Message-ID: <20250519175755.13037-10-robdclark@gmail.com>

Add a SET_PARAM for userspace to request to manage the VM itself,
instead of getting a kernel managed VM.

In order to transition to a userspace managed VM, this param must be
set before any mappings are created.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  4 ++--
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 15 +++++++++++++
 drivers/gpu/drm/msm/msm_drv.c           | 22 +++++++++++++++++--
 drivers/gpu/drm/msm/msm_gem.c           |  8 +++++++
 drivers/gpu/drm/msm/msm_gpu.c           |  5 +++--
 drivers/gpu/drm/msm/msm_gpu.h           | 29 +++++++++++++++++++++++--
 include/uapi/drm/msm_drm.h              | 24 ++++++++++++++++++++
 7 files changed, 99 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 0d7c2a2eeb8f..f0e37733c65d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2263,7 +2263,7 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
}

static struct drm_gpuvm *
-a6xx_create_private_vm(struct msm_gpu *gpu)
+a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed)
{
	struct msm_mmu *mmu;

@@ -2273,7 +2273,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu)
		return ERR_CAST(mmu);

	return msm_gem_vm_create(gpu->dev, mmu, "gpu", ADRENO_VM_START,
-				 adreno_private_vm_size(gpu), true);
+				 adreno_private_vm_size(gpu), kernel_managed);
}

static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
diff --git
a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index b70ed4bc0e0d..efe03f3f42ba 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -508,6 +508,21 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
		if (!capable(CAP_SYS_ADMIN))
			return UERR(EPERM, drm, "invalid permissions");
		return msm_context_set_sysprof(ctx, gpu, value);
+	case MSM_PARAM_EN_VM_BIND:
+		/* We can only support VM_BIND with per-process pgtables: */
+		if (ctx->vm == gpu->vm)
+			return UERR(EINVAL, drm, "requires per-process pgtables");
+
+		/*
+		 * We can only switch to VM_BIND mode if the VM has not yet
+		 * been created:
+		 */
+		if (ctx->vm)
+			return UERR(EBUSY, drm, "VM already created");
+
+		ctx->userspace_managed_vm = value;
+
+		return 0;
	default:
		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
	}
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index ac8a5b072afe..89cb7820064f 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -228,9 +228,21 @@ static void load_gpu(struct drm_device *dev)
 */
struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
{
+	static DEFINE_MUTEX(init_lock);
	struct msm_drm_private *priv = dev->dev_private;
-	if (!ctx->vm)
-		ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
+
+	/* Once ctx->vm is created it is valid for the lifetime of the context: */
+	if (ctx->vm)
+		return ctx->vm;
+
+	mutex_lock(&init_lock);
+	if (!ctx->vm) {
+		ctx->vm = msm_gpu_create_private_vm(
+			priv->gpu, current, !ctx->userspace_managed_vm);
+	}
+	mutex_unlock(&init_lock);
+
	return ctx->vm;
}

@@ -420,6 +432,9 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
	if (!priv->gpu)
		return -EINVAL;

+	if (msm_context_is_vmbind(ctx))
+		return UERR(EINVAL, dev, "VM_BIND is enabled");
+
	if (should_fail(&fail_gem_iova, obj->size))
		return -ENOMEM;

@@ -441,6 +456,9 @@ static int
msm_ioctl_gem_info_set_iova(struct drm_device *dev, if (!priv->gpu) return -EINVAL; + if (msm_context_is_vmbind(ctx)) + return UERR(EINVAL, dev, "VM_BIND is enabled"); + /* Only supported if per-process address space is supported: */ if (priv->gpu->vm == vm) return UERR(EOPNOTSUPP, dev, "requires per-process pgtables"); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index bdcb90a295fc..36b9e9eefc3c 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -64,6 +64,14 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) if (!ctx->vm) return; + /* + * VM_BIND does not depend on implicit teardown of VMAs on handle + * close, but instead on implicit teardown of the VM when the device + * is closed (see msm_gem_vm_close()) + */ + if (msm_context_is_vmbind(ctx)) + return; + /* * TODO we might need to kick this to a queue to avoid blocking * in CLOSE ioctl diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 82e33aa1ccd0..0314e15d04c2 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -831,7 +831,8 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu) /* Return a new address space for a msm_drm_private instance */ struct drm_gpuvm * -msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task, + bool kernel_managed) { struct drm_gpuvm *vm = NULL; @@ -843,7 +844,7 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) * the global one */ if (gpu->funcs->create_private_vm) { - vm = gpu->funcs->create_private_vm(gpu); + vm = gpu->funcs->create_private_vm(gpu, kernel_managed); if (!IS_ERR(vm)) to_msm_vm(vm)->pid = get_pid(task_pid(task)); } diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index d1530de96315..448ebf721bd8 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ 
b/drivers/gpu/drm/msm/msm_gpu.h @@ -79,7 +79,7 @@ struct msm_gpu_funcs { void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp, bool suspended); struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev); - struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu); + struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu, bool kernel_managed); uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); /** @@ -370,6 +370,14 @@ struct msm_context { */ bool closed; + /** + * @userspace_managed_vm: + * + * Has userspace opted-in to userspace managed VM (ie. VM_BIND) via + * MSM_PARAM_EN_VM_BIND? + */ + bool userspace_managed_vm; + /** * @vm: * @@ -462,6 +470,22 @@ struct msm_context { struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx); +/** + * msm_context_is_vmbind() - has userspace opted in to VM_BIND? + * + * @ctx: the drm_file context + * + * See MSM_PARAM_EN_VM_BIND. If userspace is managing the VM, it can + * do sparse binding including having multiple, potentially partial, + * mappings in the VM. Therefore certain legacy uabi (ie. GET_IOVA, + * SET_IOVA) are rejected because they don't have a sensible meaning.
+ */ +static inline bool +msm_context_is_vmbind(struct msm_context *ctx) +{ + return ctx->userspace_managed_vm; +} + /** * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority * @@ -689,7 +713,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, const char *name, struct msm_gpu_config *config); struct drm_gpuvm * -msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task); +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task, + bool kernel_managed); void msm_gpu_cleanup(struct msm_gpu *gpu); diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 5bc5e4526ccf..b974f5a24dbc 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -93,6 +93,30 @@ struct drm_msm_timespec { #define MSM_PARAM_UCHE_TRAP_BASE 0x14 /* RO */ /* PRR (Partially Resident Region) is required for sparse residency: */ #define MSM_PARAM_HAS_PRR 0x15 /* RO */ +/* MSM_PARAM_EN_VM_BIND is set to 1 to enable VM_BIND ops. + * + * With VM_BIND enabled, userspace is required to allocate iova and use the + * VM_BIND ops for map/unmap ioctls. MSM_INFO_SET_IOVA and MSM_INFO_GET_IOVA + * will be rejected. (The latter does not have a sensible meaning when a BO + * can have multiple and/or partial mappings.) + * + * With VM_BIND enabled, userspace does not include a submit_bo table in the + * SUBMIT ioctl (this will be rejected); the resident set is determined by + * the VM_BIND ops. + * + * Enabling VM_BIND will fail on devices which do not have per-process pgtables. + * And it is not allowed to disable VM_BIND once it has been enabled. + * + * Enabling VM_BIND should be done (attempted) prior to allocating any BOs or + * creating submitqueues of type MSM_SUBMITQUEUE_VM_BIND.
+ * + * Relatedly, when VM_BIND mode is enabled, the kernel will not try to recover + * from GPU faults or failed async VM_BIND ops, in particular because it is + * difficult to communicate to userspace which op failed so that userspace + * could rewind and try again. When the VM is marked unusable, the SUBMIT + * ioctl will throw -EPIPE. + */ +#define MSM_PARAM_EN_VM_BIND 0x16 /* WO, once */ /* For backwards compat. The original support for preemption was based on * a single ring per priority level so # of priority levels equals the #
From patchwork Mon May 19 17:57:21 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , Konrad Dybcio , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Sumit Semwal , Christian König , linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b)
Subject: [PATCH v5 24/40] drm/msm: Add _NO_SHARE flag
Date: Mon, 19 May
10:57:21 -0700 Message-ID: <20250519175755.13037-12-robdclark@gmail.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com> Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Rob Clark

Buffers that are not shared between contexts can share a single resv object. This way drm_gpuvm will not track them as external objects, and submit-time validation overhead will be O(1) for all N non-shared BOs, instead of O(N).

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.h | 1 +
 drivers/gpu/drm/msm/msm_gem.c | 23 +++++++++++++++++++++
 drivers/gpu/drm/msm/msm_gem_prime.c | 15 +++++++++++++++
 include/uapi/drm/msm_drm.h | 14 ++++++++++++++
 4 files changed, 53 insertions(+)
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index b77fd2c531c3..b0add236cbb3 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -246,6 +246,7 @@ int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map); void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map); struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg); +struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags); int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 36b9e9eefc3c..65ec99526f82 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -532,6 +532,9 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, msm_gem_assert_locked(obj); + if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE) + return -EINVAL; + vma = get_vma_locked(obj, vm, range_start, range_end);
if (IS_ERR(vma)) return PTR_ERR(vma); @@ -1060,6 +1063,16 @@ static void msm_gem_free_object(struct drm_gem_object *obj) put_pages(obj); } + if (obj->resv != &obj->_resv) { + struct drm_gem_object *r_obj = + container_of(obj->resv, struct drm_gem_object, _resv); + + BUG_ON(!(msm_obj->flags & MSM_BO_NO_SHARE)); + + /* Drop reference we hold to shared resv obj: */ + drm_gem_object_put(r_obj); + } + drm_gem_object_release(obj); kfree(msm_obj->metadata); @@ -1092,6 +1105,15 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, if (name) msm_gem_object_set_name(obj, "%s", name); + if (flags & MSM_BO_NO_SHARE) { + struct msm_context *ctx = file->driver_priv; + struct drm_gem_object *r_obj = drm_gpuvm_resv_obj(ctx->vm); + + drm_gem_object_get(r_obj); + + obj->resv = r_obj->resv; + } + ret = drm_gem_handle_create(file, obj, handle); /* drop reference from allocate - handle holds it now */ @@ -1124,6 +1146,7 @@ static const struct drm_gem_object_funcs msm_gem_object_funcs = { .free = msm_gem_free_object, .open = msm_gem_open, .close = msm_gem_close, + .export = msm_gem_prime_export, .pin = msm_gem_prime_pin, .unpin = msm_gem_prime_unpin, .get_sg_table = msm_gem_prime_get_sg_table, diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c index ee267490c935..1a6d8099196a 100644 --- a/drivers/gpu/drm/msm/msm_gem_prime.c +++ b/drivers/gpu/drm/msm/msm_gem_prime.c @@ -16,6 +16,9 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); int npages = obj->size >> PAGE_SHIFT; + if (msm_obj->flags & MSM_BO_NO_SHARE) + return ERR_PTR(-EINVAL); + if (WARN_ON(!msm_obj->pages)) /* should have already pinned! 
*/ return ERR_PTR(-ENOMEM); @@ -45,6 +48,15 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, return msm_gem_import(dev, attach->dmabuf, sg); } + +struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags) +{ + if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE) + return ERR_PTR(-EPERM); + + return drm_gem_prime_export(obj, flags); +} + int msm_gem_prime_pin(struct drm_gem_object *obj) { struct page **pages; @@ -53,6 +65,9 @@ int msm_gem_prime_pin(struct drm_gem_object *obj) if (obj->import_attach) return 0; + if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE) + return -EINVAL; + pages = msm_gem_pin_pages_locked(obj); if (IS_ERR(pages)) ret = PTR_ERR(pages); diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index b974f5a24dbc..1bccc347945c 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -140,6 +140,19 @@ struct drm_msm_param { #define MSM_BO_SCANOUT 0x00000001 /* scanout capable */ #define MSM_BO_GPU_READONLY 0x00000002 +/* Private buffers do not need to be explicitly listed in the SUBMIT + * ioctl, unless referenced by a drm_msm_gem_submit_cmd. Private + * buffers may NOT be imported/exported or used for scanout (or any + * other situation where buffers can be indefinitely pinned, but + * cases other than scanout are all kernel owned BOs which are not + * visible to userspace). + * + * In exchange for those constraints, all private BOs associated with + * a single context (drm_file) share a single dma_resv, and if there + * has been no eviction since the last submit, there is no per-BO + * bookkeeping to do, significantly cutting the SUBMIT overhead.
+ */ +#define MSM_BO_NO_SHARE 0x00000004 #define MSM_BO_CACHE_MASK 0x000f0000 /* cache modes */ #define MSM_BO_CACHED 0x00010000 @@ -149,6 +162,7 @@ struct drm_msm_param { #define MSM_BO_FLAGS (MSM_BO_SCANOUT | \ MSM_BO_GPU_READONLY | \ + MSM_BO_NO_SHARE | \ MSM_BO_CACHE_MASK) struct drm_msm_gem_new {
From patchwork Mon May 19 17:57:23 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 26/40] drm/msm: rd dumping prep for sparse mappings
Date: Mon, 19 May 2025 10:57:23 -0700
Message-ID: <20250519175755.13037-14-robdclark@gmail.com>
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>
From: Rob Clark

Similar to the previous commit, add support for dumping partial mappings.
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.h | 10 --------- drivers/gpu/drm/msm/msm_rd.c | 38 ++++++++++++++++------------------- 2 files changed, 17 insertions(+), 31 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 67f845213810..f7b85084e228 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -402,14 +402,4 @@ static inline void msm_gem_submit_put(struct msm_gem_submit *submit) void msm_submit_retire(struct msm_gem_submit *submit); -/* helper to determine of a buffer in submit should be dumped, used for both - * devcoredump and debugfs cmdstream dumping: - */ -static inline bool -should_dump(struct msm_gem_submit *submit, int idx) -{ - extern bool rd_full; - return rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP); -} - #endif /* __MSM_GEM_H__ */ diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c index 39138e190cb9..edbcb93410a9 100644 --- a/drivers/gpu/drm/msm/msm_rd.c +++ b/drivers/gpu/drm/msm/msm_rd.c @@ -308,21 +308,11 @@ void msm_rd_debugfs_cleanup(struct msm_drm_private *priv) priv->hangrd = NULL; } -static void snapshot_buf(struct msm_rd_state *rd, - struct msm_gem_submit *submit, int idx, - uint64_t iova, uint32_t size, bool full) +static void snapshot_buf(struct msm_rd_state *rd, struct drm_gem_object *obj, + uint64_t iova, bool full, size_t offset, size_t size) { - struct drm_gem_object *obj = submit->bos[idx].obj; - unsigned offset = 0; const char *buf; - if (iova) { - offset = iova - submit->bos[idx].iova; - } else { - iova = submit->bos[idx].iova; - size = obj->size; - } - /* * Always write the GPUADDR header so can get a complete list of all the * buffers in the cmd @@ -333,10 +323,6 @@ static void snapshot_buf(struct msm_rd_state *rd, if (!full) return; - /* But only dump the contents of buffers marked READ */ - if (!(submit->bos[idx].flags & MSM_SUBMIT_BO_READ)) - return; - buf = msm_gem_get_vaddr_active(obj); if (IS_ERR(buf)) 
return; @@ -352,6 +338,7 @@ static void snapshot_buf(struct msm_rd_state *rd, void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit, const char *fmt, ...) { + extern bool rd_full; struct task_struct *task; char msg[256]; int i, n; @@ -385,16 +372,25 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit, rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4)); - for (i = 0; i < submit->nr_bos; i++) - snapshot_buf(rd, submit, i, 0, 0, should_dump(submit, i)); + for (i = 0; i < submit->nr_bos; i++) { + struct drm_gem_object *obj = submit->bos[i].obj; + bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP); + + snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size); + } for (i = 0; i < submit->nr_cmds; i++) { uint32_t szd = submit->cmd[i].size; /* in dwords */ + int idx = submit->cmd[i].idx; + bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP); /* snapshot cmdstream bo's (if we haven't already): */ - if (!should_dump(submit, i)) { - snapshot_buf(rd, submit, submit->cmd[i].idx, - submit->cmd[i].iova, szd * 4, true); + if (!dump) { + struct drm_gem_object *obj = submit->bos[idx].obj; + size_t offset = submit->cmd[i].iova - submit->bos[idx].iova; + + snapshot_buf(rd, obj, submit->cmd[i].iova, true, + offset, szd * 4); } }
From patchwork Mon May 19 17:57:25 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 28/40] drm/msm: rd dumping support for sparse
Date: Mon, 19 May 2025 10:57:25 -0700
Message-ID: <20250519175755.13037-16-robdclark@gmail.com>
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>
From: Rob Clark

As with devcoredump, we need to iterate the VMAs to figure out what to dump.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_rd.c | 48 +++++++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 15 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c index edbcb93410a9..54493a94dcb7 100644 --- a/drivers/gpu/drm/msm/msm_rd.c +++ b/drivers/gpu/drm/msm/msm_rd.c @@ -372,25 +372,43 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit, rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4)); - for (i = 0; i < submit->nr_bos; i++) { - struct drm_gem_object *obj = submit->bos[i].obj; - bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP); + if (msm_context_is_vmbind(submit->queue->ctx)) { + struct drm_gpuva *vma; - snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size); - } + drm_gpuvm_resv_assert_held(submit->vm); - for (i = 0; i < submit->nr_cmds; i++) { - uint32_t szd = submit->cmd[i].size; /* in dwords */ - int idx = submit->cmd[i].idx; - bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP); + drm_gpuvm_for_each_va (vma, submit->vm) { + bool dump = rd_full || (vma->flags & MSM_VMA_DUMP); + + /* Skip
MAP_NULL/PRR VMAs: */ + if (!vma->gem.obj) + continue; + + snapshot_buf(rd, vma->gem.obj, vma->va.addr, dump, + vma->gem.offset, vma->va.range); + } + + } else { + for (i = 0; i < submit->nr_bos; i++) { + struct drm_gem_object *obj = submit->bos[i].obj; + bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP); + + snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size); + } + + for (i = 0; i < submit->nr_cmds; i++) { + uint32_t szd = submit->cmd[i].size; /* in dwords */ + int idx = submit->cmd[i].idx; + bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP); - /* snapshot cmdstream bo's (if we haven't already): */ - if (!dump) { - struct drm_gem_object *obj = submit->bos[idx].obj; - size_t offset = submit->cmd[i].iova - submit->bos[idx].iova; + /* snapshot cmdstream bo's (if we haven't already): */ + if (!dump) { + struct drm_gem_object *obj = submit->bos[idx].obj; + size_t offset = submit->cmd[i].iova - submit->bos[idx].iova; - snapshot_buf(rd, obj, submit->cmd[i].iova, true, - offset, szd * 4); + snapshot_buf(rd, obj, submit->cmd[i].iova, true, + offset, szd * 4); + } } }
From patchwork Mon May 19 17:57:27 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 891126 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Abhinav Kumar ,
Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v5 30/40] drm/msm: Use DMA_RESV_USAGE_BOOKKEEP/KERNEL Date: Mon, 19 May 2025 10:57:27 -0700 Message-ID: <20250519175755.13037-18-robdclark@gmail.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com> Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Rob Clark Any place we wait for a BO to become idle, we should use BOOKKEEP usage, to ensure that it waits for _any_ activity. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 6 +++--- drivers/gpu/drm/msm/msm_gem_shrinker.c | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 65ec99526f82..cf509ca42da0 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -76,8 +76,8 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) * TODO we might need to kick this to a queue to avoid blocking * in CLOSE ioctl */ - dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false, - msecs_to_jiffies(1000)); + dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_BOOKKEEP, false, + MAX_SCHEDULE_TIMEOUT); msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm); put_iova_spaces(obj, ctx->vm, true); @@ -879,7 +879,7 @@ bool msm_gem_active(struct drm_gem_object *obj) if (to_msm_bo(obj)->pin_count) return true; - return !dma_resv_test_signaled(obj->resv, dma_resv_usage_rw(true)); + return !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_BOOKKEEP); } int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout) diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index 5faf6227584a..1039e3c0a47b 100644 
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -139,7 +139,7 @@ evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket) static bool wait_for_idle(struct drm_gem_object *obj) { - enum dma_resv_usage usage = dma_resv_usage_rw(true); + enum dma_resv_usage usage = DMA_RESV_USAGE_BOOKKEEP; return dma_resv_wait_timeout(obj->resv, usage, false, 10) > 0; }
From patchwork Mon May 19 17:57:29 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 891125 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Sean Paul , Konrad Dybcio , Abhinav Kumar , Dmitry Baryshkov , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v5 32/40] drm/msm: Support IO_PGTABLE_QUIRK_NO_WARN_ON Date: Mon, 19 May 2025 10:57:29 -0700 Message-ID: <20250519175755.13037-20-robdclark@gmail.com> In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com> From: Rob Clark With user managed VMs and multiple queues, it is in theory possible to trigger map/unmap errors.
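The unmap path in this patch records a failure but keeps walking the rest of the range, so a single bad page does not leave the remainder mapped. A standalone sketch of that pattern follows; the `unmap_fn` callback and `PG` page size are illustrative stand-ins, not the kernel's io-pgtable API:

```c
#include <stddef.h>

#define PG 4096 /* illustrative page size */

/* Hypothetical unmap callback: returns bytes unmapped, 0 on failure. */
typedef size_t (*unmap_fn)(size_t off);

/*
 * Walk [0, size) in page steps. On a failed unmap, remember the error
 * but skip forward one page instead of bailing out early, so the rest
 * of the range still gets unmapped -- the same shape as the patch's
 * msm_iommu_pagetable_unmap() loop.
 */
static int unmap_range(size_t size, unmap_fn unmap)
{
    int ret = 0;
    size_t off = 0;

    while (off < size) {
        size_t unmapped = unmap(off);

        if (unmapped == 0) {
            ret = -1;      /* record the failure... */
            unmapped = PG; /* ...but keep going past the bad page */
        }
        off += unmapped;
    }
    return ret;
}
```

The key design point, as the commit message below explains, is that the error is still reported to the caller, only the early `break` is gone.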
These will (in a later patch) mark the VM as unusable. But we want to tell the io-pgtable helpers not to spam the log. In addition, in the unmap path, we don't want to bail early from the unmap, to ensure we don't leave some dangling pages mapped. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +- drivers/gpu/drm/msm/msm_iommu.c | 23 ++++++++++++++++++----- drivers/gpu/drm/msm/msm_mmu.h | 2 +- 3 files changed, 20 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index f0e37733c65d..83fba02ca1df 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -2267,7 +2267,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed) { struct msm_mmu *mmu; - mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu); + mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu, kernel_managed); if (IS_ERR(mmu)) return ERR_CAST(mmu); diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index 756bd55ee94f..237d298d0eeb 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -94,15 +94,24 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova, { struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); struct io_pgtable_ops *ops = pagetable->pgtbl_ops; + int ret = 0; while (size) { - size_t unmapped, pgsize, count; + size_t pgsize, count; + ssize_t unmapped; pgsize = calc_pgsize(pagetable, iova, iova, size, &count); unmapped = ops->unmap_pages(ops, iova, pgsize, count, NULL); - if (!unmapped) - break; + if (unmapped <= 0) { + ret = -EINVAL; + /* + * Continue attempting to unmap the remainder of the + * range, so we don't end up with some dangling + * mapped pages + */ + unmapped = PAGE_SIZE; + } iova += unmapped; size -= unmapped; @@ -110,7 +119,7 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
iommu_flush_iotlb_all(to_msm_iommu(pagetable->parent)->domain); - return (size == 0) ? 0 : -EINVAL; + return ret; } static int msm_iommu_pagetable_map_prr(struct msm_mmu *mmu, u64 iova, size_t len, int prot) @@ -324,7 +333,7 @@ static const struct iommu_flush_ops tlb_ops = { static int msm_gpu_fault_handler(struct iommu_domain *domain, struct device *dev, unsigned long iova, int flags, void *arg); -struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent) +struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed) { struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(parent->dev); struct msm_iommu *iommu = to_msm_iommu(parent); @@ -358,6 +367,10 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent) ttbr0_cfg.quirks &= ~IO_PGTABLE_QUIRK_ARM_TTBR1; ttbr0_cfg.tlb = &tlb_ops; + if (!kernel_managed) { + ttbr0_cfg.quirks |= IO_PGTABLE_QUIRK_NO_WARN_ON; + } + pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1, &ttbr0_cfg, pagetable); diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h index c874852b7331..c70c71fb1a4a 100644 --- a/drivers/gpu/drm/msm/msm_mmu.h +++ b/drivers/gpu/drm/msm/msm_mmu.h @@ -52,7 +52,7 @@ static inline void msm_mmu_set_fault_handler(struct msm_mmu *mmu, void *arg, mmu->handler = handler; } -struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent); +struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed); int msm_iommu_pagetable_params(struct msm_mmu *mmu, phys_addr_t *ttbr, int *asid);
From patchwork Mon May 19 17:57:31 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 891124 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v5 34/40] drm/msm: Split out map/unmap ops Date: Mon, 19 May 2025 10:57:31 -0700 Message-ID: <20250519175755.13037-22-robdclark@gmail.com> In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com> From: Rob Clark With async VM_BIND, the actual pgtable updates are deferred: a list of map/unmap ops is generated synchronously, but the pgtable changes themselves are applied later. To support that, split out op handlers and change the existing non-VM_BIND paths to use them. Note in particular that the vma itself may already be destroyed/freed by the time an UNMAP op runs (or even a MAP op, if there is a later queued UNMAP). For this reason, the op handlers cannot reference the vma pointer. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem_vma.c | 63 +++++++++++++++++++++++++++---- 1 file changed, 56 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 73baa9451ada..a105aed82cae 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -8,6 +8,34 @@ #include "msm_gem.h" #include "msm_mmu.h" +#define vm_dbg(fmt, ...)
pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__) + +/** + * struct msm_vm_map_op - create new pgtable mapping + */ +struct msm_vm_map_op { + /** @iova: start address for mapping */ + uint64_t iova; + /** @range: size of the region to map */ + uint64_t range; + /** @offset: offset into @sgt to map */ + uint64_t offset; + /** @sgt: pages to map, or NULL for a PRR mapping */ + struct sg_table *sgt; + /** @prot: the mapping protection flags */ + int prot; +}; + +/** + * struct msm_vm_unmap_op - unmap a range of pages from pgtable + */ +struct msm_vm_unmap_op { + /** @iova: start address for unmap */ + uint64_t iova; + /** @range: size of region to unmap */ + uint64_t range; +}; + static void msm_gem_vm_free(struct drm_gpuvm *gpuvm) { @@ -21,18 +49,36 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) kfree(vm); } +static void +vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) +{ + vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); + + vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range); +} + +static int +vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) +{ + vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); + + return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset, + op->range, op->prot); +} + /* Actually unmap memory for the vma */ void msm_gem_vma_unmap(struct drm_gpuva *vma) { struct msm_gem_vma *msm_vma = to_msm_vma(vma); - struct msm_gem_vm *vm = to_msm_vm(vma->vm); - unsigned size = vma->va.range; /* Don't do anything if the memory isn't mapped */ if (!msm_vma->mapped) return; - vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size); + vm_unmap_op(to_msm_vm(vma->vm), &(struct msm_vm_unmap_op){ + .iova = vma->va.addr, + .range = vma->va.range, + }); msm_vma->mapped = false; } @@ -42,7 +88,6 @@ int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) { struct msm_gem_vma *msm_vma = to_msm_vma(vma); - struct msm_gem_vm *vm = to_msm_vm(vma->vm); int ret; if 
(GEM_WARN_ON(!vma->va.addr)) @@ -62,9 +107,13 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, - vma->gem.offset, vma->va.range, - prot); + ret = vm_map_op(to_msm_vm(vma->vm), &(struct msm_vm_map_op){ + .iova = vma->va.addr, + .range = vma->va.range, + .offset = vma->gem.offset, + .sgt = sgt, + .prot = prot, + }); if (ret) { msm_vma->mapped = false; }
From patchwork Mon May 19 17:57:33 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 891123 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Sean Paul , Konrad Dybcio , Abhinav Kumar , Dmitry Baryshkov , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v5 36/40] drm/msm: Add VM logging for VM_BIND updates Date: Mon, 19 May 2025 10:57:33 -0700 Message-ID: <20250519175755.13037-24-robdclark@gmail.com> In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com> From: Rob Clark When userspace
opts in to VM_BIND, the submit no longer holds references keeping the VMA alive. This makes it difficult to distinguish between UMD/KMD/app bugs. So add a debug option for logging the most recent VM updates and capturing these in GPU devcoredumps. The submitqueue id is also captured; a value of zero means the operation did not go via a submitqueue (ie. it comes from msm_gem_vm_close() tearing down the remaining mappings when the device file is closed). Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 11 +++ drivers/gpu/drm/msm/msm_gem.h | 24 +++++ drivers/gpu/drm/msm/msm_gem_vma.c | 124 ++++++++++++++++++++++-- drivers/gpu/drm/msm/msm_gpu.c | 52 +++++++++- drivers/gpu/drm/msm/msm_gpu.h | 4 + 5 files changed, 202 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index efe03f3f42ba..12b42ae2688c 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -837,6 +837,7 @@ void adreno_gpu_state_destroy(struct msm_gpu_state *state) for (i = 0; state->bos && i < state->nr_bos; i++) kvfree(state->bos[i].data); + kfree(state->vm_logs); kfree(state->bos); kfree(state->comm); kfree(state->cmd); @@ -977,6 +978,16 @@ void adreno_show(struct msm_gpu *gpu, struct msm_gpu_state *state, info->ptes[0], info->ptes[1], info->ptes[2], info->ptes[3]); } + if (state->vm_logs) { + drm_puts(p, "vm-log:\n"); + for (i = 0; i < state->nr_vm_logs; i++) { + struct msm_gem_vm_log_entry *e = &state->vm_logs[i]; + drm_printf(p, " - %s:%d: 0x%016llx-0x%016llx\n", + e->op, e->queue_id, e->iova, + e->iova + e->range); + } + } + drm_printf(p, "rbbm-status: 0x%08x\n", state->rbbm_status); drm_puts(p, "ringbuffer:\n"); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index bfeb0f584ae5..4dc9b72b9193 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -24,6 +24,20 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use
stolen/splash memory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */ +/** + * struct msm_gem_vm_log_entry - An entry in the VM log + * + * For userspace managed VMs, a log of recent VM updates is tracked and + * captured in GPU devcore dumps, to aid debugging issues caused by (for + * example) incorrectly synchronized VM updates + */ +struct msm_gem_vm_log_entry { + const char *op; + uint64_t iova; + uint64_t range; + int queue_id; +}; + /** * struct msm_gem_vm - VM object * @@ -85,6 +99,15 @@ struct msm_gem_vm { /** @last_fence: Fence for last pending work scheduled on the VM */ struct dma_fence *last_fence; + /** @log: A log of recent VM updates */ + struct msm_gem_vm_log_entry *log; + + /** @log_shift: length of @log is (1 << @log_shift) */ + uint32_t log_shift; + + /** @log_idx: index of next @log entry to write */ + uint32_t log_idx; + /** @faults: the number of GPU hangs associated with this address space */ int faults; @@ -115,6 +138,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed); void msm_gem_vm_close(struct drm_gpuvm *gpuvm); +void msm_gem_vm_unusable(struct drm_gpuvm *gpuvm); struct msm_fence_context; diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index fe41b7a042c3..d349025924b4 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -17,6 +17,10 @@ #define vm_dbg(fmt, ...) 
pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__) +static uint vm_log_shift = 0; +MODULE_PARM_DESC(vm_log_shift, "Length of VM op log"); +module_param_named(vm_log_shift, vm_log_shift, uint, 0600); + /** * struct msm_vm_map_op - create new pgtable mapping */ @@ -31,6 +35,13 @@ struct msm_vm_map_op { struct sg_table *sgt; /** @prot: the mapping protection flags */ int prot; + + /** + * @queue_id: The id of the submitqueue the operation is performed + * on, or zero for (in particular) UNMAP ops triggered outside of + * a submitqueue (ie. process cleanup) + */ + int queue_id; }; /** @@ -41,6 +52,13 @@ struct msm_vm_unmap_op { uint64_t iova; /** @range: size of region to unmap */ uint64_t range; + + /** + * @queue_id: The id of the submitqueue the operation is performed + * on, or zero for (in particular) UNMAP ops triggered outside of + * a submitqueue (ie. process cleanup) + */ + int queue_id; }; /** @@ -144,16 +162,87 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) vm->mmu->funcs->destroy(vm->mmu); dma_fence_put(vm->last_fence); put_pid(vm->pid); + kfree(vm->log); kfree(vm); } +/** + * msm_gem_vm_unusable() - Mark a VM as unusable + * @vm: the VM to mark unusable + */ +void +msm_gem_vm_unusable(struct drm_gpuvm *gpuvm) +{ + struct msm_gem_vm *vm = to_msm_vm(gpuvm); + uint32_t vm_log_len = (1 << vm->log_shift); + uint32_t vm_log_mask = vm_log_len - 1; + uint32_t nr_vm_logs; + int first; + + vm->unusable = true; + + /* Bail if no log, or empty log: */ + if (!vm->log || !vm->log[0].op) + return; + + mutex_lock(&vm->mmu_lock); + + /* + * log_idx is the next entry to overwrite, meaning it is the oldest, or + * first, entry (other than the special case handled below where the + * log hasn't wrapped around yet) + */ + first = vm->log_idx; + + if (!vm->log[first].op) { + /* + * If the next log entry has not been written yet, then only + * entries 0 to idx-1 are valid (ie. 
we haven't wrapped around + * yet) + */ + nr_vm_logs = MAX(0, first - 1); + first = 0; + } else { + nr_vm_logs = vm_log_len; + } + + pr_err("vm-log:\n"); + for (int i = 0; i < nr_vm_logs; i++) { + int idx = (i + first) & vm_log_mask; + struct msm_gem_vm_log_entry *e = &vm->log[idx]; + pr_err(" - %s:%d: 0x%016llx-0x%016llx\n", + e->op, e->queue_id, e->iova, + e->iova + e->range); + } + + mutex_unlock(&vm->mmu_lock); +} + static void -vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) +vm_log(struct msm_gem_vm *vm, const char *op, uint64_t iova, uint64_t range, int queue_id) { + int idx; + if (!vm->managed) lockdep_assert_held(&vm->mmu_lock); - vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); + vm_dbg("%s:%p:%d: %016llx %016llx", op, vm, queue_id, iova, iova + range); + + if (!vm->log) + return; + + idx = vm->log_idx; + vm->log[idx].op = op; + vm->log[idx].iova = iova; + vm->log[idx].range = range; + vm->log[idx].queue_id = queue_id; + vm->log_idx = (vm->log_idx + 1) & ((1 << vm->log_shift) - 1); +} + +static void +vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) +{ + vm_log(vm, "unmap", op->iova, op->range, op->queue_id); vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range); } @@ -161,10 +250,7 @@ vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) static int vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) { - if (!vm->managed) - lockdep_assert_held(&vm->mmu_lock); - - vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); + vm_log(vm, "map", op->iova, op->range, op->queue_id); return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset, op->range, op->prot); @@ -382,6 +468,7 @@ vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map *op) static int msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg) { + struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job; struct drm_gem_object *obj = op->map.gem.obj; struct drm_gpuva *vma; struct sg_table *sgt; @@ 
-412,6 +499,7 @@ msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg) .range = vma->va.range, .offset = vma->gem.offset, .prot = prot, + .queue_id = job->queue->id, }, .obj = vma->gem.obj, }); @@ -445,6 +533,7 @@ msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg) .unmap = { .iova = unmap_start, .range = unmap_range, + .queue_id = job->queue->id, }, .obj = orig_vma->gem.obj, }); @@ -506,6 +595,7 @@ msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg) static int msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg) { + struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job; struct drm_gpuva *vma = op->unmap.va; struct msm_gem_vma *msm_vma = to_msm_vma(vma); @@ -520,6 +610,7 @@ msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg) .unmap = { .iova = vma->va.addr, .range = vma->va.range, + .queue_id = job->queue->id, }, .obj = vma->gem.obj, }); @@ -584,7 +675,7 @@ msm_vma_job_run(struct drm_sched_job *_job) * now the VM is in an undefined state. Game over! */ if (ret) - vm->unusable = true; + msm_gem_vm_unusable(job->vm); job_foreach_bo (obj, job) { msm_gem_lock(obj); @@ -697,6 +788,23 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, drm_mm_init(&vm->mm, va_start, va_size); + /* + * We don't really need vm log for kernel managed VMs, as the kernel + * is responsible for ensuring that GEM objs are mapped if they are + * used by a submit. Furthermore we piggyback on mmu_lock to serialize + * access to the log. + * + * Limit the max log_shift to 8 to prevent userspace from asking us + * for an unreasonable log size. + */ + if (!managed) + vm->log_shift = MIN(vm_log_shift, 8); + + if (vm->log_shift) { + vm->log = kmalloc_array(1 << vm->log_shift, sizeof(vm->log[0]), + GFP_KERNEL | __GFP_ZERO); + } + return &vm->base; err_free_dummy: @@ -1143,7 +1251,7 @@ vm_bind_job_prepare(struct msm_vm_bind_job *job) * state the vm is in. So throw up our hands! 
*/ if (i > 0) - vm->unusable = true; + msm_gem_vm_unusable(job->vm); return ret; } } diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index b70355fc8570..210e756cb563 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -259,9 +259,6 @@ static void crashstate_get_bos(struct msm_gpu_state *state, struct msm_gem_submi { extern bool rd_full; - if (!submit) - return; - if (msm_context_is_vmbind(submit->queue->ctx)) { struct drm_exec exec; struct drm_gpuva *vma; @@ -318,6 +315,48 @@ static void crashstate_get_bos(struct msm_gpu_state *state, struct msm_gem_submi } } +static void crashstate_get_vm_logs(struct msm_gpu_state *state, struct msm_gem_vm *vm) +{ + uint32_t vm_log_len = (1 << vm->log_shift); + uint32_t vm_log_mask = vm_log_len - 1; + int first; + + /* Bail if no log, or empty log: */ + if (!vm->log || !vm->log[0].op) + return; + + mutex_lock(&vm->mmu_lock); + + /* + * log_idx is the next entry to overwrite, meaning it is the oldest, or + * first, entry (other than the special case handled below where the + * log hasn't wrapped around yet) + */ + first = vm->log_idx; + + if (!vm->log[first].op) { + /* + * If the next log entry has not been written yet, then only + * entries 0 to idx-1 are valid (ie. 
we haven't wrapped around + * yet) + */ + state->nr_vm_logs = MAX(0, first - 1); + first = 0; + } else { + state->nr_vm_logs = vm_log_len; + } + + state->vm_logs = kmalloc_array( + state->nr_vm_logs, sizeof(vm->log[0]), GFP_KERNEL); + for (int i = 0; i < state->nr_vm_logs; i++) { + int idx = (i + first) & vm_log_mask; + + state->vm_logs[i] = vm->log[idx]; + } + + mutex_unlock(&vm->mmu_lock); +} + static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, struct msm_gem_submit *submit, char *comm, char *cmd) { @@ -349,7 +388,10 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, msm_iommu_pagetable_walk(mmu, info->iova, info->ptes); } - crashstate_get_bos(state, submit); + if (submit) { + crashstate_get_vm_logs(state, to_msm_vm(submit->vm)); + crashstate_get_bos(state, submit); + } /* Set the active crash state to be dumped on failure */ gpu->crashstate = state; @@ -449,7 +491,7 @@ static void recover_worker(struct kthread_work *work) * VM_BIND) */ if (!vm->managed) - vm->unusable = true; + msm_gem_vm_unusable(submit->vm); } get_comm_cmdline(submit, &comm, &cmd); diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 9cbf155ff222..31b83e9e3673 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -20,6 +20,7 @@ #include "msm_gem.h" struct msm_gem_submit; +struct msm_gem_vm_log_entry; struct msm_gpu_perfcntr; struct msm_gpu_state; struct msm_context; @@ -609,6 +610,9 @@ struct msm_gpu_state { struct msm_gpu_fault_info fault_info; + int nr_vm_logs; + struct msm_gem_vm_log_entry *vm_logs; + int nr_bos; struct msm_gpu_state_bo *bos; }; From patchwork Mon May 19 17:57:35 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 891122 Received: from mail-pf1-f179.google.com (mail-pf1-f179.google.com [209.85.210.179]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) 
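As an aside on patch 36 above: the power-of-two ring buffer written by vm_log(), and the "has the log wrapped yet?" detection used when dumping it, can be modelled in isolation. The structure and helpers below are a hypothetical, self-contained sketch (with a fixed shift of 3, i.e. 8 entries), not the kernel code itself:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define LOG_SHIFT 3u                      /* log length is (1 << LOG_SHIFT) */
#define LOG_LEN   (1u << LOG_SHIFT)
#define LOG_MASK  (LOG_LEN - 1u)

/* Hypothetical stand-in for the per-VM log: op == NULL marks a slot that
 * has never been written, idx is the next slot to overwrite (the oldest
 * entry once the log has wrapped). */
struct vm_log {
	const char *op[LOG_LEN];
	uint32_t idx;
};

static void vm_log_push(struct vm_log *l, const char *op)
{
	l->op[l->idx] = op;
	/* same power-of-two masking as the patch's vm_log() */
	l->idx = (l->idx + 1u) & LOG_MASK;
}

/* Returns the number of valid entries and sets *first to the oldest one,
 * mirroring the wrap-around check done before dumping the log. */
static uint32_t vm_log_snapshot(const struct vm_log *l, uint32_t *first)
{
	*first = l->idx;
	if (!l->op[*first]) {          /* next slot never written: no wrap yet */
		uint32_t n = *first;   /* only entries 0..idx-1 are valid */
		*first = 0;
		return n;
	}
	return LOG_LEN;                /* wrapped: all valid, oldest at idx */
}
```

Dumping then walks `i = 0..n-1` reading entry `(first + i) & LOG_MASK`, which is the same iteration order used by msm_gem_vm_unusable() and crashstate_get_vm_logs() in the patch.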
by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9DD3C28EA62; Mon, 19 May 2025 17:58:52 +0000 (UTC)
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 38/40] drm/msm: Add mmu prealloc tracepoint
Date: Mon, 19 May 2025 10:57:35 -0700
Message-ID: <20250519175755.13037-26-robdclark@gmail.com>
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

So we can monitor how many pages are getting preallocated vs how many get used.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gpu_trace.h | 14 ++++++++++++++
 drivers/gpu/drm/msm/msm_iommu.c     |  4 ++++
 2 files changed, 18 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_gpu_trace.h b/drivers/gpu/drm/msm/msm_gpu_trace.h
index 7f863282db0d..781bbe5540bd 100644
--- a/drivers/gpu/drm/msm/msm_gpu_trace.h
+++ b/drivers/gpu/drm/msm/msm_gpu_trace.h
@@ -205,6 +205,20 @@ TRACE_EVENT(msm_gpu_preemption_irq,
 		TP_printk("preempted to %u", __entry->ring_id)
 );
 
+TRACE_EVENT(msm_mmu_prealloc_cleanup,
+		TP_PROTO(u32 count, u32 remaining),
+		TP_ARGS(count, remaining),
+		TP_STRUCT__entry(
+			__field(u32, count)
+			__field(u32, remaining)
+			),
+		TP_fast_assign(
+			__entry->count = count;
+			__entry->remaining = remaining;
+			),
+		TP_printk("count=%u, remaining=%u", __entry->count, __entry->remaining)
+);
+
 #endif
 
 #undef TRACE_INCLUDE_PATH
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index d04837461c3d..b5d019093380 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include "msm_drv.h"
+#include "msm_gpu_trace.h"
 #include "msm_mmu.h"
 
 struct msm_iommu {
@@ -346,6 +347,9 @@ msm_iommu_pagetable_prealloc_cleanup(struct msm_mmu *mmu, struct msm_mmu_preallo
 	struct kmem_cache *pt_cache = get_pt_cache(mmu);
 	uint32_t remaining_pt_count = p->count - p->ptr;
 
+	if (p->count > 0)
+		trace_msm_mmu_prealloc_cleanup(p->count, remaining_pt_count);
+
 	kmem_cache_free_bulk(pt_cache, remaining_pt_count, &p->pages[p->ptr]);
 	kvfree(p->pages);
 }

From patchwork Mon May 19 17:57:37 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891121
Received: from mail-pf1-f172.google.com (mail-pf1-f172.google.com [209.85.210.172]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9FE2028ECED; Mon, 19
May 2025 17:58:55 +0000 (UTC)
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 40/40] drm/msm: Bump UAPI version
Date: Mon, 19 May 2025 10:57:37 -0700
Message-ID: <20250519175755.13037-28-robdclark@gmail.com>
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

Bump version to signal to userspace that VM_BIND is supported.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index bdf775897de8..710046906229 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -41,9 +41,10 @@
  * - 1.10.0 - Add MSM_SUBMIT_BO_NO_IMPLICIT
  * - 1.11.0 - Add wait boost (MSM_WAIT_FENCE_BOOST, MSM_PREP_BOOST)
  * - 1.12.0 - Add MSM_INFO_SET_METADATA and MSM_INFO_GET_METADATA
+ * - 1.13.0 - Add VM_BIND
  */
 #define MSM_VERSION_MAJOR	1
-#define MSM_VERSION_MINOR	12
+#define MSM_VERSION_MINOR	13
 #define MSM_VERSION_PATCHLEVEL	0
 
 bool dumpstate;
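Since the kernel advertises VM_BIND support purely via the bumped minor version, userspace would typically gate its use of the feature on the version it reads back from the driver (e.g. via DRM_IOCTL_VERSION). The helper below is a hypothetical illustration of that check, not code from the series:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical userspace-side gate: per the version-history comment in
 * msm_drv.c, VM_BIND is available from driver version 1.13.0 onward. */
static bool msm_has_vm_bind(int version_major, int version_minor)
{
	return version_major > 1 ||
	       (version_major == 1 && version_minor >= 13);
}
```

A Mesa-style UMD would call this once at screen creation and fall back to the legacy kernel-managed-VM path when it returns false.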