From patchwork Wed May 14 16:59:00 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 889944
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Maarten Lankhorst, Maxime Ripard,
 Thomas Zimmermann, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 01/40] drm/gpuvm: Don't require obj lock in destructor path
Date: Wed, 14 May 2025 09:59:00 -0700
Message-ID: <20250514170118.40555-2-robdclark@gmail.com>
In-Reply-To: <20250514170118.40555-1-robdclark@gmail.com>
References: <20250514170118.40555-1-robdclark@gmail.com>

From: Rob Clark

See commit a414fe3a2129 ("drm/msm/gem: Drop obj lock in
msm_gem_free_object()") for justification.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gpuvm.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index f9eb56f24bef..1e89a98caad4 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1511,7 +1511,9 @@ drm_gpuvm_bo_destroy(struct kref *kref)
 	drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
 	drm_gpuvm_bo_list_del(vm_bo, evict, lock);
 
-	drm_gem_gpuva_assert_lock_held(obj);
+	if (kref_read(&obj->refcount) > 0)
+		drm_gem_gpuva_assert_lock_held(obj);
+
 	list_del(&vm_bo->list.entry.gem);
 
 	if (ops && ops->vm_bo_free)
@@ -1871,7 +1873,8 @@ drm_gpuva_unlink(struct drm_gpuva *va)
 	if (unlikely(!obj))
 		return;
 
-	drm_gem_gpuva_assert_lock_held(obj);
+	if (kref_read(&obj->refcount) > 0)
+		drm_gem_gpuva_assert_lock_held(obj);
 
 	list_del_init(&va->gem.entry);
 	va->vm_bo = NULL;

From patchwork Wed May 14 16:59:02 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 889943
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Maarten Lankhorst, Maxime Ripard,
 Thomas Zimmermann, David Airlie, Simona Vetter, Rob Clark,
 Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 03/40] drm/gem: Add ww_acquire_ctx support
 to drm_gem_lru_scan()
Date: Wed, 14 May 2025 09:59:02 -0700
Message-ID: <20250514170118.40555-4-robdclark@gmail.com>
In-Reply-To: <20250514170118.40555-1-robdclark@gmail.com>
References: <20250514170118.40555-1-robdclark@gmail.com>

From: Rob Clark

If the callback is going to have to attempt to grab more locks, it is
useful to have an ww_acquire_ctx to avoid locking order problems.

Why not use the drm_exec helper instead?  Mainly because (a) where
ww_acquire_init() is called is awkward, and (b) we don't really need
to retry after backoff, we can just move on to the next object.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gem.c              | 14 +++++++++++---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 24 +++++++++++++-----------
 include/drm/drm_gem.h                  | 10 ++++++----
 3 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index ee811764c3df..9e3db9a864f8 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1460,12 +1460,14 @@ EXPORT_SYMBOL(drm_gem_lru_move_tail);
  * @nr_to_scan: The number of pages to try to reclaim
  * @remaining: The number of pages left to reclaim, should be initialized by caller
  * @shrink: Callback to try to shrink/reclaim the object.
+ * @ticket: Optional ww_acquire_ctx context to use for locking
  */
 unsigned long
 drm_gem_lru_scan(struct drm_gem_lru *lru,
 		 unsigned int nr_to_scan,
 		 unsigned long *remaining,
-		 bool (*shrink)(struct drm_gem_object *obj))
+		 bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket),
+		 struct ww_acquire_ctx *ticket)
 {
 	struct drm_gem_lru still_in_lru;
 	struct drm_gem_object *obj;
@@ -1498,17 +1500,20 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
 		 */
 		mutex_unlock(lru->lock);
 
+		if (ticket)
+			ww_acquire_init(ticket, &reservation_ww_class);
+
 		/*
 		 * Note that this still needs to be trylock, since we can
 		 * hit shrinker in response to trying to get backing pages
 		 * for this obj (ie. while it's lock is already held)
 		 */
-		if (!dma_resv_trylock(obj->resv)) {
+		if (!ww_mutex_trylock(&obj->resv->lock, ticket)) {
 			*remaining += obj->size >> PAGE_SHIFT;
 			goto tail;
 		}
 
-		if (shrink(obj)) {
+		if (shrink(obj, ticket)) {
 			freed += obj->size >> PAGE_SHIFT;
 
 			/*
@@ -1522,6 +1527,9 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
 		 */
 
 		dma_resv_unlock(obj->resv);
 
+		if (ticket)
+			ww_acquire_fini(ticket);
+
 tail:
 		drm_gem_object_put(obj);
 		mutex_lock(lru->lock);
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 07ca4ddfe4e3..de185fc34084 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -44,7 +44,7 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 }
 
 static bool
-purge(struct drm_gem_object *obj)
+purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!is_purgeable(to_msm_bo(obj)))
 		return false;
@@ -58,7 +58,7 @@ purge(struct drm_gem_object *obj)
 }
 
 static bool
-evict(struct drm_gem_object *obj)
+evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (is_unevictable(to_msm_bo(obj)))
 		return false;
@@ -79,21 +79,21 @@ wait_for_idle(struct drm_gem_object *obj)
 }
 
 static bool
-active_purge(struct drm_gem_object *obj)
+active_purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!wait_for_idle(obj))
 		return false;
 
-	return purge(obj);
+	return purge(obj, ticket);
 }
 
 static bool
-active_evict(struct drm_gem_object *obj)
+active_evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!wait_for_idle(obj))
 		return false;
 
-	return evict(obj);
+	return evict(obj, ticket);
 }
 
 static unsigned long
@@ -102,7 +102,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 	struct msm_drm_private *priv = shrinker->private_data;
 	struct {
 		struct drm_gem_lru *lru;
-		bool (*shrink)(struct drm_gem_object *obj);
+		bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket);
 		bool cond;
 		unsigned long freed;
 		unsigned long remaining;
@@ -122,8 +122,9 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 			continue;
 
 		stages[i].freed =
 			drm_gem_lru_scan(stages[i].lru, nr,
-					 &stages[i].remaining,
-					 stages[i].shrink);
+					 &stages[i].remaining,
+					 stages[i].shrink,
+					 NULL);
 		nr -= stages[i].freed;
 		freed += stages[i].freed;
 		remaining += stages[i].remaining;
@@ -164,7 +165,7 @@ msm_gem_shrinker_shrink(struct drm_device *dev, unsigned long nr_to_scan)
 static const int vmap_shrink_limit = 15;
 
 static bool
-vmap_shrink(struct drm_gem_object *obj)
+vmap_shrink(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!is_vunmapable(to_msm_bo(obj)))
 		return false;
@@ -192,7 +193,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 		unmapped += drm_gem_lru_scan(lrus[idx],
 					     vmap_shrink_limit - unmapped,
 					     &remaining,
-					     vmap_shrink);
+					     vmap_shrink,
+					     NULL);
 	}
 
 	*(unsigned long *)ptr += unmapped;
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index fdae947682cd..0e2c476df731 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -555,10 +555,12 @@ void drm_gem_lru_init(struct drm_gem_lru *lru, struct mutex *lock);
 void drm_gem_lru_remove(struct drm_gem_object *obj);
 void drm_gem_lru_move_tail_locked(struct drm_gem_lru *lru, struct drm_gem_object *obj);
 void drm_gem_lru_move_tail(struct drm_gem_lru *lru, struct drm_gem_object *obj);
-unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
-			       unsigned int nr_to_scan,
-			       unsigned long *remaining,
-			       bool (*shrink)(struct drm_gem_object *obj));
+unsigned long
+drm_gem_lru_scan(struct drm_gem_lru *lru,
+		 unsigned int nr_to_scan,
+		 unsigned long *remaining,
+		 bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket),
+		 struct ww_acquire_ctx *ticket);
 
 int drm_gem_evict(struct drm_gem_object *obj);

From patchwork Wed May 14 16:59:04 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 889942
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Robin Murphy, Will Deacon, Joerg Roedel,
 Jason Gunthorpe, Kevin Tian, Nicolin Chen, Joao Martins,
 linux-arm-kernel@lists.infradead.org (moderated list:ARM SMMU DRIVERS),
 iommu@lists.linux.dev (open list:IOMMU SUBSYSTEM),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 05/40] iommu/io-pgtable-arm: Add quirk to quiet WARN_ON()
Date: Wed, 14 May 2025 09:59:04 -0700
Message-ID: <20250514170118.40555-6-robdclark@gmail.com>
In-Reply-To: <20250514170118.40555-1-robdclark@gmail.com>
References: <20250514170118.40555-1-robdclark@gmail.com>

From: Rob Clark

In situations where mapping/unmapping sequence can be controlled by
userspace, attempting to map over a region that has not yet been
unmapped is an error.  But not something that should spam dmesg.

Now that there is a quirk, we can also drop the selftest_running
flag, and use the quirk instead for selftests.

Signed-off-by: Rob Clark
Acked-by: Robin Murphy
Signed-off-by: Rob Clark
---
 drivers/iommu/io-pgtable-arm.c | 27 ++++++++++++++-------------
 include/linux/io-pgtable.h     |  8 ++++++++
 2 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index f27965caf6a1..a535d88f8943 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -253,8 +253,6 @@ static inline bool arm_lpae_concat_mandatory(struct io_pgtable_cfg *cfg,
 	       (data->start_level == 1) && (oas == 40);
 }
 
-static bool selftest_running = false;
-
 static dma_addr_t __arm_lpae_dma_addr(void *pages)
 {
 	return (dma_addr_t)virt_to_phys(pages);
@@ -373,7 +371,7 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
 	for (i = 0; i < num_entries; i++)
 		if (iopte_leaf(ptep[i], lvl, data->iop.fmt)) {
 			/* We require an unmap first */
-			WARN_ON(!selftest_running);
+			WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 			return -EEXIST;
 		} else if (iopte_type(ptep[i]) == ARM_LPAE_PTE_TYPE_TABLE) {
 			/*
@@ -475,7 +473,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
 		cptep = iopte_deref(pte, data);
 	} else if (pte) {
 		/* We require an unmap first */
-		WARN_ON(!selftest_running);
+		WARN_ON(!(cfg->quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 		return -EEXIST;
 	}
 
@@ -649,8 +647,10 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 	unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
 	ptep += unmap_idx_start;
 	pte = READ_ONCE(*ptep);
-	if (WARN_ON(!pte))
-		return 0;
+	if (!pte) {
+		WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
+		return -ENOENT;
+	}
 
 	/* If the size matches this level, we're in the right place */
 	if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) {
@@ -660,8 +660,10 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 		/* Find and handle non-leaf entries */
 		for (i = 0; i < num_entries; i++) {
 			pte = READ_ONCE(ptep[i]);
-			if (WARN_ON(!pte))
+			if (!pte) {
+				WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 				break;
+			}
 
 			if (!iopte_leaf(pte, lvl, iop->fmt)) {
 				__arm_lpae_clear_pte(&ptep[i], &iop->cfg, 1);
@@ -976,7 +978,8 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS |
 			    IO_PGTABLE_QUIRK_ARM_TTBR1 |
 			    IO_PGTABLE_QUIRK_ARM_OUTER_WBWA |
-			    IO_PGTABLE_QUIRK_ARM_HD))
+			    IO_PGTABLE_QUIRK_ARM_HD |
+			    IO_PGTABLE_QUIRK_NO_WARN_ON))
 		return NULL;
 
 	data = arm_lpae_alloc_pgtable(cfg);
@@ -1079,7 +1082,8 @@ arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 	struct arm_lpae_io_pgtable *data;
 	typeof(&cfg->arm_lpae_s2_cfg.vtcr) vtcr = &cfg->arm_lpae_s2_cfg.vtcr;
 
-	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_S2FWB))
+	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_S2FWB |
+			    IO_PGTABLE_QUIRK_NO_WARN_ON))
 		return NULL;
 
 	data = arm_lpae_alloc_pgtable(cfg);
@@ -1320,7 +1324,6 @@ static void __init arm_lpae_dump_ops(struct io_pgtable_ops *ops)
 #define __FAIL(ops, i)	({						\
 		WARN(1, "selftest: test failed for fmt idx %d\n", (i));	\
 		arm_lpae_dump_ops(ops);					\
-		selftest_running = false;				\
 		-EFAULT;						\
 })
 
@@ -1336,8 +1339,6 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 	size_t size, mapped;
 	struct io_pgtable_ops *ops;
 
-	selftest_running = true;
-
 	for (i = 0; i < ARRAY_SIZE(fmts); ++i) {
 		cfg_cookie = cfg;
 		ops = alloc_io_pgtable_ops(fmts[i], cfg, cfg);
@@ -1426,7 +1427,6 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 		free_io_pgtable_ops(ops);
 	}
 
-	selftest_running = false;
 	return 0;
 }
 
@@ -1448,6 +1448,7 @@ static int __init arm_lpae_do_selftests(void)
 		.tlb = &dummy_tlb_ops,
 		.coherent_walk = true,
 		.iommu_dev = &dev,
+		.quirks = IO_PGTABLE_QUIRK_NO_WARN_ON,
 	};
 
 	/* __arm_lpae_alloc_pages() merely needs dev_to_node() to work */
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index bba2a51c87d2..639b8f4fb87d 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -88,6 +88,13 @@ struct io_pgtable_cfg {
 	 *
 	 * IO_PGTABLE_QUIRK_ARM_HD: Enables dirty tracking in stage 1 pagetable.
 	 * IO_PGTABLE_QUIRK_ARM_S2FWB: Use the FWB format for the MemAttrs bits
+	 *
+	 * IO_PGTABLE_QUIRK_NO_WARN_ON: Do not WARN_ON() on conflicting
+	 *	mappings, but silently return -EEXISTS.  Normally an attempt
+	 *	to map over an existing mapping would indicate some sort of
+	 *	kernel bug, which would justify the WARN_ON().  But for GPU
+	 *	drivers, this could be under control of userspace.  Which
+	 *	deserves an error return, but not to spam dmesg.
 	 */
 	#define IO_PGTABLE_QUIRK_ARM_NS			BIT(0)
 	#define IO_PGTABLE_QUIRK_NO_PERMS		BIT(1)
@@ -97,6 +104,7 @@ struct io_pgtable_cfg {
 	#define IO_PGTABLE_QUIRK_ARM_OUTER_WBWA		BIT(6)
 	#define IO_PGTABLE_QUIRK_ARM_HD			BIT(7)
 	#define IO_PGTABLE_QUIRK_ARM_S2FWB		BIT(8)
+	#define IO_PGTABLE_QUIRK_NO_WARN_ON		BIT(9)
 	unsigned long quirks;
 	unsigned long pgsize_bitmap;
 	unsigned int ias;

From patchwork Wed May 14 16:59:06 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 889941
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Dmitry Baryshkov, Sean Paul, Konrad Dybcio,
 Abhinav Kumar, Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 07/40] drm/msm: Improve msm_context comments
Date: Wed, 14 May 2025 09:59:06 -0700
Message-ID: <20250514170118.40555-8-robdclark@gmail.com>
In-Reply-To: <20250514170118.40555-1-robdclark@gmail.com>
References: <20250514170118.40555-1-robdclark@gmail.com>

From: Rob Clark

Just some tidying up.
Signed-off-by: Rob Clark Reviewed-by: Dmitry Baryshkov Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gpu.h | 44 +++++++++++++++++++++++------------ 1 file changed, 29 insertions(+), 15 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 957d6fb3469d..c699ce0c557b 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -348,25 +348,39 @@ struct msm_gpu_perfcntr { /** * struct msm_context - per-drm_file context - * - * @queuelock: synchronizes access to submitqueues list - * @submitqueues: list of &msm_gpu_submitqueue created by userspace - * @queueid: counter incremented each time a submitqueue is created, - * used to assign &msm_gpu_submitqueue.id - * @aspace: the per-process GPU address-space - * @ref: reference count - * @seqno: unique per process seqno */ struct msm_context { + /** @queuelock: synchronizes access to submitqueues list */ rwlock_t queuelock; + + /** @submitqueues: list of &msm_gpu_submitqueue created by userspace */ struct list_head submitqueues; + + /** + * @queueid: + * + * Counter incremented each time a submitqueue is created, used to + * assign &msm_gpu_submitqueue.id + */ int queueid; + + /** @aspace: the per-process GPU address-space */ struct msm_gem_address_space *aspace; + + /** @kref: the reference count */ struct kref ref; + + /** + * @seqno: + * + * A unique per-process sequence number. Used to detect context + * switches, without relying on keeping a, potentially dangling, + * pointer to the previous context. + */ int seqno; /** - * sysprof: + * @sysprof: * * The value of MSM_PARAM_SYSPROF set by userspace. 
This is * intended to be used by system profiling tools like Mesa's @@ -384,21 +398,21 @@ struct msm_context { int sysprof; /** - * comm: Overridden task comm, see MSM_PARAM_COMM + * @comm: Overridden task comm, see MSM_PARAM_COMM * * Accessed under msm_gpu::lock */ char *comm; /** - * cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE + * @cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE * * Accessed under msm_gpu::lock */ char *cmdline; /** - * elapsed: + * @elapsed: * * The total (cumulative) elapsed time GPU was busy with rendering * from this context in ns. @@ -406,7 +420,7 @@ struct msm_context { uint64_t elapsed_ns; /** - * cycles: + * @cycles: * * The total (cumulative) GPU cycles elapsed attributed to this * context. @@ -414,7 +428,7 @@ struct msm_context { uint64_t cycles; /** - * entities: + * @entities: * * Table of per-priority-level sched entities used by submitqueues * associated with this &drm_file. Because some userspace apps @@ -427,7 +441,7 @@ struct msm_context { struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS]; /** - * ctx_mem: + * @ctx_mem: * * Total amount of memory of GEM buffers with handles attached for * this context. 
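For reference, the patch above converts from the older kernel-doc style, where every member is documented in a single block above the struct, to inline member comments placed directly above each field. A minimal sketch of the inline convention (the struct and member names here are illustrative, not taken from the patch):

```c
/**
 * struct example_ctx - a hypothetical per-file context
 */
struct example_ctx {
	/** @queue_count: number of queues created so far (short form) */
	int queue_count;

	/**
	 * @seqno:
	 *
	 * A unique per-process sequence number; longer descriptions use
	 * this multi-line form, matching the @seqno comment in the patch.
	 */
	int seqno;
};
```

The inline form keeps each member's documentation next to its declaration, so adding or removing a field cannot leave a stale entry in a far-away comment block, which is exactly the drift the patch is cleaning up.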
From patchwork Wed May 14 16:59:08 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 889939
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 09/40] drm/msm: Remove vram carveout support
Date: Wed, 14 May 2025 09:59:08 -0700
Message-ID: <20250514170118.40555-10-robdclark@gmail.com>
In-Reply-To: <20250514170118.40555-1-robdclark@gmail.com>
References: <20250514170118.40555-1-robdclark@gmail.com>

It is standing in the way of drm_gpuvm / VM_BIND support. Not to mention frequently broken and rarely tested. And I think only needed for a 10yr old not quite upstream SoC (msm8974). Maybe we can add support back in later, but I'm doubtful.
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 8 -- drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 15 --- drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 15 --- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/adreno_device.c | 4 - drivers/gpu/drm/msm/adreno/adreno_gpu.c | 4 +- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 1 - drivers/gpu/drm/msm/msm_drv.c | 117 +----------------- drivers/gpu/drm/msm/msm_drv.h | 11 -- drivers/gpu/drm/msm/msm_gem.c | 131 ++------------------- drivers/gpu/drm/msm/msm_gem.h | 5 - drivers/gpu/drm/msm/msm_gem_submit.c | 5 - drivers/gpu/drm/msm/msm_gpu.c | 6 +- 14 files changed, 19 insertions(+), 309 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index 5eb063ed0b46..095bae92e3e8 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -551,14 +551,6 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev) else adreno_gpu->registers = a220_registers; - if (!gpu->vm) { - dev_err(dev->dev, "No memory protection without MMU\n"); - if (!allow_vram_carveout) { - ret = -ENXIO; - goto fail; - } - } - return gpu; fail: diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c index 434e6ededf83..a956cd79195e 100644 --- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c @@ -581,21 +581,6 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev) goto fail; } - if (!gpu->vm) { - /* TODO we think it is possible to configure the GPU to - * restrict access to VRAM carveout. But the required - * registers are unknown. For now just bail out and - * limp along with just modesetting. If it turns out - * to not be possible to restrict access, then we must - * implement a cmdstream validator. 
- */ - DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n"); - if (!allow_vram_carveout) { - ret = -ENXIO; - goto fail; - } - } - icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem"); if (IS_ERR(icc_path)) { ret = PTR_ERR(icc_path); diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c index 2c75debcfd84..83f6329accba 100644 --- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c @@ -695,21 +695,6 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull; - if (!gpu->vm) { - /* TODO we think it is possible to configure the GPU to - * restrict access to VRAM carveout. But the required - * registers are unknown. For now just bail out and - * limp along with just modesetting. If it turns out - * to not be possible to restrict access, then we must - * implement a cmdstream validator. - */ - DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n"); - if (!allow_vram_carveout) { - ret = -ENXIO; - goto fail; - } - } - icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem"); if (IS_ERR(icc_path)) { ret = PTR_ERR(icc_path); diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index cce95ad3cfb8..913e4fdfca21 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -1786,8 +1786,7 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev) return ERR_PTR(ret); } - if (gpu->vm) - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler); + msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler); /* Set up the preemption specific bits and pieces for each ringbuffer */ a5xx_preempt_init(gpu); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 3c92ea35d39a..c119493c13aa 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -2547,8 +2547,7 @@ struct msm_gpu 
*a6xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0x1fffffffff000ull; - if (gpu->vm) - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler); + msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler); a6xx_calc_ubwc_config(adreno_gpu); /* Set up the preemption specific bits and pieces for each ringbuffer */ diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c index f4552b8c6767..6b0390c38bff 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_device.c +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c @@ -16,10 +16,6 @@ bool snapshot_debugbus = false; MODULE_PARM_DESC(snapshot_debugbus, "Include debugbus sections in GPU devcoredump (if not fused off)"); module_param_named(snapshot_debugbus, snapshot_debugbus, bool, 0600); -bool allow_vram_carveout = false; -MODULE_PARM_DESC(allow_vram_carveout, "Allow using VRAM Carveout, in place of IOMMU"); -module_param_named(allow_vram_carveout, allow_vram_carveout, bool, 0600); - int enable_preemption = -1; MODULE_PARM_DESC(enable_preemption, "Enable preemption (A7xx only) (1=on , 0=disable, -1=auto (default))"); module_param(enable_preemption, int, 0600); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index b13aaebd8da7..a2e39283360f 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -209,7 +209,9 @@ adreno_iommu_create_vm(struct msm_gpu *gpu, u64 start, size; mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks); - if (IS_ERR_OR_NULL(mmu)) + if (!mmu) + return ERR_PTR(-ENODEV); + else if (IS_ERR_OR_NULL(mmu)) return ERR_CAST(mmu); geometry = msm_iommu_get_geometry(mmu); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index 258c5c6dde2e..bbd7e664286e 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -18,7 +18,6 @@ #include "adreno_pm4.xml.h" extern bool 
snapshot_debugbus; -extern bool allow_vram_carveout; enum { ADRENO_FW_PM4 = 0, diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 903abf3532e0..978f1d355b42 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -46,12 +46,6 @@ #define MSM_VERSION_MINOR 12 #define MSM_VERSION_PATCHLEVEL 0 -static void msm_deinit_vram(struct drm_device *ddev); - -static char *vram = "16m"; -MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without IOMMU/GPUMMU)"); -module_param(vram, charp, 0); - bool dumpstate; MODULE_PARM_DESC(dumpstate, "Dump KMS state on errors"); module_param(dumpstate, bool, 0600); @@ -97,8 +91,6 @@ static int msm_drm_uninit(struct device *dev) if (priv->kms) msm_drm_kms_uninit(dev); - msm_deinit_vram(ddev); - component_unbind_all(dev, ddev); ddev->dev_private = NULL; @@ -109,107 +101,6 @@ static int msm_drm_uninit(struct device *dev) return 0; } -bool msm_use_mmu(struct drm_device *dev) -{ - struct msm_drm_private *priv = dev->dev_private; - - /* - * a2xx comes with its own MMU - * On other platforms IOMMU can be declared specified either for the - * MDP/DPU device or for its parent, MDSS device. - */ - return priv->is_a2xx || - device_iommu_mapped(dev->dev) || - device_iommu_mapped(dev->dev->parent); -} - -static int msm_init_vram(struct drm_device *dev) -{ - struct msm_drm_private *priv = dev->dev_private; - struct device_node *node; - unsigned long size = 0; - int ret = 0; - - /* In the device-tree world, we could have a 'memory-region' - * phandle, which gives us a link to our "vram". Allocating - * is all nicely abstracted behind the dma api, but we need - * to know the entire size to allocate it all in one go. There - * are two cases: - * 1) device with no IOMMU, in which case we need exclusive - * access to a VRAM carveout big enough for all gpu - * buffers - * 2) device with IOMMU, but where the bootloader puts up - * a splash screen. 
In this case, the VRAM carveout - * need only be large enough for fbdev fb. But we need - * exclusive access to the buffer to avoid the kernel - * using those pages for other purposes (which appears - * as corruption on screen before we have a chance to - * load and do initial modeset) - */ - - node = of_parse_phandle(dev->dev->of_node, "memory-region", 0); - if (node) { - struct resource r; - ret = of_address_to_resource(node, 0, &r); - of_node_put(node); - if (ret) - return ret; - size = r.end - r.start + 1; - DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start); - - /* if we have no IOMMU, then we need to use carveout allocator. - * Grab the entire DMA chunk carved out in early startup in - * mach-msm: - */ - } else if (!msm_use_mmu(dev)) { - DRM_INFO("using %s VRAM carveout\n", vram); - size = memparse(vram, NULL); - } - - if (size) { - unsigned long attrs = 0; - void *p; - - priv->vram.size = size; - - drm_mm_init(&priv->vram.mm, 0, (size >> PAGE_SHIFT) - 1); - spin_lock_init(&priv->vram.lock); - - attrs |= DMA_ATTR_NO_KERNEL_MAPPING; - attrs |= DMA_ATTR_WRITE_COMBINE; - - /* note that for no-kernel-mapping, the vaddr returned - * is bogus, but non-null if allocation succeeded: - */ - p = dma_alloc_attrs(dev->dev, size, - &priv->vram.paddr, GFP_KERNEL, attrs); - if (!p) { - DRM_DEV_ERROR(dev->dev, "failed to allocate VRAM\n"); - priv->vram.paddr = 0; - return -ENOMEM; - } - - DRM_DEV_INFO(dev->dev, "VRAM: %08x->%08x\n", - (uint32_t)priv->vram.paddr, - (uint32_t)(priv->vram.paddr + size)); - } - - return ret; -} - -static void msm_deinit_vram(struct drm_device *ddev) -{ - struct msm_drm_private *priv = ddev->dev_private; - unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING; - - if (!priv->vram.paddr) - return; - - drm_mm_takedown(&priv->vram.mm); - dma_free_attrs(ddev->dev, priv->vram.size, NULL, priv->vram.paddr, - attrs); -} - static int msm_drm_init(struct device *dev, const struct drm_driver *drv) { struct msm_drm_private *priv = dev_get_drvdata(dev); 
@@ -256,16 +147,12 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) goto err_destroy_wq; } - ret = msm_init_vram(ddev); - if (ret) - goto err_destroy_wq; - dma_set_max_seg_size(dev, UINT_MAX); /* Bind all our sub-components: */ ret = component_bind_all(dev, ddev); if (ret) - goto err_deinit_vram; + goto err_destroy_wq; ret = msm_gem_shrinker_init(ddev); if (ret) @@ -302,8 +189,6 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) return ret; -err_deinit_vram: - msm_deinit_vram(ddev); err_destroy_wq: destroy_workqueue(priv->wq); err_put_dev: diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 0e675c9a7f83..ad509403f072 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -183,17 +183,6 @@ struct msm_drm_private { struct msm_drm_thread event_thread[MAX_CRTCS]; - /* VRAM carveout, used when no IOMMU: */ - struct { - unsigned long size; - dma_addr_t paddr; - /* NOTE: mm managed at the page level, size is in # of pages - * and position mm_node->start is in # of pages: - */ - struct drm_mm mm; - spinlock_t lock; /* Protects drm_mm node allocation/removal */ - } vram; - struct notifier_block vmap_notifier; struct shrinker *shrinker; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 07a30d29248c..621fb4e17a2e 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -17,24 +17,8 @@ #include #include "msm_drv.h" -#include "msm_fence.h" #include "msm_gem.h" #include "msm_gpu.h" -#include "msm_mmu.h" - -static dma_addr_t physaddr(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - return (((dma_addr_t)msm_obj->vram_node->start) << PAGE_SHIFT) + - priv->vram.paddr; -} - -static bool use_pages(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - return !msm_obj->vram_node; -} static int pgprot = 0; 
module_param(pgprot, int, 0600); @@ -139,36 +123,6 @@ static void update_lru(struct drm_gem_object *obj) mutex_unlock(&priv->lru.lock); } -/* allocate pages from VRAM carveout, used when no IOMMU: */ -static struct page **get_pages_vram(struct drm_gem_object *obj, int npages) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - dma_addr_t paddr; - struct page **p; - int ret, i; - - p = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); - if (!p) - return ERR_PTR(-ENOMEM); - - spin_lock(&priv->vram.lock); - ret = drm_mm_insert_node(&priv->vram.mm, msm_obj->vram_node, npages); - spin_unlock(&priv->vram.lock); - if (ret) { - kvfree(p); - return ERR_PTR(ret); - } - - paddr = physaddr(obj); - for (i = 0; i < npages; i++) { - p[i] = pfn_to_page(__phys_to_pfn(paddr)); - paddr += PAGE_SIZE; - } - - return p; -} - static struct page **get_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -180,10 +134,7 @@ static struct page **get_pages(struct drm_gem_object *obj) struct page **p; int npages = obj->size >> PAGE_SHIFT; - if (use_pages(obj)) - p = drm_gem_get_pages(obj); - else - p = get_pages_vram(obj, npages); + p = drm_gem_get_pages(obj); if (IS_ERR(p)) { DRM_DEV_ERROR(dev->dev, "could not get pages: %ld\n", @@ -216,18 +167,6 @@ static struct page **get_pages(struct drm_gem_object *obj) return msm_obj->pages; } -static void put_pages_vram(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - - spin_lock(&priv->vram.lock); - drm_mm_remove_node(msm_obj->vram_node); - spin_unlock(&priv->vram.lock); - - kvfree(msm_obj->pages); -} - static void put_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -248,10 +187,7 @@ static void put_pages(struct drm_gem_object *obj) update_device_mem(obj->dev->dev_private, -obj->size); - if (use_pages(obj)) - 
drm_gem_put_pages(obj, msm_obj->pages, true, false); - else - put_pages_vram(obj); + drm_gem_put_pages(obj, msm_obj->pages, true, false); msm_obj->pages = NULL; update_lru(obj); @@ -1215,19 +1151,10 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32 struct msm_drm_private *priv = dev->dev_private; struct msm_gem_object *msm_obj; struct drm_gem_object *obj = NULL; - bool use_vram = false; int ret; size = PAGE_ALIGN(size); - if (!msm_use_mmu(dev)) - use_vram = true; - else if ((flags & (MSM_BO_STOLEN | MSM_BO_SCANOUT)) && priv->vram.size) - use_vram = true; - - if (GEM_WARN_ON(use_vram && !priv->vram.size)) - return ERR_PTR(-EINVAL); - /* Disallow zero sized objects as they make the underlying * infrastructure grumpy */ @@ -1240,44 +1167,16 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32 msm_obj = to_msm_bo(obj); - if (use_vram) { - struct msm_gem_vma *vma; - struct page **pages; - - drm_gem_private_object_init(dev, obj, size); - - msm_gem_lock(obj); - - vma = add_vma(obj, NULL); - msm_gem_unlock(obj); - if (IS_ERR(vma)) { - ret = PTR_ERR(vma); - goto fail; - } - - to_msm_bo(obj)->vram_node = &vma->node; - - msm_gem_lock(obj); - pages = get_pages(obj); - msm_gem_unlock(obj); - if (IS_ERR(pages)) { - ret = PTR_ERR(pages); - goto fail; - } - - vma->iova = physaddr(obj); - } else { - ret = drm_gem_object_init(dev, obj, size); - if (ret) - goto fail; - /* - * Our buffers are kept pinned, so allocating them from the - * MOVABLE zone is a really bad idea, and conflicts with CMA. - * See comments above new_inode() why this is required _and_ - * expected if you're going to pin these pages. - */ - mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); - } + ret = drm_gem_object_init(dev, obj, size); + if (ret) + goto fail; + /* + * Our buffers are kept pinned, so allocating them from the + * MOVABLE zone is a really bad idea, and conflicts with CMA. 
+ * See comments above new_inode() why this is required _and_ + * expected if you're going to pin these pages. + */ + mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); drm_gem_lru_move_tail(&priv->lru.unbacked, obj); @@ -1305,12 +1204,6 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, uint32_t size; int ret, npages; - /* if we don't have IOMMU, don't bother pretending we can import: */ - if (!msm_use_mmu(dev)) { - DRM_DEV_ERROR(dev->dev, "cannot import without IOMMU\n"); - return ERR_PTR(-EINVAL); - } - size = PAGE_ALIGN(dmabuf->size); ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index d2f39a371373..c16b11182831 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -102,11 +102,6 @@ struct msm_gem_object { struct list_head vmas; /* list of msm_gem_vma */ - /* For physically contiguous buffers. Used when we don't have - * an IOMMU. Also used for stolen/splashscreen buffer. - */ - struct drm_mm_node *vram_node; - char name[32]; /* Identifier to print for the debugfs files */ /* userspace metadata backchannel */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index a59816b6b6de..c184b1a1f522 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -669,11 +669,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (args->pad) return -EINVAL; - if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) { - DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n"); - return -EPERM; - } - /* for now, we just have 3d pipe.. 
eventually this would need to * be more clever to dispatch to appropriate gpu module: */ diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 0d466a2e9b32..b30800f80120 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -944,12 +944,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, msm_devfreq_init(gpu); - gpu->vm = gpu->funcs->create_vm(gpu, pdev); - - if (gpu->vm == NULL) - DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", name); - else if (IS_ERR(gpu->vm)) { + if (IS_ERR(gpu->vm)) { ret = PTR_ERR(gpu->vm); goto fail; }

From patchwork Wed May 14 16:59:10 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 889940
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 11/40] drm/msm: Collapse vma close and delete
Date: Wed, 14 May 2025 09:59:10 -0700
Message-ID: <20250514170118.40555-12-robdclark@gmail.com>
In-Reply-To: <20250514170118.40555-1-robdclark@gmail.com>
References: <20250514170118.40555-1-robdclark@gmail.com>

This fits better with drm_gpuvm/drm_gpuva.

Signed-off-by: Rob Clark
---
drivers/gpu/drm/msm/msm_gem.c | 16 +++------------- drivers/gpu/drm/msm/msm_gem_vma.c | 2 ++ 2 files changed, 5 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 29247911f048..4c10eca404e0 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -353,15 +353,6 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, return NULL; } -static void del_vma(struct msm_gem_vma *vma) -{ - if (!vma) - return; - - list_del(&vma->list); - kfree(vma); -} - /* * If close is true, this also closes the VMA (releasing the allocated * iova range) in addition to removing the iommu mapping. In the eviction @@ -372,11 +363,11 @@ static void put_iova_spaces(struct drm_gem_object *obj, bool close) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma; + struct msm_gem_vma *vma, *tmp; msm_gem_assert_locked(obj); - list_for_each_entry(vma, &msm_obj->vmas, list) { + list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { if (vma->vm) { msm_gem_vma_purge(vma); if (close) @@ -395,7 +386,7 @@ put_iova_vmas(struct drm_gem_object *obj) msm_gem_assert_locked(obj); list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { - del_vma(vma); + msm_gem_vma_close(vma); } } @@ -564,7 +555,6 @@ static int clear_iova(struct drm_gem_object *obj, msm_gem_vma_purge(vma); msm_gem_vma_close(vma); - del_vma(vma); return 0; } diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 6d18364f321c..ca29e81d79d2 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -102,8 +102,10 @@ void msm_gem_vma_close(struct msm_gem_vma *vma) spin_unlock(&vm->lock); vma->iova = 0; + list_del(&vma->list); msm_gem_vm_put(vm); + kfree(vma); } /* Create a new vma and allocate an iova for it */
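The switch to list_for_each_entry_safe() in put_iova_spaces() is what makes it legal for msm_gem_vma_close() to list_del() and kfree() the vma while the list is being walked: the safe variant reads the next pointer before the loop body can free the current entry. A standalone userspace sketch of the same pattern (the list helpers and the vma_close/close_all names here are simplified stand-ins, not the actual kernel or msm code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Minimal doubly-linked circular list, modeled on the kernel's list_head. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct vma { int id; struct list_head list; };

/*
 * Close a vma: unlink it from its list and free it, mirroring what
 * msm_gem_vma_close() does after this patch (list_del + kfree).
 */
static void vma_close(struct vma *v)
{
	list_del(&v->list);
	free(v);
}

/*
 * Walk the list with a lookahead pointer saved before each deletion, so
 * freeing the current entry cannot invalidate the iteration -- the whole
 * point of list_for_each_entry_safe() over list_for_each_entry().
 */
static int close_all(struct list_head *head)
{
	int closed = 0;
	struct list_head *pos = head->next;

	while (pos != head) {
		struct list_head *tmp = pos->next;  /* saved before vma_close frees pos */
		vma_close(container_of(pos, struct vma, list));
		closed++;
		pos = tmp;
	}
	return closed;
}
```

With the plain (non-safe) iterator, the loop would dereference pos->next after free(), which is exactly the use-after-free that collapsing close and delete into one function would otherwise introduce in put_iova_spaces().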