From patchwork Thu Jun 5 18:28:46 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894512
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Rob Clark, Maarten Lankhorst, Maxime Ripard,
    Thomas Zimmermann, David Airlie, Simona Vetter, Rob Clark,
    Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 01/40] drm/gem: Add ww_acquire_ctx support to drm_gem_lru_scan()
Date: Thu, 5 Jun 2025 11:28:46 -0700
Message-ID: <20250605183111.163594-2-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

If the callback is going to have to attempt to grab more locks, it is
useful to have a ww_acquire_ctx to avoid locking order problems.

Why not use the drm_exec helper instead?  Mainly because (a) where
ww_acquire_init() is called is awkward, and (b) we don't really need
to retry after backoff, we can just move on to the next object.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gem.c              | 14 +++++++++++---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 24 +++++++++++++-----------
 include/drm/drm_gem.h                  | 10 ++++++----
 3 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index c6240bab3fa5..c8f983571c70 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1460,12 +1460,14 @@ EXPORT_SYMBOL(drm_gem_lru_move_tail);
  * @nr_to_scan: The number of pages to try to reclaim
  * @remaining: The number of pages left to reclaim, should be initialized by caller
  * @shrink: Callback to try to shrink/reclaim the object.
+ * @ticket: Optional ww_acquire_ctx context to use for locking
  */
 unsigned long
 drm_gem_lru_scan(struct drm_gem_lru *lru,
 		 unsigned int nr_to_scan,
 		 unsigned long *remaining,
-		 bool (*shrink)(struct drm_gem_object *obj))
+		 bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket),
+		 struct ww_acquire_ctx *ticket)
 {
 	struct drm_gem_lru still_in_lru;
 	struct drm_gem_object *obj;
@@ -1498,17 +1500,20 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
 		 */
 		mutex_unlock(lru->lock);
 
+		if (ticket)
+			ww_acquire_init(ticket, &reservation_ww_class);
+
 		/*
 		 * Note that this still needs to be trylock, since we can
 		 * hit shrinker in response to trying to get backing pages
 		 * for this obj (ie. while it's lock is already held)
 		 */
-		if (!dma_resv_trylock(obj->resv)) {
+		if (!ww_mutex_trylock(&obj->resv->lock, ticket)) {
 			*remaining += obj->size >> PAGE_SHIFT;
 			goto tail;
 		}
 
-		if (shrink(obj)) {
+		if (shrink(obj, ticket)) {
 			freed += obj->size >> PAGE_SHIFT;
 
 			/*
@@ -1522,6 +1527,9 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
 
 		dma_resv_unlock(obj->resv);
 
+		if (ticket)
+			ww_acquire_fini(ticket);
+
 tail:
 		drm_gem_object_put(obj);
 		mutex_lock(lru->lock);
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 07ca4ddfe4e3..de185fc34084 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -44,7 +44,7 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 }
 
 static bool
-purge(struct drm_gem_object *obj)
+purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!is_purgeable(to_msm_bo(obj)))
 		return false;
@@ -58,7 +58,7 @@ purge(struct drm_gem_object *obj)
 }
 
 static bool
-evict(struct drm_gem_object *obj)
+evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (is_unevictable(to_msm_bo(obj)))
 		return false;
@@ -79,21 +79,21 @@ wait_for_idle(struct drm_gem_object *obj)
 }
 
 static bool
-active_purge(struct drm_gem_object *obj)
+active_purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!wait_for_idle(obj))
 		return false;
 
-	return purge(obj);
+	return purge(obj, ticket);
 }
 
 static bool
-active_evict(struct drm_gem_object *obj)
+active_evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!wait_for_idle(obj))
 		return false;
 
-	return evict(obj);
+	return evict(obj, ticket);
 }
 
 static unsigned long
@@ -102,7 +102,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 	struct msm_drm_private *priv = shrinker->private_data;
 	struct {
 		struct drm_gem_lru *lru;
-		bool (*shrink)(struct drm_gem_object *obj);
+		bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket);
 		bool cond;
 		unsigned long freed;
 		unsigned long remaining;
@@ -122,8 +122,9 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 			continue;
 		stages[i].freed =
 			drm_gem_lru_scan(stages[i].lru, nr,
-					 &stages[i].remaining,
-					 stages[i].shrink);
+					 &stages[i].remaining,
+					 stages[i].shrink,
+					 NULL);
 		nr -= stages[i].freed;
 		freed += stages[i].freed;
 		remaining += stages[i].remaining;
@@ -164,7 +165,7 @@ msm_gem_shrinker_shrink(struct drm_device *dev, unsigned long nr_to_scan)
 static const int vmap_shrink_limit = 15;
 
 static bool
-vmap_shrink(struct drm_gem_object *obj)
+vmap_shrink(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!is_vunmapable(to_msm_bo(obj)))
 		return false;
@@ -192,7 +193,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 		unmapped += drm_gem_lru_scan(lrus[idx],
 					     vmap_shrink_limit - unmapped,
 					     &remaining,
-					     vmap_shrink);
+					     vmap_shrink,
+					     NULL);
 	}
 
 	*(unsigned long *)ptr += unmapped;
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index bcd54020d6ba..b611a9482abf 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -556,10 +556,12 @@ void drm_gem_lru_init(struct drm_gem_lru *lru, struct mutex *lock);
 void drm_gem_lru_remove(struct drm_gem_object *obj);
 void drm_gem_lru_move_tail_locked(struct drm_gem_lru *lru, struct drm_gem_object *obj);
 void drm_gem_lru_move_tail(struct drm_gem_lru *lru, struct drm_gem_object *obj);
-unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
-			       unsigned int nr_to_scan,
-			       unsigned long *remaining,
-			       bool (*shrink)(struct drm_gem_object *obj));
+unsigned long
+drm_gem_lru_scan(struct drm_gem_lru *lru,
+		 unsigned int nr_to_scan,
+		 unsigned long *remaining,
+		 bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket),
+		 struct ww_acquire_ctx *ticket);
 
 int drm_gem_evict(struct drm_gem_object *obj);
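
A note on usage (an illustrative sketch, not part of the patch): the
ticket makes the reservation lock that drm_gem_lru_scan() takes on the
object part of a wider ww acquire context, so the shrink callback can
take further dma_resv locks without an untracked lock-order inversion.
In the sketch below, my_get_linked_obj() is a hypothetical driver
helper; only the locking pattern is the point.

/*
 * Sketch only.  drm_gem_lru_scan() has already trylocked obj->resv
 * under @ticket, so an additional dma_resv_lock() here joins the same
 * acquire context and returns -EDEADLK on lock-order inversion
 * instead of deadlocking.
 */
static bool my_shrink(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
{
	struct drm_gem_object *linked = my_get_linked_obj(obj);	/* hypothetical */

	if (linked && dma_resv_lock(linked->resv, ticket)) {
		/* No backoff/retry in this scheme: just skip this object. */
		return false;
	}

	/* ... release the object's backing pages here ... */

	if (linked)
		dma_resv_unlock(linked->resv);

	return true;
}

Callers that do not need nested locking, like the msm shrinker stages
above, keep passing NULL for the ticket.
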
From patchwork Thu Jun 5 18:28:47 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894241
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Dmitry Baryshkov, Rob Clark, Rob Clark,
    Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov,
    Marijn Suijten, David Airlie, Simona Vetter,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 02/40] drm/msm: Rename msm_file_private -> msm_context
Date: Thu, 5 Jun 2025 11:28:47 -0700
Message-ID: <20250605183111.163594-3-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

This is a more descriptive name.

Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  2 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c |  6 ++--
 drivers/gpu/drm/msm/adreno/adreno_gpu.h |  4 +--
 drivers/gpu/drm/msm/msm_drv.c           | 14 ++++-----
 drivers/gpu/drm/msm/msm_gem.c           |  2 +-
 drivers/gpu/drm/msm/msm_gem_submit.c    |  2 +-
 drivers/gpu/drm/msm/msm_gpu.c           |  4 +--
 drivers/gpu/drm/msm/msm_gpu.h           | 39 ++++++++++++-------------
 drivers/gpu/drm/msm/msm_submitqueue.c   | 27 +++++++++--------
 9 files changed, 49 insertions(+), 51 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index fd64af6d0440..620a26638535 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -111,7 +111,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 		struct msm_ringbuffer *ring, struct msm_gem_submit *submit)
 {
 	bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
-	struct msm_file_private *ctx = submit->queue->ctx;
+	struct msm_context *ctx = submit->queue->ctx;
 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
 	phys_addr_t ttbr;
 	u32 asid;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index d04657b77857..93fe26009511 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -356,7 +356,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	return 0;
 }
 
-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -444,7 +444,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 	}
 }
 
-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t value, uint32_t len)
 {
 	struct drm_device *drm = gpu->dev;
@@ -490,7 +490,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 	case MSM_PARAM_SYSPROF:
 		if (!capable(CAP_SYS_ADMIN))
 			return UERR(EPERM, drm, "invalid permissions");
-		return msm_file_private_set_sysprof(ctx, gpu, value);
+		return msm_context_set_sysprof(ctx, gpu, value);
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 2366a57b280f..fed9516da365 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -603,9 +603,9 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu)
 /* Put vm_start above 32b to catch issues with not setting xyz_BASE_HI */
 #define ADRENO_VM_START 0x100000000ULL
 u64 adreno_private_address_space_size(struct msm_gpu *gpu);
-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len);
-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t value, uint32_t len);
 const struct firmware *adreno_request_fw(struct adreno_gpu *adreno_gpu,
 		const char *fwname);
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index c3588dc9e537..29ca24548c67 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -333,7 +333,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
 {
 	static atomic_t ident = ATOMIC_INIT(0);
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx;
+	struct msm_context *ctx;
 
 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
 	if (!ctx)
@@ -363,23 +363,23 @@ static int msm_open(struct drm_device *dev, struct drm_file *file)
 	return context_init(dev, file);
 }
 
-static void context_close(struct msm_file_private *ctx)
+static void context_close(struct msm_context *ctx)
 {
 	msm_submitqueue_close(ctx);
-	msm_file_private_put(ctx);
+	msm_context_put(ctx);
 }
 
 static void msm_postclose(struct drm_device *dev, struct drm_file *file)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 
 	/*
 	 * It is not possible to set sysprof param to non-zero if gpu
 	 * is not initialized:
 	 */
 	if (priv->gpu)
-		msm_file_private_set_sysprof(ctx, priv->gpu, 0);
+		msm_context_set_sysprof(ctx, priv->gpu, 0);
 
 	context_close(ctx);
 }
@@ -511,7 +511,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 		uint64_t *iova)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 
 	if (!priv->gpu)
 		return -EINVAL;
@@ -531,7 +531,7 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 		uint64_t iova)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 
 	if (!priv->gpu)
 		return -EINVAL;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index d2f38e1df510..fdeb6cf7eeb5 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -48,7 +48,7 @@ static void update_device_mem(struct msm_drm_private *priv, ssize_t size)
 
 static void update_ctx_mem(struct drm_file *file, ssize_t size)
 {
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 	uint64_t ctx_mem = atomic64_add_return(size, &ctx->ctx_mem);
 
 	rcu_read_lock(); /* Locks file->pid! */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index d4f71bb54e84..3aabf7f1da6d 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -651,7 +651,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 {
 	struct msm_drm_private *priv = dev->dev_private;
 	struct drm_msm_gem_submit *args = data;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 	struct msm_gem_submit *submit = NULL;
 	struct msm_gpu *gpu = priv->gpu;
 	struct msm_gpu_submitqueue *queue;
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index c380d9d9f5af..d786fcfad62f 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -148,7 +148,7 @@ int msm_gpu_pm_suspend(struct msm_gpu *gpu)
 	return 0;
 }
 
-void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx,
+void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx,
 			 struct drm_printer *p)
 {
 	drm_printf(p, "drm-engine-gpu:\t%llu ns\n", ctx->elapsed_ns);
@@ -339,7 +339,7 @@ static void retire_submits(struct msm_gpu *gpu);
 
 static void get_comm_cmdline(struct msm_gem_submit *submit, char **comm, char **cmd)
 {
-	struct msm_file_private *ctx = submit->queue->ctx;
+	struct msm_context *ctx = submit->queue->ctx;
 	struct task_struct *task;
 
 	WARN_ON(!mutex_is_locked(&submit->gpu->lock));
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index e25009150579..957d6fb3469d 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -22,7 +22,7 @@ struct msm_gem_submit;
 struct msm_gpu_perfcntr;
 struct msm_gpu_state;
-struct msm_file_private;
+struct msm_context;
 
 struct msm_gpu_config {
 	const char *ioname;
@@ -44,9 +44,9 @@ struct msm_gpu_config {
  *    + z180_gpu
  */
 struct msm_gpu_funcs {
-	int (*get_param)(struct msm_gpu *gpu, struct msm_file_private *ctx,
+	int (*get_param)(struct msm_gpu *gpu, struct msm_context *ctx,
 			 uint32_t param, uint64_t *value, uint32_t *len);
-	int (*set_param)(struct msm_gpu *gpu, struct msm_file_private *ctx,
+	int (*set_param)(struct msm_gpu *gpu, struct msm_context *ctx,
 			 uint32_t param, uint64_t value, uint32_t len);
 	int (*hw_init)(struct msm_gpu *gpu);
@@ -347,7 +347,7 @@ struct msm_gpu_perfcntr {
 #define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH)
 
 /**
- * struct msm_file_private - per-drm_file context
+ * struct msm_context - per-drm_file context
  *
  * @queuelock:    synchronizes access to submitqueues list
  * @submitqueues: list of &msm_gpu_submitqueue created by userspace
@@ -357,7 +357,7 @@ struct msm_gpu_perfcntr {
  * @ref:       reference count
  * @seqno:     unique per process seqno
  */
-struct msm_file_private {
+struct msm_context {
 	rwlock_t queuelock;
 	struct list_head submitqueues;
 	int queueid;
@@ -512,7 +512,7 @@ struct msm_gpu_submitqueue {
 	u32 ring_nr;
 	int faults;
 	uint32_t last_fence;
-	struct msm_file_private *ctx;
+	struct msm_context *ctx;
 	struct list_head node;
 	struct idr fence_idr;
 	struct spinlock idr_lock;
@@ -608,33 +608,32 @@ static inline void gpu_write64(struct msm_gpu *gpu, u32 reg, u64 val)
 int msm_gpu_pm_suspend(struct msm_gpu *gpu);
 int msm_gpu_pm_resume(struct msm_gpu *gpu);
 
-void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx,
+void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx,
 			 struct drm_printer *p);
 
-int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx);
-struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
+int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx);
+struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,
 		u32 id);
 int msm_submitqueue_create(struct drm_device *drm,
-		struct msm_file_private *ctx,
+		struct msm_context *ctx,
 		u32 prio, u32 flags, u32 *id);
-int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx,
 		struct drm_msm_submitqueue_query *args);
-int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id);
-void msm_submitqueue_close(struct msm_file_private *ctx);
+int msm_submitqueue_remove(struct msm_context *ctx, u32 id);
+void msm_submitqueue_close(struct msm_context *ctx);
 
 void msm_submitqueue_destroy(struct kref *kref);
 
-int msm_file_private_set_sysprof(struct msm_file_private *ctx,
-				 struct msm_gpu *gpu, int sysprof);
-void __msm_file_private_destroy(struct kref *kref);
+int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof);
+void __msm_context_destroy(struct kref *kref);
 
-static inline void msm_file_private_put(struct msm_file_private *ctx)
+static inline void msm_context_put(struct msm_context *ctx)
 {
-	kref_put(&ctx->ref, __msm_file_private_destroy);
+	kref_put(&ctx->ref, __msm_context_destroy);
 }
 
-static inline struct msm_file_private *msm_file_private_get(
-	struct msm_file_private *ctx)
+static inline struct msm_context *msm_context_get(
+	struct msm_context *ctx)
 {
 	kref_get(&ctx->ref);
 	return ctx;
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 7fed1de63b5d..1acc0fe36353 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -7,8 +7,7 @@
 
 #include "msm_gpu.h"
 
-int msm_file_private_set_sysprof(struct msm_file_private *ctx,
-				 struct msm_gpu *gpu, int sysprof)
+int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof)
 {
 	/*
 	 * Since pm_runtime and sysprof_active are both refcounts, we
@@ -46,10 +45,10 @@ int msm_file_private_set_sysprof(struct msm_file_private *ctx,
 	return 0;
 }
 
-void __msm_file_private_destroy(struct kref *kref)
+void __msm_context_destroy(struct kref *kref)
 {
-	struct msm_file_private *ctx = container_of(kref,
-		struct msm_file_private, ref);
+	struct msm_context *ctx = container_of(kref,
+		struct msm_context, ref);
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(ctx->entities); i++) {
@@ -73,12 +72,12 @@ void msm_submitqueue_destroy(struct kref *kref)
 
 	idr_destroy(&queue->fence_idr);
 
-	msm_file_private_put(queue->ctx);
+	msm_context_put(queue->ctx);
 
 	kfree(queue);
 }
 
-struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
+struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,
 		u32 id)
 {
 	struct msm_gpu_submitqueue *entry;
@@ -101,7 +100,7 @@ struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
 	return NULL;
 }
 
-void msm_submitqueue_close(struct msm_file_private *ctx)
+void msm_submitqueue_close(struct msm_context *ctx)
 {
 	struct msm_gpu_submitqueue *entry, *tmp;
 
@@ -119,7 +118,7 @@ void msm_submitqueue_close(struct msm_file_private *ctx)
 }
 
 static struct drm_sched_entity *
-get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
+get_sched_entity(struct msm_context *ctx, struct msm_ringbuffer *ring,
 		 unsigned ring_nr, enum drm_sched_priority sched_prio)
 {
 	static DEFINE_MUTEX(entity_lock);
@@ -155,7 +154,7 @@ get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
 	return ctx->entities[idx];
 }
 
-int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx,
 		u32 prio, u32 flags, u32 *id)
 {
 	struct msm_drm_private *priv = drm->dev_private;
@@ -200,7 +199,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
 
 	write_lock(&ctx->queuelock);
 
-	queue->ctx = msm_file_private_get(ctx);
+	queue->ctx = msm_context_get(ctx);
 	queue->id = ctx->queueid++;
 
 	if (id)
@@ -221,7 +220,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
  * Create the default submit-queue (id==0), used for backwards compatibility
  * for userspace that pre-dates the introduction of submitqueues.
  */
-int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx)
+int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx)
 {
 	struct msm_drm_private *priv = drm->dev_private;
 	int default_prio, max_priority;
@@ -261,7 +260,7 @@ static int msm_submitqueue_query_faults(struct msm_gpu_submitqueue *queue,
 	return ret ? -EFAULT : 0;
 }
 
-int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx,
 		struct drm_msm_submitqueue_query *args)
 {
 	struct msm_gpu_submitqueue *queue;
@@ -282,7 +281,7 @@ int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
 	return ret;
 }
 
-int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id)
+int msm_submitqueue_remove(struct msm_context *ctx, u32 id)
 {
 	struct msm_gpu_submitqueue *entry;

From patchwork Thu Jun 5 18:28:48 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894511
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Dmitry Baryshkov, Rob Clark, Rob Clark,
    Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov,
    Marijn Suijten, David Airlie, Simona Vetter,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 03/40] drm/msm: Improve msm_context comments
Date: Thu, 5 Jun 2025 11:28:48 -0700
Message-ID: <20250605183111.163594-4-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

Just some tidying up.

Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gpu.h | 44 +++++++++++++++++++++++------------
 1 file changed, 29 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 957d6fb3469d..c699ce0c557b 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -348,25 +348,39 @@ struct msm_gpu_perfcntr {
 
 /**
  * struct msm_context - per-drm_file context
- *
- * @queuelock:    synchronizes access to submitqueues list
- * @submitqueues: list of &msm_gpu_submitqueue created by userspace
- * @queueid:      counter incremented each time a submitqueue is created,
- *                used to assign &msm_gpu_submitqueue.id
- * @aspace:       the per-process GPU address-space
- * @ref:          reference count
- * @seqno:        unique per process seqno
  */
 struct msm_context {
+	/** @queuelock: synchronizes access to submitqueues list */
 	rwlock_t queuelock;
+
+	/** @submitqueues: list of &msm_gpu_submitqueue created by userspace */
 	struct list_head submitqueues;
+
+	/**
+	 * @queueid:
+	 *
+	 * Counter incremented each time a submitqueue is created, used to
+	 * assign &msm_gpu_submitqueue.id
+	 */
 	int queueid;
+
+	/** @aspace: the per-process GPU address-space */
 	struct msm_gem_address_space *aspace;
+
+	/** @kref: the reference count */
 	struct kref ref;
+
+	/**
+	 * @seqno:
+	 *
+	 * A unique per-process sequence number.  Used to detect context
+	 * switches, without relying on keeping a, potentially dangling,
+	 * pointer to the previous context.
+	 */
 	int seqno;
 
 	/**
-	 * sysprof:
+	 * @sysprof:
 	 *
 	 * The value of MSM_PARAM_SYSPROF set by userspace.  This is
 	 * intended to be used by system profiling tools like Mesa's
@@ -384,21 +398,21 @@ struct msm_context {
 	int sysprof;
 
 	/**
-	 * comm: Overridden task comm, see MSM_PARAM_COMM
+	 * @comm: Overridden task comm, see MSM_PARAM_COMM
 	 *
 	 * Accessed under msm_gpu::lock
 	 */
 	char *comm;
 
 	/**
-	 * cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE
+	 * @cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE
 	 *
 	 * Accessed under msm_gpu::lock
 	 */
 	char *cmdline;
 
 	/**
-	 * elapsed:
+	 * @elapsed:
 	 *
 	 * The total (cumulative) elapsed time GPU was busy with rendering
 	 * from this context in ns.
@@ -406,7 +420,7 @@ struct msm_context {
 	uint64_t elapsed_ns;
 
 	/**
-	 * cycles:
+	 * @cycles:
 	 *
 	 * The total (cumulative) GPU cycles elapsed attributed to this
 	 * context.
@@ -414,7 +428,7 @@ struct msm_context {
 	uint64_t cycles;
 
 	/**
-	 * entities:
+	 * @entities:
 	 *
 	 * Table of per-priority-level sched entities used by submitqueues
 	 * associated with this &drm_file.  Because some userspace apps
@@ -427,7 +441,7 @@ struct msm_context {
 	struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS];
 
 	/**
-	 * ctx_mem:
+	 * @ctx_mem:
 	 *
 	 * Total amount of memory of GEM buffers with handles attached for
 	 * this context.
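
To make the @seqno comment concrete (an illustrative sketch, not code
from this series): comparing sequence numbers rather than context
pointers means a freed-and-reallocated context cannot false-match.
The last_ctx_seqno parameter below is a hypothetical stand-in for
wherever the driver records the last value it saw:

/* Sketch: detect a context switch without touching the old context. */
static bool context_switched(int *last_ctx_seqno, struct msm_context *ctx)
{
	if (ctx->seqno == *last_ctx_seqno)
		return false;		/* same context as the last submit */

	*last_ctx_seqno = ctx->seqno;
	return true;			/* e.g. trigger a pagetable switch */
}
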

From patchwork Thu Jun 5 18:28:49 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894509
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Dmitry Baryshkov, Rob Clark, Rob Clark,
    Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov,
    Marijn Suijten, David Airlie, Simona Vetter, Jessica Zhang,
    Arnd Bergmann, Jun Nie, Jonathan Marek, Krzysztof Kozlowski,
    Eugene Lepshy, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 04/40] drm/msm: Rename msm_gem_address_space -> msm_gem_vm
Date: Thu, 5 Jun 2025 11:28:49 -0700
Message-ID: <20250605183111.163594-5-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

Re-aligning naming to better match drm_gpuvm terminology will make
things less confusing at the end of the drm_gpuvm conversion.

This is just rename churn, no functional change.

Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c           | 18 ++--
 drivers/gpu/drm/msm/adreno/a3xx_gpu.c           |  4 +-
 drivers/gpu/drm/msm/adreno/a4xx_gpu.c           |  4 +-
 drivers/gpu/drm/msm/adreno/a5xx_debugfs.c       |  4 +-
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c           | 22 ++---
 drivers/gpu/drm/msm/adreno/a5xx_power.c         |  2 +-
 drivers/gpu/drm/msm/adreno/a5xx_preempt.c       | 10 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c           | 26 +++---
 drivers/gpu/drm/msm/adreno/a6xx_gmu.h           |  2 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c           | 45 +++++----
 drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c     |  6 +-
 drivers/gpu/drm/msm/adreno/a6xx_preempt.c       | 10 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c         | 47 +++++-----
 drivers/gpu/drm/msm/adreno/adreno_gpu.h         | 18 ++--
 .../gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 14 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c     | 18 ++--
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h     |  2 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c         | 18 ++--
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c       | 14 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h       |  4 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c       |  6 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c        | 24 ++---
 drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c      | 12 +--
 drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c       |  4 +-
 drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c        | 18 ++--
 drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c      | 12 +--
 drivers/gpu/drm/msm/dsi/dsi_host.c              | 14 +--
 drivers/gpu/drm/msm/msm_drv.c                   |  8 +-
 drivers/gpu/drm/msm/msm_drv.h                   | 10 +-
 drivers/gpu/drm/msm/msm_fb.c                    | 10 +-
 drivers/gpu/drm/msm/msm_fbdev.c                 |  2 +-
 drivers/gpu/drm/msm/msm_gem.c                   | 74 +++++++--------
 drivers/gpu/drm/msm/msm_gem.h                   | 34 +++----
 drivers/gpu/drm/msm/msm_gem_submit.c            |  6 +-
 drivers/gpu/drm/msm/msm_gem_vma.c               | 93 +++++++++----------
 drivers/gpu/drm/msm/msm_gpu.c                   | 48 +++++-----
 drivers/gpu/drm/msm/msm_gpu.h                   | 16 ++--
 drivers/gpu/drm/msm/msm_kms.c                   | 16 ++--
 drivers/gpu/drm/msm/msm_kms.h                   |  2 +-
 drivers/gpu/drm/msm/msm_ringbuffer.c            |  4 +-
 drivers/gpu/drm/msm/msm_submitqueue.c           |  2 +-
 41 files changed, 349 insertions(+), 354 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 379a3d346c30..5eb063ed0b46 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu)
 	uint32_t *ptr, len;
 	int i, ret;
 
-	a2xx_gpummu_params(gpu->aspace->mmu, &pt_base, &tran_error);
+	a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error);
 
 	DBG("%s", gpu->name);
 
@@ -466,19 +466,19 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu)
 	return state;
 }
 
-static struct msm_gem_address_space *
-a2xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
+static struct msm_gem_vm *
+a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 {
 	struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu);
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
-	aspace = msm_gem_address_space_create(mmu, "gpu", SZ_16M,
+	vm = msm_gem_vm_create(mmu, "gpu", SZ_16M,
 		0xfff * SZ_64K);
 
-	if (IS_ERR(aspace) && !IS_ERR(mmu))
+	if (IS_ERR(vm) && !IS_ERR(mmu))
 		mmu->funcs->destroy(mmu);
 
-	return aspace;
+	return vm;
 }
 
 static u32 a2xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
@@ -504,7 +504,7 @@ static const struct adreno_gpu_funcs funcs = {
 #endif
 		.gpu_state_get = a2xx_gpu_state_get,
 		.gpu_state_put = adreno_gpu_state_put,
-		.create_address_space = a2xx_create_address_space,
+		.create_vm = a2xx_create_vm,
 		.get_rptr = a2xx_get_rptr,
 	},
 };
@@ -551,7 +551,7 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
 	else
 		adreno_gpu->registers = a220_registers;
 
-	if (!gpu->aspace) {
+	if (!gpu->vm) {
 		dev_err(dev->dev, "No memory protection without MMU\n");
 		if (!allow_vram_carveout) {
 			ret = -ENXIO;
diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
index b6df115bb567..434e6ededf83 100644
--- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
@@ -526,7 +526,7 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_busy = a3xx_gpu_busy,
 		.gpu_state_get = a3xx_gpu_state_get,
 		.gpu_state_put = adreno_gpu_state_put,
-		.create_address_space = adreno_create_address_space,
+		.create_vm = adreno_create_vm,
 		.get_rptr = a3xx_get_rptr,
 	},
 };
@@ -581,7 +581,7 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
 		goto fail;
 	}
 
-	if (!gpu->aspace) {
+	if (!gpu->vm) {
 		/* TODO we think it is possible to configure the GPU to
 		 * restrict access to VRAM carveout. But the required
 		 * registers are unknown. For now just bail out and
diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
index f1b18a6663f7..2c75debcfd84 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -645,7 +645,7 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_busy = a4xx_gpu_busy,
 		.gpu_state_get = a4xx_gpu_state_get,
 		.gpu_state_put = adreno_gpu_state_put,
-		.create_address_space = adreno_create_address_space,
+		.create_vm = adreno_create_vm,
 		.get_rptr = a4xx_get_rptr,
 	},
 	.get_timestamp = a4xx_get_timestamp,
@@ -695,7 +695,7 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull;
 
-	if (!gpu->aspace) {
+	if (!gpu->vm) {
 		/* TODO we think it is possible to configure the GPU to
 		 * restrict access to VRAM carveout. But the required
 		 * registers are unknown. For now just bail out and
For now just bail out and diff --git a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c index 169b8fe688f8..625a4e787d8f 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c @@ -116,13 +116,13 @@ reset_set(void *data, u64 val) adreno_gpu->fw[ADRENO_FW_PFP] = NULL; if (a5xx_gpu->pm4_bo) { - msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->pm4_bo); a5xx_gpu->pm4_bo = NULL; } if (a5xx_gpu->pfp_bo) { - msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->pfp_bo); a5xx_gpu->pfp_bo = NULL; } diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index 60aef0796236..dc31bc0afca4 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -622,7 +622,7 @@ static int a5xx_ucode_load(struct msm_gpu *gpu) a5xx_gpu->shadow = msm_gem_kernel_new(gpu->dev, sizeof(u32) * gpu->nr_rings, MSM_BO_WC | MSM_BO_MAP_PRIV, - gpu->aspace, &a5xx_gpu->shadow_bo, + gpu->vm, &a5xx_gpu->shadow_bo, &a5xx_gpu->shadow_iova); if (IS_ERR(a5xx_gpu->shadow)) @@ -1042,22 +1042,22 @@ static void a5xx_destroy(struct msm_gpu *gpu) a5xx_preempt_fini(gpu); if (a5xx_gpu->pm4_bo) { - msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->pm4_bo); } if (a5xx_gpu->pfp_bo) { - msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->pfp_bo); } if (a5xx_gpu->gpmu_bo) { - msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->gpmu_bo); } if (a5xx_gpu->shadow_bo) { - msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->shadow_bo); } @@ -1457,7 +1457,7 @@ static int a5xx_crashdumper_init(struct msm_gpu *gpu, struct a5xx_crashdumper *dumper) { dumper->ptr = msm_gem_kernel_new(gpu->dev, - SZ_1M, MSM_BO_WC, gpu->aspace, + SZ_1M, MSM_BO_WC, gpu->vm, &dumper->bo, &dumper->iova); if (!IS_ERR(dumper->ptr)) @@ -1557,7 +1557,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu, if (a5xx_crashdumper_run(gpu, &dumper)) { kfree(a5xx_state->hlsqregs); - msm_gem_kernel_put(dumper.bo, gpu->aspace); + msm_gem_kernel_put(dumper.bo, gpu->vm); return; } @@ -1565,7 +1565,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu, memcpy(a5xx_state->hlsqregs, dumper.ptr + (256 * SZ_1K), count * sizeof(u32)); - msm_gem_kernel_put(dumper.bo, gpu->aspace); + msm_gem_kernel_put(dumper.bo, gpu->vm); } static struct msm_gpu_state *a5xx_gpu_state_get(struct msm_gpu *gpu) @@ -1713,7 +1713,7 @@ static const struct adreno_gpu_funcs funcs = { .gpu_busy = a5xx_gpu_busy, .gpu_state_get = a5xx_gpu_state_get, .gpu_state_put = a5xx_gpu_state_put, - .create_address_space = adreno_create_address_space, + .create_vm = adreno_create_vm, .get_rptr = a5xx_get_rptr, }, .get_timestamp = a5xx_get_timestamp, @@ -1786,8 +1786,8 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev) return ERR_PTR(ret); } - if (gpu->aspace) - msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu, a5xx_fault_handler); + if (gpu->vm) + msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler); /* Set up the preemption specific bits and pieces for each ringbuffer */ 
a5xx_preempt_init(gpu); diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c b/drivers/gpu/drm/msm/adreno/a5xx_power.c index 6b91e0bd1514..d6da7351cfbb 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_power.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c @@ -363,7 +363,7 @@ void a5xx_gpmu_ucode_init(struct msm_gpu *gpu) bosize = (cmds_size + (cmds_size / TYPE4_MAX_PAYLOAD) + 1) << 2; ptr = msm_gem_kernel_new(drm, bosize, - MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace, + MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm, &a5xx_gpu->gpmu_bo, &a5xx_gpu->gpmu_iova); if (IS_ERR(ptr)) return; diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c index 36f72c43eae8..e50221d4e6ee 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c @@ -254,7 +254,7 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu, ptr = msm_gem_kernel_new(gpu->dev, A5XX_PREEMPT_RECORD_SIZE + A5XX_PREEMPT_COUNTER_SIZE, - MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova); + MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova); if (IS_ERR(ptr)) return PTR_ERR(ptr); @@ -262,9 +262,9 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu, /* The buffer to store counters needs to be unprivileged */ counters = msm_gem_kernel_new(gpu->dev, A5XX_PREEMPT_COUNTER_SIZE, - MSM_BO_WC, gpu->aspace, &counters_bo, &counters_iova); + MSM_BO_WC, gpu->vm, &counters_bo, &counters_iova); if (IS_ERR(counters)) { - msm_gem_kernel_put(bo, gpu->aspace); + msm_gem_kernel_put(bo, gpu->vm); return PTR_ERR(counters); } @@ -295,8 +295,8 @@ void a5xx_preempt_fini(struct msm_gpu *gpu) int i; for (i = 0; i < gpu->nr_rings; i++) { - msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->aspace); - msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->aspace); + msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->vm); + msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->vm); } } diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index 38c0f8ef85c3..848acc382b7d 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -1259,15 +1259,15 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu) static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu) { - msm_gem_kernel_put(gmu->hfi.obj, gmu->aspace); - msm_gem_kernel_put(gmu->debug.obj, gmu->aspace); - msm_gem_kernel_put(gmu->icache.obj, gmu->aspace); - msm_gem_kernel_put(gmu->dcache.obj, gmu->aspace); - msm_gem_kernel_put(gmu->dummy.obj, gmu->aspace); - msm_gem_kernel_put(gmu->log.obj, gmu->aspace); - - gmu->aspace->mmu->funcs->detach(gmu->aspace->mmu); - msm_gem_address_space_put(gmu->aspace); + msm_gem_kernel_put(gmu->hfi.obj, gmu->vm); + msm_gem_kernel_put(gmu->debug.obj, gmu->vm); + msm_gem_kernel_put(gmu->icache.obj, gmu->vm); + msm_gem_kernel_put(gmu->dcache.obj, gmu->vm); + msm_gem_kernel_put(gmu->dummy.obj, gmu->vm); + msm_gem_kernel_put(gmu->log.obj, gmu->vm); + + gmu->vm->mmu->funcs->detach(gmu->vm->mmu); + msm_gem_vm_put(gmu->vm); } static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo, @@ -1296,7 +1296,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo, if (IS_ERR(bo->obj)) return PTR_ERR(bo->obj); - ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->aspace, &bo->iova, + ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->vm, &bo->iova, range_start, range_end); if (ret) { drm_gem_object_put(bo->obj); @@ -1321,9 +1321,9 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu) if 
(IS_ERR(mmu)) return PTR_ERR(mmu); - gmu->aspace = msm_gem_address_space_create(mmu, "gmu", 0x0, 0x80000000); - if (IS_ERR(gmu->aspace)) - return PTR_ERR(gmu->aspace); + gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000); + if (IS_ERR(gmu->vm)) + return PTR_ERR(gmu->vm); return 0; } diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h index 39fb8c774a79..cceda7d9c33a 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h @@ -62,7 +62,7 @@ struct a6xx_gmu { /* For serializing communication with the GMU: */ struct mutex lock; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; void __iomem *mmio; void __iomem *rscc; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 620a26638535..d05c00624f74 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, if (ctx->seqno == ring->cur_ctx_seqno) return; - if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid)) + if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid)) return; if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) { @@ -957,7 +957,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu) msm_gem_object_set_name(a6xx_gpu->sqe_bo, "sqefw"); if (!a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo)) { - msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace); + msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm); drm_gem_object_put(a6xx_gpu->sqe_bo); a6xx_gpu->sqe_bo = NULL; @@ -974,7 +974,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu) a6xx_gpu->shadow = msm_gem_kernel_new(gpu->dev, sizeof(u32) * gpu->nr_rings, MSM_BO_WC | MSM_BO_MAP_PRIV, - gpu->aspace, &a6xx_gpu->shadow_bo, + gpu->vm, &a6xx_gpu->shadow_bo, &a6xx_gpu->shadow_iova); if (IS_ERR(a6xx_gpu->shadow)) @@ -985,7 +985,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu) a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV, - gpu->aspace, &a6xx_gpu->pwrup_reglist_bo, + gpu->vm, &a6xx_gpu->pwrup_reglist_bo, &a6xx_gpu->pwrup_reglist_iova); if (IS_ERR(a6xx_gpu->pwrup_reglist_ptr)) @@ -2198,12 +2198,12 @@ static void a6xx_destroy(struct msm_gpu *gpu) struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); if (a6xx_gpu->sqe_bo) { - msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace); + msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm); drm_gem_object_put(a6xx_gpu->sqe_bo); } if (a6xx_gpu->shadow_bo) { - msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->aspace); + msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->vm); drm_gem_object_put(a6xx_gpu->shadow_bo); } @@ -2243,8 +2243,8 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp, mutex_unlock(&a6xx_gpu->gmu.lock); } -static struct msm_gem_address_space * -a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev) +static struct msm_gem_vm * +a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); @@ -2258,22 +2258,22 @@ a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev) !device_iommu_capable(&pdev->dev, IOMMU_CAP_CACHE_COHERENCY)) quirks |= IO_PGTABLE_QUIRK_ARM_OUTER_WBWA; - return adreno_iommu_create_address_space(gpu, pdev, quirks); + return adreno_iommu_create_vm(gpu, pdev, quirks); } -static struct msm_gem_address_space * -a6xx_create_private_address_space(struct msm_gpu 
*gpu) +static struct msm_gem_vm * +a6xx_create_private_vm(struct msm_gpu *gpu) { struct msm_mmu *mmu; - mmu = msm_iommu_pagetable_create(gpu->aspace->mmu); + mmu = msm_iommu_pagetable_create(gpu->vm->mmu); if (IS_ERR(mmu)) return ERR_CAST(mmu); - return msm_gem_address_space_create(mmu, + return msm_gem_vm_create(mmu, "gpu", ADRENO_VM_START, - adreno_private_address_space_size(gpu)); + adreno_private_vm_size(gpu)); } static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring) @@ -2390,8 +2390,8 @@ static const struct adreno_gpu_funcs funcs = { .gpu_state_get = a6xx_gpu_state_get, .gpu_state_put = a6xx_gpu_state_put, #endif - .create_address_space = a6xx_create_address_space, - .create_private_address_space = a6xx_create_private_address_space, + .create_vm = a6xx_create_vm, + .create_private_vm = a6xx_create_private_vm, .get_rptr = a6xx_get_rptr, .progress = a6xx_progress, }, @@ -2419,8 +2419,8 @@ static const struct adreno_gpu_funcs funcs_gmuwrapper = { .gpu_state_get = a6xx_gpu_state_get, .gpu_state_put = a6xx_gpu_state_put, #endif - .create_address_space = a6xx_create_address_space, - .create_private_address_space = a6xx_create_private_address_space, + .create_vm = a6xx_create_vm, + .create_private_vm = a6xx_create_private_vm, .get_rptr = a6xx_get_rptr, .progress = a6xx_progress, }, @@ -2450,8 +2450,8 @@ static const struct adreno_gpu_funcs funcs_a7xx = { .gpu_state_get = a6xx_gpu_state_get, .gpu_state_put = a6xx_gpu_state_put, #endif - .create_address_space = a6xx_create_address_space, - .create_private_address_space = a6xx_create_private_address_space, + .create_vm = a6xx_create_vm, + .create_private_vm = a6xx_create_private_vm, .get_rptr = a6xx_get_rptr, .progress = a6xx_progress, }, @@ -2547,9 +2547,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0x1fffffffff000ull; - if (gpu->aspace) - msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu, - a6xx_fault_handler); + if (gpu->vm) + msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler); a6xx_calc_ubwc_config(adreno_gpu); /* Set up the preemption specific bits and pieces for each ringbuffer */ diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c index 341a72a67401..ff06bb75b76d 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c @@ -132,7 +132,7 @@ static int a6xx_crashdumper_init(struct msm_gpu *gpu, struct a6xx_crashdumper *dumper) { dumper->ptr = msm_gem_kernel_new(gpu->dev, - SZ_1M, MSM_BO_WC, gpu->aspace, + SZ_1M, MSM_BO_WC, gpu->vm, &dumper->bo, &dumper->iova); if (!IS_ERR(dumper->ptr)) @@ -1619,7 +1619,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu) a7xx_get_clusters(gpu, a6xx_state, dumper); a7xx_get_dbgahb_clusters(gpu, a6xx_state, dumper); - msm_gem_kernel_put(dumper->bo, gpu->aspace); + msm_gem_kernel_put(dumper->bo, gpu->vm); } a7xx_get_post_crashdumper_registers(gpu, a6xx_state); @@ -1631,7 +1631,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu) a6xx_get_clusters(gpu, a6xx_state, dumper); a6xx_get_dbgahb_clusters(gpu, a6xx_state, dumper); - msm_gem_kernel_put(dumper->bo, gpu->aspace); + msm_gem_kernel_put(dumper->bo, gpu->vm); } } diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c index 9b5e27d2373c..b14a7c630bd0 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c @@ -343,7 +343,7 @@ static int preempt_init_ring(struct 
a6xx_gpu *a6xx_gpu, ptr = msm_gem_kernel_new(gpu->dev, PREEMPT_RECORD_SIZE(adreno_gpu), - MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova); + MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova); if (IS_ERR(ptr)) return PTR_ERR(ptr); @@ -361,7 +361,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu, ptr = msm_gem_kernel_new(gpu->dev, PREEMPT_SMMU_INFO_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY, - gpu->aspace, &bo, &iova); + gpu->vm, &bo, &iova); if (IS_ERR(ptr)) return PTR_ERR(ptr); @@ -376,7 +376,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu, struct a7xx_cp_smmu_info *smmu_info_ptr = ptr; - msm_iommu_pagetable_params(gpu->aspace->mmu, &ttbr, &asid); + msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid); smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC; smmu_info_ptr->ttbr0 = ttbr; @@ -404,7 +404,7 @@ void a6xx_preempt_fini(struct msm_gpu *gpu) int i; for (i = 0; i < gpu->nr_rings; i++) - msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->aspace); + msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->vm); } void a6xx_preempt_init(struct msm_gpu *gpu) @@ -430,7 +430,7 @@ void a6xx_preempt_init(struct msm_gpu *gpu) a6xx_gpu->preempt_postamble_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY, - gpu->aspace, &a6xx_gpu->preempt_postamble_bo, + gpu->vm, &a6xx_gpu->preempt_postamble_bo, &a6xx_gpu->preempt_postamble_iova); preempt_prepare_postamble(a6xx_gpu); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 93fe26009511..b01d9efb8663 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid) return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid); } -struct msm_gem_address_space * -adreno_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev) +struct msm_gem_vm * +adreno_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev) { - return adreno_iommu_create_address_space(gpu, pdev, 0); + return adreno_iommu_create_vm(gpu, pdev, 0); } -struct msm_gem_address_space * -adreno_iommu_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev, - unsigned long quirks) +struct msm_gem_vm * +adreno_iommu_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev, + unsigned long quirks) { struct iommu_domain_geometry *geometry; struct msm_mmu *mmu; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; u64 start, size; mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks); @@ -224,16 +224,15 @@ adreno_iommu_create_address_space(struct msm_gpu *gpu, start = max_t(u64, SZ_16M, geometry->aperture_start); size = geometry->aperture_end - start + 1; - aspace = msm_gem_address_space_create(mmu, "gpu", - start & GENMASK_ULL(48, 0), size); + vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size); - if (IS_ERR(aspace) && !IS_ERR(mmu)) + if (IS_ERR(vm) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); - return aspace; + return vm; } -u64 adreno_private_address_space_size(struct msm_gpu *gpu) +u64 adreno_private_vm_size(struct msm_gpu *gpu) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(&gpu->pdev->dev); @@ -274,7 +273,7 @@ void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu) !READ_ONCE(gpu->crashstate)) { adreno_gpu->stall_enabled = true; - gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, true); + 
gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, true); } spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, flags); } @@ -302,7 +301,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, if (adreno_gpu->stall_enabled) { adreno_gpu->stall_enabled = false; - gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, false); + gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, false); } adreno_gpu->stall_reenable_time = ktime_add_ms(ktime_get(), 500); spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, irq_flags); @@ -312,7 +311,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, * it now. */ if (!do_devcoredump) { - gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu); + gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); } /* @@ -406,8 +405,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, *value = 0; return 0; case MSM_PARAM_FAULTS: - if (ctx->aspace) - *value = gpu->global_faults + ctx->aspace->faults; + if (ctx->vm) + *value = gpu->global_faults + ctx->vm->faults; else *value = gpu->global_faults; return 0; @@ -415,14 +414,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, *value = gpu->suspend_count; return 0; case MSM_PARAM_VA_START: - if (ctx->aspace == gpu->aspace) + if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->aspace->va_start; + *value = ctx->vm->va_start; return 0; case MSM_PARAM_VA_SIZE: - if (ctx->aspace == gpu->aspace) + if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->aspace->va_size; + *value = ctx->vm->va_size; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value = adreno_gpu->ubwc_config.highest_bank_bit; @@ -612,7 +611,7 @@ struct drm_gem_object *adreno_fw_create_bo(struct msm_gpu *gpu, void *ptr; ptr = msm_gem_kernel_new(gpu->dev, fw->size - 4, - MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace, &bo, iova); + MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm, &bo, iova); if (IS_ERR(ptr)) return ERR_CAST(ptr); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index fed9516da365..258c5c6dde2e 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -602,7 +602,7 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu) /* Put vm_start above 32b to catch issues with not setting xyz_BASE_HI */ #define ADRENO_VM_START 0x100000000ULL -u64 adreno_private_address_space_size(struct msm_gpu *gpu); +u64 adreno_private_vm_size(struct msm_gpu *gpu); int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, uint32_t param, uint64_t *value, uint32_t *len); int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx, @@ -645,14 +645,14 @@ void adreno_show_object(struct drm_printer *p, void **ptr, int len, * Common helper function to initialize the default address space for arm-smmu * attached targets */ -struct msm_gem_address_space * -adreno_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev); - -struct msm_gem_address_space * -adreno_iommu_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev, - unsigned long quirks); +struct msm_gem_vm * +adreno_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev); + +struct msm_gem_vm * +adreno_iommu_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev, + unsigned long quirks); int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, struct adreno_smmu_fault_info *info, const 
char *block, diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c index 849fea580a4c..32e208ee946d 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c @@ -566,7 +566,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc struct drm_writeback_job *job) { const struct msm_format *format; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct dpu_hw_wb_cfg *wb_cfg; int ret; struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); @@ -576,13 +576,13 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc wb_enc->wb_job = job; wb_enc->wb_conn = job->connector; - aspace = phys_enc->dpu_kms->base.aspace; + vm = phys_enc->dpu_kms->base.vm; wb_cfg = &wb_enc->wb_cfg; memset(wb_cfg, 0, sizeof(struct dpu_hw_wb_cfg)); - ret = msm_framebuffer_prepare(job->fb, aspace, false); + ret = msm_framebuffer_prepare(job->fb, vm, false); if (ret) { DPU_ERROR("prep fb failed, %d\n", ret); return; @@ -596,7 +596,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc return; } - dpu_format_populate_addrs(aspace, job->fb, &wb_cfg->dest); + dpu_format_populate_addrs(vm, job->fb, &wb_cfg->dest); wb_cfg->dest.width = job->fb->width; wb_cfg->dest.height = job->fb->height; @@ -619,14 +619,14 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc struct drm_writeback_job *job) { struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; if (!job->fb) return; - aspace = phys_enc->dpu_kms->base.aspace; + vm = phys_enc->dpu_kms->base.vm; - msm_framebuffer_cleanup(job->fb, aspace, false); + msm_framebuffer_cleanup(job->fb, vm, false); wb_enc->wb_job = NULL; wb_enc->wb_conn = NULL; } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c index 59c9427da7dd..d115b79af771 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c @@ -274,7 +274,7 @@ int dpu_format_populate_plane_sizes( return _dpu_format_populate_plane_sizes_linear(fmt, fb, layout); } -static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace, +static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -282,7 +282,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace uint32_t base_addr = 0; bool meta; - base_addr = msm_framebuffer_iova(fb, aspace, 0); + base_addr = msm_framebuffer_iova(fb, vm, 0); fmt = msm_framebuffer_format(fb); meta = MSM_FORMAT_IS_UBWC(fmt); @@ -355,7 +355,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace } } -static void _dpu_format_populate_addrs_linear(struct msm_gem_address_space *aspace, +static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -363,17 +363,17 @@ static void _dpu_format_populate_addrs_linear(struct msm_gem_address_space *aspa /* Populate addresses for simple formats here */ for (i = 0; i < layout->num_planes; ++i) - layout->plane_addr[i] = msm_framebuffer_iova(fb, aspace, i); -} + layout->plane_addr[i] = msm_framebuffer_iova(fb, vm, i); + } /** * dpu_format_populate_addrs - populate buffer addresses based on * mmu, fb, and format 
found in the fb - * @aspace: address space pointer + * @vm: address space pointer * @fb: framebuffer pointer * @layout: format layout structure to populate */ -void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, +void dpu_format_populate_addrs(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -384,7 +384,7 @@ void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, /* Populate the addresses given the fb */ if (MSM_FORMAT_IS_UBWC(fmt) || MSM_FORMAT_IS_TILE(fmt)) - _dpu_format_populate_addrs_ubwc(aspace, fb, layout); + _dpu_format_populate_addrs_ubwc(vm, fb, layout); else - _dpu_format_populate_addrs_linear(aspace, fb, layout); + _dpu_format_populate_addrs_linear(vm, fb, layout); } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h index c6145d43aa3f..989f3e13c497 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h @@ -31,7 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 *supported_formats, return false; } -void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, +void dpu_format_populate_addrs(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index 3305ad0623ca..bb5db6da636a 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -1095,26 +1095,26 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms) { struct msm_mmu *mmu; - if (!dpu_kms->base.aspace) + if (!dpu_kms->base.vm) return; - mmu = dpu_kms->base.aspace->mmu; + mmu = dpu_kms->base.vm->mmu; mmu->funcs->detach(mmu); - msm_gem_address_space_put(dpu_kms->base.aspace); + msm_gem_vm_put(dpu_kms->base.vm); - dpu_kms->base.aspace = NULL; + dpu_kms->base.vm = NULL; } static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; - aspace = msm_kms_init_aspace(dpu_kms->dev); - if (IS_ERR(aspace)) - return PTR_ERR(aspace); + vm = msm_kms_init_vm(dpu_kms->dev); + if (IS_ERR(vm)) + return PTR_ERR(vm); - dpu_kms->base.aspace = aspace; + dpu_kms->base.vm = vm; return 0; } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c index e03d6091f736..2640ab9e6e90 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c @@ -71,7 +71,7 @@ static const uint32_t qcom_compressed_supported_formats[] = { /* * struct dpu_plane - local dpu plane structure - * @aspace: address space pointer + * @vm: address space pointer * @csc_ptr: Points to dpu_csc_cfg structure to use for current * @catalog: Points to dpu catalog structure * @revalidate: force revalidation of all the plane properties @@ -654,8 +654,8 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane, DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", fb->base.id); - /* cache aspace */ - pstate->aspace = kms->base.aspace; + /* cache vm */ + pstate->vm = kms->base.vm; /* * TODO: Need to sort out the msm_framebuffer_prepare() call below so @@ -664,9 +664,9 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane, */ drm_gem_plane_helper_prepare_fb(plane, new_state); - if (pstate->aspace) { + if (pstate->vm) { ret = msm_framebuffer_prepare(new_state->fb, - pstate->aspace, pstate->needs_dirtyfb); + pstate->vm, pstate->needs_dirtyfb); if (ret) { DPU_ERROR("failed to prepare framebuffer\n"); 
return ret; @@ -689,7 +689,7 @@ static void dpu_plane_cleanup_fb(struct drm_plane *plane, DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", old_state->fb->base.id); - msm_framebuffer_cleanup(old_state->fb, old_pstate->aspace, + msm_framebuffer_cleanup(old_state->fb, old_pstate->vm, old_pstate->needs_dirtyfb); } @@ -1353,7 +1353,7 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane, pstate->needs_qos_remap |= (is_rt_pipe != pdpu->is_rt_pipe); pdpu->is_rt_pipe = is_rt_pipe; - dpu_format_populate_addrs(pstate->aspace, new_state->fb, &pstate->layout); + dpu_format_populate_addrs(pstate->vm, new_state->fb, &pstate->layout); DPU_DEBUG_PLANE(pdpu, "FB[%u] " DRM_RECT_FP_FMT "->crtc%u " DRM_RECT_FMT ", %p4cc ubwc %d\n", fb->base.id, DRM_RECT_FP_ARG(&state->src), diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h index acd5725175cd..3578f52048a5 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h @@ -17,7 +17,7 @@ /** * struct dpu_plane_state: Define dpu extension of drm plane state object * @base: base drm plane state object - * @aspace: pointer to address space for input/output buffers + * @vm: pointer to address space for input/output buffers * @pipe: software pipe description * @r_pipe: software pipe description of the second pipe * @pipe_cfg: software pipe configuration @@ -34,7 +34,7 @@ */ struct dpu_plane_state { struct drm_plane_state base; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct dpu_sw_pipe pipe; struct dpu_sw_pipe r_pipe; struct dpu_sw_pipe_cfg pipe_cfg; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c index b8610aa806ea..0133c0c01a0b 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c @@ -120,7 +120,7 @@ static void unref_cursor_worker(struct drm_flip_work *work, void *val) struct mdp4_kms *mdp4_kms = get_kms(&mdp4_crtc->base); struct msm_kms *kms = &mdp4_kms->base.base; - msm_gem_unpin_iova(val, kms->aspace); + msm_gem_unpin_iova(val, kms->vm); drm_gem_object_put(val); } @@ -369,7 +369,7 @@ static void update_cursor(struct drm_crtc *crtc) if (next_bo) { /* take a obj ref + iova ref when we start scanning out: */ drm_gem_object_get(next_bo); - msm_gem_get_and_pin_iova(next_bo, kms->aspace, &iova); + msm_gem_get_and_pin_iova(next_bo, kms->vm, &iova); /* enable cursor: */ mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_SIZE(dma), @@ -427,7 +427,7 @@ static int mdp4_crtc_cursor_set(struct drm_crtc *crtc, } if (cursor_bo) { - ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, &iova); + ret = msm_gem_get_and_pin_iova(cursor_bo, kms->vm, &iova); if (ret) goto fail; } else { diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c index c469e66cfc11..94fbc20b2fbd 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -120,15 +120,15 @@ static void mdp4_destroy(struct msm_kms *kms) { struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms)); struct device *dev = mdp4_kms->dev->dev; - struct msm_gem_address_space *aspace = kms->aspace; + struct msm_gem_vm *vm = kms->vm; if (mdp4_kms->blank_cursor_iova) - msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->aspace); + msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm); drm_gem_object_put(mdp4_kms->blank_cursor_bo); - if (aspace) { - aspace->mmu->funcs->detach(aspace->mmu); - msm_gem_address_space_put(aspace); + if (vm) { + 
vm->mmu->funcs->detach(vm->mmu); + msm_gem_vm_put(vm); } if (mdp4_kms->rpm_enabled) @@ -380,7 +380,7 @@ static int mdp4_kms_init(struct drm_device *dev) struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(priv->kms)); struct msm_kms *kms = NULL; struct msm_mmu *mmu; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; int ret; u32 major, minor; unsigned long max_clk; @@ -449,19 +449,19 @@ static int mdp4_kms_init(struct drm_device *dev) } else if (!mmu) { DRM_DEV_INFO(dev->dev, "no iommu, fallback to phys " "contig buffers for scanout\n"); - aspace = NULL; + vm = NULL; } else { - aspace = msm_gem_address_space_create(mmu, + vm = msm_gem_vm_create(mmu, "mdp4", 0x1000, 0x100000000 - 0x1000); - if (IS_ERR(aspace)) { + if (IS_ERR(vm)) { if (!IS_ERR(mmu)) mmu->funcs->destroy(mmu); - ret = PTR_ERR(aspace); + ret = PTR_ERR(vm); goto fail; } - kms->aspace = aspace; + kms->vm = vm; } ret = modeset_init(mdp4_kms); @@ -478,7 +478,7 @@ static int mdp4_kms_init(struct drm_device *dev) goto fail; } - ret = msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->aspace, + ret = msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->vm, &mdp4_kms->blank_cursor_iova); if (ret) { DRM_DEV_ERROR(dev->dev, "could not pin blank-cursor bo: %d\n", ret); diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c index 3fefb2088008..7743be6167f8 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c @@ -87,7 +87,7 @@ static int mdp4_plane_prepare_fb(struct drm_plane *plane, drm_gem_plane_helper_prepare_fb(plane, new_state); - return msm_framebuffer_prepare(new_state->fb, kms->aspace, false); + return msm_framebuffer_prepare(new_state->fb, kms->vm, false); } static void mdp4_plane_cleanup_fb(struct drm_plane *plane, @@ -102,7 +102,7 @@ static void mdp4_plane_cleanup_fb(struct drm_plane *plane, return; DBG("%s: cleanup: FB[%u]", mdp4_plane->name, fb->base.id); - msm_framebuffer_cleanup(fb, kms->aspace, false); + msm_framebuffer_cleanup(fb, kms->vm, false); } @@ -153,13 +153,13 @@ static void mdp4_plane_set_scanout(struct drm_plane *plane, MDP4_PIPE_SRC_STRIDE_B_P3(fb->pitches[3])); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP0_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 0)); + msm_framebuffer_iova(fb, kms->vm, 0)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP1_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 1)); + msm_framebuffer_iova(fb, kms->vm, 1)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP2_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 2)); + msm_framebuffer_iova(fb, kms->vm, 2)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP3_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 3)); + msm_framebuffer_iova(fb, kms->vm, 3)); } static void mdp4_write_csc_config(struct mdp4_kms *mdp4_kms, diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c index 0f653e62b4a0..298861f373b0 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c @@ -169,7 +169,7 @@ static void unref_cursor_worker(struct drm_flip_work *work, void *val) struct mdp5_kms *mdp5_kms = get_kms(&mdp5_crtc->base); struct msm_kms *kms = &mdp5_kms->base.base; - msm_gem_unpin_iova(val, kms->aspace); + msm_gem_unpin_iova(val, kms->vm); drm_gem_object_put(val); } @@ -993,7 +993,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, if (!cursor_bo) return -ENOENT; - ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, + ret = msm_gem_get_and_pin_iova(cursor_bo, 
kms->vm, &mdp5_crtc->cursor.iova); if (ret) { drm_gem_object_put(cursor_bo); diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c index 3fcca7a3d82e..9dca0385a42d 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c @@ -198,11 +198,11 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms); static void mdp5_kms_destroy(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); - struct msm_gem_address_space *aspace = kms->aspace; + struct msm_gem_vm *vm = kms->vm; - if (aspace) { - aspace->mmu->funcs->detach(aspace->mmu); - msm_gem_address_space_put(aspace); + if (vm) { + vm->mmu->funcs->detach(vm->mmu); + msm_gem_vm_put(vm); } mdp_kms_destroy(&mdp5_kms->base); @@ -500,7 +500,7 @@ static int mdp5_kms_init(struct drm_device *dev) struct mdp5_kms *mdp5_kms; struct mdp5_cfg *config; struct msm_kms *kms = priv->kms; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; int i, ret; ret = mdp5_init(to_platform_device(dev->dev), dev); @@ -534,13 +534,13 @@ static int mdp5_kms_init(struct drm_device *dev) } mdelay(16); - aspace = msm_kms_init_aspace(mdp5_kms->dev); - if (IS_ERR(aspace)) { - ret = PTR_ERR(aspace); + vm = msm_kms_init_vm(mdp5_kms->dev); + if (IS_ERR(vm)) { + ret = PTR_ERR(vm); goto fail; } - kms->aspace = aspace; + kms->vm = vm; pm_runtime_put_sync(&pdev->dev); diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c index bb1601921938..9f68a4747203 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c @@ -144,7 +144,7 @@ static int mdp5_plane_prepare_fb(struct drm_plane *plane, drm_gem_plane_helper_prepare_fb(plane, new_state); - return msm_framebuffer_prepare(new_state->fb, kms->aspace, needs_dirtyfb); + return msm_framebuffer_prepare(new_state->fb, kms->vm, needs_dirtyfb); } static void mdp5_plane_cleanup_fb(struct drm_plane *plane, @@ -159,7 +159,7 @@ static void mdp5_plane_cleanup_fb(struct drm_plane *plane, return; DBG("%s: cleanup: FB[%u]", plane->name, fb->base.id); - msm_framebuffer_cleanup(fb, kms->aspace, needed_dirtyfb); + msm_framebuffer_cleanup(fb, kms->vm, needed_dirtyfb); } static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state, @@ -478,13 +478,13 @@ static void set_scanout_locked(struct mdp5_kms *mdp5_kms, MDP5_PIPE_SRC_STRIDE_B_P3(fb->pitches[3])); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC0_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 0)); + msm_framebuffer_iova(fb, kms->vm, 0)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC1_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 1)); + msm_framebuffer_iova(fb, kms->vm, 1)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC2_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 2)); + msm_framebuffer_iova(fb, kms->vm, 2)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC3_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 3)); + msm_framebuffer_iova(fb, kms->vm, 3)); } /* Note: mdp5_plane->pipe_lock must be locked */ diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c index 4d75529c0e85..16335ebd21e4 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -143,7 +143,7 @@ struct msm_dsi_host { /* DSI 6G TX buffer*/ struct drm_gem_object *tx_gem_obj; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; /* DSI v2 TX buffer */ void *tx_buf; @@ -1146,10 +1146,10 @@ int dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_host, int 
size) uint64_t iova; u8 *data; - msm_host->aspace = msm_gem_address_space_get(priv->kms->aspace); + msm_host->vm = msm_gem_vm_get(priv->kms->vm); data = msm_gem_kernel_new(dev, size, MSM_BO_WC, - msm_host->aspace, + msm_host->vm, &msm_host->tx_gem_obj, &iova); if (IS_ERR(data)) { @@ -1193,10 +1193,10 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host) return; if (msm_host->tx_gem_obj) { - msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->aspace); - msm_gem_address_space_put(msm_host->aspace); + msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm); + msm_gem_vm_put(msm_host->vm); msm_host->tx_gem_obj = NULL; - msm_host->aspace = NULL; + msm_host->vm = NULL; } if (msm_host->tx_buf) @@ -1327,7 +1327,7 @@ int dsi_dma_base_get_6g(struct msm_dsi_host *msm_host, uint64_t *dma_base) return -EINVAL; return msm_gem_get_and_pin_iova(msm_host->tx_gem_obj, - priv->kms->aspace, dma_base); + priv->kms->vm, dma_base); } int dsi_dma_base_get_v2(struct msm_dsi_host *msm_host, uint64_t *dma_base) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 29ca24548c67..903abf3532e0 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -345,7 +345,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file) kref_init(&ctx->ref); msm_submitqueue_init(dev, ctx); - ctx->aspace = msm_gpu_create_private_address_space(priv->gpu, current); + ctx->vm = msm_gpu_create_private_vm(priv->gpu, current); file->driver_priv = ctx; ctx->seqno = atomic_inc_return(&ident); @@ -523,7 +523,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev, * Don't pin the memory here - just get an address so that userspace can * be productive */ - return msm_gem_get_iova(obj, ctx->aspace, iova); + return msm_gem_get_iova(obj, ctx->vm, iova); } static int msm_ioctl_gem_info_set_iova(struct drm_device *dev, @@ -537,13 +537,13 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev, return -EINVAL; /* Only supported if per-process address space is supported: */ - if (priv->gpu->aspace == ctx->aspace) + if (priv->gpu->vm == ctx->vm) return UERR(EOPNOTSUPP, dev, "requires per-process pgtables"); if (should_fail(&fail_gem_iova, obj->size)) return -ENOMEM; - return msm_gem_set_iova(obj, ctx->aspace, iova); + return msm_gem_set_iova(obj, ctx->vm, iova); } static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj, diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index a65077855201..0e675c9a7f83 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -48,7 +48,7 @@ struct msm_rd_state; struct msm_perf_state; struct msm_gem_submit; struct msm_fence_context; -struct msm_gem_address_space; +struct msm_gem_vm; struct msm_gem_vma; struct msm_disp_state; @@ -241,7 +241,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc); int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu); void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu); -struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev); +struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev); bool msm_use_mmu(struct drm_device *dev); int msm_ioctl_gem_submit(struct drm_device *dev, void *data, @@ -263,11 +263,11 @@ int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, bool needs_dirtyfb); + struct msm_gem_vm *vm, bool needs_dirtyfb); void 
msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, bool needed_dirtyfb); + struct msm_gem_vm *vm, bool needed_dirtyfb); uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, int plane); + struct msm_gem_vm *vm, int plane); struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane); const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb); struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev, diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c index 09268e416843..6df318b73534 100644 --- a/drivers/gpu/drm/msm/msm_fb.c +++ b/drivers/gpu/drm/msm/msm_fb.c @@ -76,7 +76,7 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m) /* prepare/pin all the fb's bo's for scanout. */ int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, bool needs_dirtyfb) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); @@ -88,7 +88,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, atomic_inc(&msm_fb->prepare_count); for (i = 0; i < n; i++) { - ret = msm_gem_get_and_pin_iova(fb->obj[i], aspace, &msm_fb->iova[i]); + ret = msm_gem_get_and_pin_iova(fb->obj[i], vm, &msm_fb->iova[i]); drm_dbg_state(fb->dev, "FB[%u]: iova[%d]: %08llx (%d)\n", fb->base.id, i, msm_fb->iova[i], ret); if (ret) @@ -99,7 +99,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, } void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, bool needed_dirtyfb) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); @@ -109,14 +109,14 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb, refcount_dec(&msm_fb->dirtyfb); for (i = 0; i < n; i++) - msm_gem_unpin_iova(fb->obj[i], aspace); + msm_gem_unpin_iova(fb->obj[i], vm); if (!atomic_dec_return(&msm_fb->prepare_count)) memset(msm_fb->iova, 0, sizeof(msm_fb->iova)); } uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, int plane) + struct msm_gem_vm *vm, int plane) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); return msm_fb->iova[plane] + fb->offsets[plane]; diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c index c62249b1ab3d..b5969374d53f 100644 --- a/drivers/gpu/drm/msm/msm_fbdev.c +++ b/drivers/gpu/drm/msm/msm_fbdev.c @@ -122,7 +122,7 @@ int msm_fbdev_driver_fbdev_probe(struct drm_fb_helper *helper, * in panic (ie. 
lock-safe, etc) we could avoid pinning the * buffer now: */ - ret = msm_gem_get_and_pin_iova(bo, priv->kms->aspace, &paddr); + ret = msm_gem_get_and_pin_iova(bo, priv->kms->vm, &paddr); if (ret) { DRM_DEV_ERROR(dev->dev, "failed to get buffer obj iova: %d\n", ret); goto fail; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index fdeb6cf7eeb5..07a30d29248c 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -402,14 +402,14 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) } static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; msm_gem_assert_locked(obj); - vma = msm_gem_vma_new(aspace); + vma = msm_gem_vma_new(vm); if (!vma) return ERR_PTR(-ENOMEM); @@ -419,7 +419,7 @@ static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, } static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; @@ -427,7 +427,7 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, msm_gem_assert_locked(obj); list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->aspace == aspace) + if (vma->vm == vm) return vma; } @@ -458,7 +458,7 @@ put_iova_spaces(struct drm_gem_object *obj, bool close) msm_gem_assert_locked(obj); list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->aspace) { + if (vma->vm) { msm_gem_vma_purge(vma); if (close) msm_gem_vma_close(vma); @@ -481,19 +481,19 @@ put_iova_vmas(struct drm_gem_object *obj) } static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, u64 range_start, u64 range_end) { struct msm_gem_vma *vma; msm_gem_assert_locked(obj); - vma = lookup_vma(obj, aspace); + vma = lookup_vma(obj, vm); if (!vma) { int ret; - vma = add_vma(obj, aspace); + vma = add_vma(obj, vm); if (IS_ERR(vma)) return vma; @@ -569,13 +569,13 @@ void msm_gem_unpin_active(struct drm_gem_object *obj) } struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { - return get_vma_locked(obj, aspace, 0, U64_MAX); + return get_vma_locked(obj, vm, 0, U64_MAX); } static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 range_end) { struct msm_gem_vma *vma; @@ -583,7 +583,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, msm_gem_assert_locked(obj); - vma = get_vma_locked(obj, aspace, range_start, range_end); + vma = get_vma_locked(obj, vm, range_start, range_end); if (IS_ERR(vma)) return PTR_ERR(vma); @@ -601,13 +601,13 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, * limits iova to specified range (in pages) */ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 range_end) { int ret; msm_gem_lock(obj); - ret = get_and_pin_iova_range_locked(obj, aspace, iova, range_start, range_end); + ret = get_and_pin_iova_range_locked(obj, vm, iova, range_start, range_end); msm_gem_unlock(obj); return ret; @@ -615,9 +615,9 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, /* get 
iova and pin it. Should have a matching put */ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova) + struct msm_gem_vm *vm, uint64_t *iova) { - return msm_gem_get_and_pin_iova_range(obj, aspace, iova, 0, U64_MAX); + return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX); } /* @@ -625,13 +625,13 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, * valid for the life of the object */ int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova) + struct msm_gem_vm *vm, uint64_t *iova) { struct msm_gem_vma *vma; int ret = 0; msm_gem_lock(obj); - vma = get_vma_locked(obj, aspace, 0, U64_MAX); + vma = get_vma_locked(obj, vm, 0, U64_MAX); if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { @@ -643,9 +643,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj, } static int clear_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { - struct msm_gem_vma *vma = lookup_vma(obj, aspace); + struct msm_gem_vma *vma = lookup_vma(obj, vm); if (!vma) return 0; @@ -665,20 +665,20 @@ static int clear_iova(struct drm_gem_object *obj, * Setting an iova of zero will clear the vma. */ int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t iova) + struct msm_gem_vm *vm, uint64_t iova) { int ret = 0; msm_gem_lock(obj); if (!iova) { - ret = clear_iova(obj, aspace); + ret = clear_iova(obj, vm); } else { struct msm_gem_vma *vma; - vma = get_vma_locked(obj, aspace, iova, iova + obj->size); + vma = get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else if (GEM_WARN_ON(vma->iova != iova)) { - clear_iova(obj, aspace); + clear_iova(obj, vm); ret = -EBUSY; } } @@ -693,12 +693,12 @@ int msm_gem_set_iova(struct drm_gem_object *obj, * to get rid of it */ void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_vma *vma; msm_gem_lock(obj); - vma = lookup_vma(obj, aspace); + vma = lookup_vma(obj, vm); if (!GEM_WARN_ON(!vma)) { msm_gem_unpin_locked(obj); } @@ -1016,23 +1016,23 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, list_for_each_entry(vma, &msm_obj->vmas, list) { const char *name, *comm; - if (vma->aspace) { - struct msm_gem_address_space *aspace = vma->aspace; + if (vma->vm) { + struct msm_gem_vm *vm = vma->vm; struct task_struct *task = - get_pid_task(aspace->pid, PIDTYPE_PID); + get_pid_task(vm->pid, PIDTYPE_PID); if (task) { comm = kstrdup(task->comm, GFP_KERNEL); put_task_struct(task); } else { comm = NULL; } - name = aspace->name; + name = vm->name; } else { name = comm = NULL; } - seq_printf(m, " [%s%s%s: aspace=%p, %08llx,%s]", + seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]", name, comm ? ":" : "", comm ? comm : "", - vma->aspace, vma->iova, + vma->vm, vma->iova, vma->mapped ? 
"mapped" : "unmapped"); kfree(comm); } @@ -1357,7 +1357,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, } void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, + uint32_t flags, struct msm_gem_vm *vm, struct drm_gem_object **bo, uint64_t *iova) { void *vaddr; @@ -1368,14 +1368,14 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, return ERR_CAST(obj); if (iova) { - ret = msm_gem_get_and_pin_iova(obj, aspace, iova); + ret = msm_gem_get_and_pin_iova(obj, vm, iova); if (ret) goto err; } vaddr = msm_gem_get_vaddr(obj); if (IS_ERR(vaddr)) { - msm_gem_unpin_iova(obj, aspace); + msm_gem_unpin_iova(obj, vm); ret = PTR_ERR(vaddr); goto err; } @@ -1392,13 +1392,13 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, } void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { if (IS_ERR_OR_NULL(bo)) return; msm_gem_put_vaddr(bo); - msm_gem_unpin_iova(bo, aspace); + msm_gem_unpin_iova(bo, vm); drm_gem_object_put(bo); } diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 85f0257e83da..d2f39a371373 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -22,7 +22,7 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */ -struct msm_gem_address_space { +struct msm_gem_vm { const char *name; /* NOTE: mm managed at the page level, size is in # of pages * and position mm_node->start is in # of pages: @@ -47,13 +47,13 @@ struct msm_gem_address_space { uint64_t va_size; }; -struct msm_gem_address_space * -msm_gem_address_space_get(struct msm_gem_address_space *aspace); +struct msm_gem_vm * +msm_gem_vm_get(struct msm_gem_vm *vm); -void msm_gem_address_space_put(struct msm_gem_address_space *aspace); +void msm_gem_vm_put(struct msm_gem_vm *vm); -struct msm_gem_address_space * -msm_gem_address_space_create(struct msm_mmu *mmu, const char *name, +struct msm_gem_vm * +msm_gem_vm_create(struct msm_mmu *mmu, const char *name, u64 va_start, u64 size); struct msm_fence_context; @@ -61,12 +61,12 @@ struct msm_fence_context; struct msm_gem_vma { struct drm_mm_node node; uint64_t iova; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct list_head list; /* node in msm_gem_object::vmas */ bool mapped; }; -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace); +struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm); int msm_gem_vma_init(struct msm_gem_vma *vma, int size, u64 range_start, u64 range_end); void msm_gem_vma_purge(struct msm_gem_vma *vma); @@ -127,18 +127,18 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma); void msm_gem_unpin_locked(struct drm_gem_object *obj); void msm_gem_unpin_active(struct drm_gem_object *obj); struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); + struct msm_gem_vm *vm); int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); + struct msm_gem_vm *vm, uint64_t *iova); int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t iova); + struct msm_gem_vm *vm, uint64_t iova); int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 
range_end); int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); + struct msm_gem_vm *vm, uint64_t *iova); void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); + struct msm_gem_vm *vm); void msm_gem_pin_obj_locked(struct drm_gem_object *obj); struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj); void msm_gem_unpin_pages_locked(struct drm_gem_object *obj); @@ -160,10 +160,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32_t flags); void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, + uint32_t flags, struct msm_gem_vm *vm, struct drm_gem_object **bo, uint64_t *iova); void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_address_space *aspace); + struct msm_gem_vm *vm); struct drm_gem_object *msm_gem_import(struct drm_device *dev, struct dma_buf *dmabuf, struct sg_table *sgt); __printf(2, 3) @@ -257,7 +257,7 @@ struct msm_gem_submit { struct kref ref; struct drm_device *dev; struct msm_gpu *gpu; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct list_head node; /* node in ring submit list */ struct drm_exec exec; uint32_t seqno; /* Sequence number of the submit on the ring */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 3aabf7f1da6d..a59816b6b6de 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev, kref_init(&submit->ref); submit->dev = dev; - submit->aspace = queue->ctx->aspace; + submit->vm = queue->ctx->vm; submit->gpu = gpu; submit->cmd = (void *)&submit->bos[nr_bos]; submit->queue = queue; @@ -311,7 +311,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) struct msm_gem_vma *vma; /* if locking succeeded, pin bo: */ - vma = msm_gem_get_vma_locked(obj, submit->aspace); + vma = msm_gem_get_vma_locked(obj, submit->vm); if (IS_ERR(vma)) { ret = PTR_ERR(vma); break; @@ -669,7 +669,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (args->pad) return -EINVAL; - if (unlikely(!ctx->aspace) && !capable(CAP_SYS_RAWIO)) { + if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) { DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n"); return -EPERM; } diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 11e842dda73c..9419692f0cc8 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -10,45 +10,44 @@ #include "msm_mmu.h" static void -msm_gem_address_space_destroy(struct kref *kref) +msm_gem_vm_destroy(struct kref *kref) { - struct msm_gem_address_space *aspace = container_of(kref, - struct msm_gem_address_space, kref); - - drm_mm_takedown(&aspace->mm); - if (aspace->mmu) - aspace->mmu->funcs->destroy(aspace->mmu); - put_pid(aspace->pid); - kfree(aspace); + struct msm_gem_vm *vm = container_of(kref, struct msm_gem_vm, kref); + + drm_mm_takedown(&vm->mm); + if (vm->mmu) + vm->mmu->funcs->destroy(vm->mmu); + put_pid(vm->pid); + kfree(vm); } -void msm_gem_address_space_put(struct msm_gem_address_space *aspace) +void msm_gem_vm_put(struct msm_gem_vm *vm) { - if (aspace) - kref_put(&aspace->kref, msm_gem_address_space_destroy); + if (vm) + kref_put(&vm->kref, msm_gem_vm_destroy); } -struct 
msm_gem_address_space * -msm_gem_address_space_get(struct msm_gem_address_space *aspace) +struct msm_gem_vm * +msm_gem_vm_get(struct msm_gem_vm *vm) { - if (!IS_ERR_OR_NULL(aspace)) - kref_get(&aspace->kref); + if (!IS_ERR_OR_NULL(vm)) + kref_get(&vm->kref); - return aspace; + return vm; } /* Actually unmap memory for the vma */ void msm_gem_vma_purge(struct msm_gem_vma *vma) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; unsigned size = vma->node.size; /* Don't do anything if the memory isn't mapped */ if (!vma->mapped) return; - aspace->mmu->funcs->unmap(aspace->mmu, vma->iova, size); + vm->mmu->funcs->unmap(vm->mmu, vma->iova, size); vma->mapped = false; } @@ -58,7 +57,7 @@ int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; int ret; if (GEM_WARN_ON(!vma->iova)) @@ -69,7 +68,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, vma->mapped = true; - if (!aspace) + if (!vm) return 0; /* @@ -81,7 +80,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret = aspace->mmu->funcs->map(aspace->mmu, vma->iova, sgt, size, prot); + ret = vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot); if (ret) { vma->mapped = false; @@ -93,21 +92,21 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, /* Close an iova. Warn if it is still in use */ void msm_gem_vma_close(struct msm_gem_vma *vma) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; GEM_WARN_ON(vma->mapped); - spin_lock(&aspace->lock); + spin_lock(&vm->lock); if (vma->iova) drm_mm_remove_node(&vma->node); - spin_unlock(&aspace->lock); + spin_unlock(&vm->lock); vma->iova = 0; - msm_gem_address_space_put(aspace); + msm_gem_vm_put(vm); } -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) +struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm) { struct msm_gem_vma *vma; @@ -115,7 +114,7 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) if (!vma) return NULL; - vma->aspace = aspace; + vma->vm = vm; return vma; } @@ -124,20 +123,20 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) int msm_gem_vma_init(struct msm_gem_vma *vma, int size, u64 range_start, u64 range_end) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; int ret; - if (GEM_WARN_ON(!aspace)) + if (GEM_WARN_ON(!vm)) return -EINVAL; if (GEM_WARN_ON(vma->iova)) return -EBUSY; - spin_lock(&aspace->lock); - ret = drm_mm_insert_node_in_range(&aspace->mm, &vma->node, + spin_lock(&vm->lock); + ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, size, PAGE_SIZE, 0, range_start, range_end, 0); - spin_unlock(&aspace->lock); + spin_unlock(&vm->lock); if (ret) return ret; @@ -145,33 +144,33 @@ int msm_gem_vma_init(struct msm_gem_vma *vma, int size, vma->iova = vma->node.start; vma->mapped = false; - kref_get(&aspace->kref); + kref_get(&vm->kref); return 0; } -struct msm_gem_address_space * -msm_gem_address_space_create(struct msm_mmu *mmu, const char *name, +struct msm_gem_vm * +msm_gem_vm_create(struct msm_mmu *mmu, const char *name, u64 va_start, u64 size) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; if (IS_ERR(mmu)) return ERR_CAST(mmu); - aspace = kzalloc(sizeof(*aspace), GFP_KERNEL); - if (!aspace) + vm = kzalloc(sizeof(*vm), 
GFP_KERNEL); + if (!vm) return ERR_PTR(-ENOMEM); - spin_lock_init(&aspace->lock); - aspace->name = name; - aspace->mmu = mmu; - aspace->va_start = va_start; - aspace->va_size = size; + spin_lock_init(&vm->lock); + vm->name = name; + vm->mmu = mmu; + vm->va_start = va_start; + vm->va_size = size; - drm_mm_init(&aspace->mm, va_start, size); + drm_mm_init(&vm->mm, va_start, size); - kref_init(&aspace->kref); + kref_init(&vm->kref); - return aspace; + return vm; } diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index d786fcfad62f..0d466a2e9b32 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -283,7 +283,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, if (state->fault_info.ttbr0) { struct msm_gpu_fault_info *info = &state->fault_info; - struct msm_mmu *mmu = submit->aspace->mmu; + struct msm_mmu *mmu = submit->vm->mmu; msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0, &info->asid); @@ -386,8 +386,8 @@ static void recover_worker(struct kthread_work *work) /* Increment the fault counts */ submit->queue->faults++; - if (submit->aspace) - submit->aspace->faults++; + if (submit->vm) + submit->vm->faults++; get_comm_cmdline(submit, &comm, &cmd); @@ -492,7 +492,7 @@ static void fault_worker(struct kthread_work *work) resume_smmu: memset(&gpu->fault_info, 0, sizeof(gpu->fault_info)); - gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu); + gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); mutex_unlock(&gpu->lock); } @@ -829,10 +829,10 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu) } /* Return a new address space for a msm_drm_private instance */ -struct msm_gem_address_space * -msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task) +struct msm_gem_vm * +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) { - struct msm_gem_address_space *aspace = NULL; + struct msm_gem_vm *vm = NULL; if (!gpu) return NULL; @@ -840,16 +840,16 @@ msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *ta * If the target doesn't support private address spaces then return * the global one */ - if (gpu->funcs->create_private_address_space) { - aspace = gpu->funcs->create_private_address_space(gpu); - if (!IS_ERR(aspace)) - aspace->pid = get_pid(task_pid(task)); + if (gpu->funcs->create_private_vm) { + vm = gpu->funcs->create_private_vm(gpu); + if (!IS_ERR(vm)) + vm->pid = get_pid(task_pid(task)); } - if (IS_ERR_OR_NULL(aspace)) - aspace = msm_gem_address_space_get(gpu->aspace); + if (IS_ERR_OR_NULL(vm)) + vm = msm_gem_vm_get(gpu->vm); - return aspace; + return vm; } int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, @@ -945,18 +945,18 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, msm_devfreq_init(gpu); - gpu->aspace = gpu->funcs->create_address_space(gpu, pdev); + gpu->vm = gpu->funcs->create_vm(gpu, pdev); - if (gpu->aspace == NULL) + if (gpu->vm == NULL) DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", name); - else if (IS_ERR(gpu->aspace)) { - ret = PTR_ERR(gpu->aspace); + else if (IS_ERR(gpu->vm)) { + ret = PTR_ERR(gpu->vm); goto fail; } memptrs = msm_gem_kernel_new(drm, sizeof(struct msm_rbmemptrs) * nr_rings, - check_apriv(gpu, MSM_BO_WC), gpu->aspace, &gpu->memptrs_bo, + check_apriv(gpu, MSM_BO_WC), gpu->vm, &gpu->memptrs_bo, &memptrs_iova); if (IS_ERR(memptrs)) { @@ -1000,7 +1000,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, 
gpu->rb[i] = NULL; } - msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace); + msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); platform_set_drvdata(pdev, NULL); return ret; @@ -1017,11 +1017,11 @@ void msm_gpu_cleanup(struct msm_gpu *gpu) gpu->rb[i] = NULL; } - msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace); + msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); - if (!IS_ERR_OR_NULL(gpu->aspace)) { - gpu->aspace->mmu->funcs->detach(gpu->aspace->mmu); - msm_gem_address_space_put(gpu->aspace); + if (!IS_ERR_OR_NULL(gpu->vm)) { + gpu->vm->mmu->funcs->detach(gpu->vm->mmu); + msm_gem_vm_put(gpu->vm); } if (gpu->worker) { diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index c699ce0c557b..1f26ba00f773 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -78,10 +78,8 @@ struct msm_gpu_funcs { /* note: gpu_set_freq() can assume that we have been pm_resumed */ void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp, bool suspended); - struct msm_gem_address_space *(*create_address_space) - (struct msm_gpu *gpu, struct platform_device *pdev); - struct msm_gem_address_space *(*create_private_address_space) - (struct msm_gpu *gpu); + struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev); + struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu); uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); /** @@ -236,7 +234,7 @@ struct msm_gpu { void __iomem *mmio; int irq; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; /* Power Control: */ struct regulator *gpu_reg, *gpu_cx; @@ -364,8 +362,8 @@ struct msm_context { */ int queueid; - /** @aspace: the per-process GPU address-space */ - struct msm_gem_address_space *aspace; + /** @vm: the per-process GPU address-space */ + struct msm_gem_vm *vm; /** @kref: the reference count */ struct kref ref; @@ -675,8 +673,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config); -struct msm_gem_address_space * -msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task); +struct msm_gem_vm * +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task); void msm_gpu_cleanup(struct msm_gpu *gpu); diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 35d5397e73b4..88504c4b842f 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -176,9 +176,9 @@ static int msm_kms_fault_handler(void *arg, unsigned long iova, int flags, void return -ENOSYS; } -struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev) +struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct msm_mmu *mmu; struct device *mdp_dev = dev->dev; struct device *mdss_dev = mdp_dev->parent; @@ -204,17 +204,17 @@ struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev) return NULL; } - aspace = msm_gem_address_space_create(mmu, "mdp_kms", + vm = msm_gem_vm_create(mmu, "mdp_kms", 0x1000, 0x100000000 - 0x1000); - if (IS_ERR(aspace)) { - dev_err(mdp_dev, "aspace create, error %pe\n", aspace); + if (IS_ERR(vm)) { + dev_err(mdp_dev, "vm create, error %pe\n", vm); mmu->funcs->destroy(mmu); - return aspace; + return vm; } - msm_mmu_set_fault_handler(aspace->mmu, kms, msm_kms_fault_handler); + msm_mmu_set_fault_handler(vm->mmu, kms, msm_kms_fault_handler); - return aspace; + return vm; 
} void msm_drm_kms_uninit(struct device *dev) diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h index 43b58d052ee6..f45996a03e15 100644 --- a/drivers/gpu/drm/msm/msm_kms.h +++ b/drivers/gpu/drm/msm/msm_kms.h @@ -139,7 +139,7 @@ struct msm_kms { atomic_t fault_snapshot_capture; /* mapper-id used to request GEM buffer mapped for scanout: */ - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; /* disp snapshot support */ struct kthread_worker *dump_worker; diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index c5651c39ac2a..bbf8503f6bb5 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -84,7 +84,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, ring->start = msm_gem_kernel_new(gpu->dev, MSM_GPU_RINGBUFFER_SZ, check_apriv(gpu, MSM_BO_WC | MSM_BO_GPU_READONLY), - gpu->aspace, &ring->bo, &ring->iova); + gpu->vm, &ring->bo, &ring->iova); if (IS_ERR(ring->start)) { ret = PTR_ERR(ring->start); @@ -131,7 +131,7 @@ void msm_ringbuffer_destroy(struct msm_ringbuffer *ring) msm_fence_context_free(ring->fctx); - msm_gem_kernel_put(ring->bo, ring->gpu->aspace); + msm_gem_kernel_put(ring->bo, ring->gpu->vm); kfree(ring); } diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c index 1acc0fe36353..6298233c3568 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref) kfree(ctx->entities[i]); } - msm_gem_address_space_put(ctx->aspace); + msm_gem_vm_put(ctx->vm); kfree(ctx->comm); kfree(ctx->cmdline); kfree(ctx);

From patchwork Thu Jun 5 18:28:50 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894240
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 05/40] drm/msm: Remove vram carveout support
Date: Thu, 5 Jun 2025 11:28:50 -0700
Message-ID: <20250605183111.163594-6-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

It is standing in the way of drm_gpuvm / VM_BIND support, not to mention being frequently broken and rarely tested. I think it is only needed for a roughly ten-year-old, not-quite-upstream SoC (msm8974). Maybe we can add support back in later, but I'm doubtful.
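In effect, the no-IOMMU case becomes a hard probe failure instead of a carveout fallback. A condensed sketch of the resulting flow, paraphrased from the adreno_gpu.c and msm_gpu.c hunks below (the parameter list of adreno_iommu_create_vm() is abbreviated here and is an assumption, not quoted from this patch):

struct msm_gem_vm *
adreno_iommu_create_vm(struct msm_gpu *gpu, struct platform_device *pdev, unsigned long quirks)
{
	struct msm_mmu *mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks);

	/* No IOMMU: fail hard now, rather than limping along on a
	 * VRAM carveout (the path this patch deletes):
	 */
	if (!mmu)
		return ERR_PTR(-ENODEV);
	else if (IS_ERR_OR_NULL(mmu))
		return ERR_CAST(mmu);

	/* ... construct and return the msm_gem_vm as before ... */
}

Correspondingly, msm_gpu_init() just propagates the error (if (IS_ERR(gpu->vm)) { ret = PTR_ERR(gpu->vm); goto fail; }) and the "no IOMMU, fallback to VRAM carveout!" message goes away.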
Signed-off-by: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 8 -- drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 15 --- drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 15 --- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/adreno_device.c | 4 - drivers/gpu/drm/msm/adreno/adreno_gpu.c | 4 +- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 1 - drivers/gpu/drm/msm/msm_drv.c | 117 +----------------- drivers/gpu/drm/msm/msm_drv.h | 11 -- drivers/gpu/drm/msm/msm_gem.c | 131 ++------------------- drivers/gpu/drm/msm/msm_gem.h | 5 - drivers/gpu/drm/msm/msm_gem_submit.c | 5 - drivers/gpu/drm/msm/msm_gpu.c | 6 +- 14 files changed, 19 insertions(+), 309 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index 5eb063ed0b46..095bae92e3e8 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -551,14 +551,6 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev) else adreno_gpu->registers = a220_registers; - if (!gpu->vm) { - dev_err(dev->dev, "No memory protection without MMU\n"); - if (!allow_vram_carveout) { - ret = -ENXIO; - goto fail; - } - } - return gpu; fail: diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c index 434e6ededf83..a956cd79195e 100644 --- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c @@ -581,21 +581,6 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev) goto fail; } - if (!gpu->vm) { - /* TODO we think it is possible to configure the GPU to - * restrict access to VRAM carveout. But the required - * registers are unknown. For now just bail out and - * limp along with just modesetting. If it turns out - * to not be possible to restrict access, then we must - * implement a cmdstream validator. - */ - DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n"); - if (!allow_vram_carveout) { - ret = -ENXIO; - goto fail; - } - } - icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem"); if (IS_ERR(icc_path)) { ret = PTR_ERR(icc_path); diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c index 2c75debcfd84..83f6329accba 100644 --- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c @@ -695,21 +695,6 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull; - if (!gpu->vm) { - /* TODO we think it is possible to configure the GPU to - * restrict access to VRAM carveout. But the required - * registers are unknown. For now just bail out and - * limp along with just modesetting. If it turns out - * to not be possible to restrict access, then we must - * implement a cmdstream validator. 
- */ - DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n"); - if (!allow_vram_carveout) { - ret = -ENXIO; - goto fail; - } - } - icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem"); if (IS_ERR(icc_path)) { ret = PTR_ERR(icc_path); diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index dc31bc0afca4..04138a06724b 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -1786,8 +1786,7 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev) return ERR_PTR(ret); } - if (gpu->vm) - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler); + msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler); /* Set up the preemption specific bits and pieces for each ringbuffer */ a5xx_preempt_init(gpu); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index d05c00624f74..f4d9cdbc5602 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -2547,8 +2547,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0x1fffffffff000ull; - if (gpu->vm) - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler); + msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler); a6xx_calc_ubwc_config(adreno_gpu); /* Set up the preemption specific bits and pieces for each ringbuffer */ diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c index f4552b8c6767..6b0390c38bff 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_device.c +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c @@ -16,10 +16,6 @@ bool snapshot_debugbus = false; MODULE_PARM_DESC(snapshot_debugbus, "Include debugbus sections in GPU devcoredump (if not fused off)"); module_param_named(snapshot_debugbus, snapshot_debugbus, bool, 0600); -bool allow_vram_carveout = false; -MODULE_PARM_DESC(allow_vram_carveout, "Allow using VRAM Carveout, in place of IOMMU"); -module_param_named(allow_vram_carveout, allow_vram_carveout, bool, 0600); - int enable_preemption = -1; MODULE_PARM_DESC(enable_preemption, "Enable preemption (A7xx only) (1=on , 0=disable, -1=auto (default))"); module_param(enable_preemption, int, 0600); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index b01d9efb8663..35a99c81f7e0 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -209,7 +209,9 @@ adreno_iommu_create_vm(struct msm_gpu *gpu, u64 start, size; mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks); - if (IS_ERR_OR_NULL(mmu)) + if (!mmu) + return ERR_PTR(-ENODEV); + else if (IS_ERR_OR_NULL(mmu)) return ERR_CAST(mmu); geometry = msm_iommu_get_geometry(mmu); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index 258c5c6dde2e..bbd7e664286e 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -18,7 +18,6 @@ #include "adreno_pm4.xml.h" extern bool snapshot_debugbus; -extern bool allow_vram_carveout; enum { ADRENO_FW_PM4 = 0, diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 903abf3532e0..978f1d355b42 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -46,12 +46,6 @@ #define MSM_VERSION_MINOR 12 #define MSM_VERSION_PATCHLEVEL 0 -static void msm_deinit_vram(struct drm_device *ddev); - -static char *vram = "16m"; -MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without 
IOMMU/GPUMMU)"); -module_param(vram, charp, 0); - bool dumpstate; MODULE_PARM_DESC(dumpstate, "Dump KMS state on errors"); module_param(dumpstate, bool, 0600); @@ -97,8 +91,6 @@ static int msm_drm_uninit(struct device *dev) if (priv->kms) msm_drm_kms_uninit(dev); - msm_deinit_vram(ddev); - component_unbind_all(dev, ddev); ddev->dev_private = NULL; @@ -109,107 +101,6 @@ static int msm_drm_uninit(struct device *dev) return 0; } -bool msm_use_mmu(struct drm_device *dev) -{ - struct msm_drm_private *priv = dev->dev_private; - - /* - * a2xx comes with its own MMU - * On other platforms IOMMU can be declared specified either for the - * MDP/DPU device or for its parent, MDSS device. - */ - return priv->is_a2xx || - device_iommu_mapped(dev->dev) || - device_iommu_mapped(dev->dev->parent); -} - -static int msm_init_vram(struct drm_device *dev) -{ - struct msm_drm_private *priv = dev->dev_private; - struct device_node *node; - unsigned long size = 0; - int ret = 0; - - /* In the device-tree world, we could have a 'memory-region' - * phandle, which gives us a link to our "vram". Allocating - * is all nicely abstracted behind the dma api, but we need - * to know the entire size to allocate it all in one go. There - * are two cases: - * 1) device with no IOMMU, in which case we need exclusive - * access to a VRAM carveout big enough for all gpu - * buffers - * 2) device with IOMMU, but where the bootloader puts up - * a splash screen. In this case, the VRAM carveout - * need only be large enough for fbdev fb. But we need - * exclusive access to the buffer to avoid the kernel - * using those pages for other purposes (which appears - * as corruption on screen before we have a chance to - * load and do initial modeset) - */ - - node = of_parse_phandle(dev->dev->of_node, "memory-region", 0); - if (node) { - struct resource r; - ret = of_address_to_resource(node, 0, &r); - of_node_put(node); - if (ret) - return ret; - size = r.end - r.start + 1; - DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start); - - /* if we have no IOMMU, then we need to use carveout allocator. 
- * Grab the entire DMA chunk carved out in early startup in - * mach-msm: - */ - } else if (!msm_use_mmu(dev)) { - DRM_INFO("using %s VRAM carveout\n", vram); - size = memparse(vram, NULL); - } - - if (size) { - unsigned long attrs = 0; - void *p; - - priv->vram.size = size; - - drm_mm_init(&priv->vram.mm, 0, (size >> PAGE_SHIFT) - 1); - spin_lock_init(&priv->vram.lock); - - attrs |= DMA_ATTR_NO_KERNEL_MAPPING; - attrs |= DMA_ATTR_WRITE_COMBINE; - - /* note that for no-kernel-mapping, the vaddr returned - * is bogus, but non-null if allocation succeeded: - */ - p = dma_alloc_attrs(dev->dev, size, - &priv->vram.paddr, GFP_KERNEL, attrs); - if (!p) { - DRM_DEV_ERROR(dev->dev, "failed to allocate VRAM\n"); - priv->vram.paddr = 0; - return -ENOMEM; - } - - DRM_DEV_INFO(dev->dev, "VRAM: %08x->%08x\n", - (uint32_t)priv->vram.paddr, - (uint32_t)(priv->vram.paddr + size)); - } - - return ret; -} - -static void msm_deinit_vram(struct drm_device *ddev) -{ - struct msm_drm_private *priv = ddev->dev_private; - unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING; - - if (!priv->vram.paddr) - return; - - drm_mm_takedown(&priv->vram.mm); - dma_free_attrs(ddev->dev, priv->vram.size, NULL, priv->vram.paddr, - attrs); -} - static int msm_drm_init(struct device *dev, const struct drm_driver *drv) { struct msm_drm_private *priv = dev_get_drvdata(dev); @@ -256,16 +147,12 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) goto err_destroy_wq; } - ret = msm_init_vram(ddev); - if (ret) - goto err_destroy_wq; - dma_set_max_seg_size(dev, UINT_MAX); /* Bind all our sub-components: */ ret = component_bind_all(dev, ddev); if (ret) - goto err_deinit_vram; + goto err_destroy_wq; ret = msm_gem_shrinker_init(ddev); if (ret) @@ -302,8 +189,6 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) return ret; -err_deinit_vram: - msm_deinit_vram(ddev); err_destroy_wq: destroy_workqueue(priv->wq); err_put_dev: diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 0e675c9a7f83..ad509403f072 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -183,17 +183,6 @@ struct msm_drm_private { struct msm_drm_thread event_thread[MAX_CRTCS]; - /* VRAM carveout, used when no IOMMU: */ - struct { - unsigned long size; - dma_addr_t paddr; - /* NOTE: mm managed at the page level, size is in # of pages - * and position mm_node->start is in # of pages: - */ - struct drm_mm mm; - spinlock_t lock; /* Protects drm_mm node allocation/removal */ - } vram; - struct notifier_block vmap_notifier; struct shrinker *shrinker; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 07a30d29248c..621fb4e17a2e 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -17,24 +17,8 @@ #include #include "msm_drv.h" -#include "msm_fence.h" #include "msm_gem.h" #include "msm_gpu.h" -#include "msm_mmu.h" - -static dma_addr_t physaddr(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - return (((dma_addr_t)msm_obj->vram_node->start) << PAGE_SHIFT) + - priv->vram.paddr; -} - -static bool use_pages(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - return !msm_obj->vram_node; -} static int pgprot = 0; module_param(pgprot, int, 0600); @@ -139,36 +123,6 @@ static void update_lru(struct drm_gem_object *obj) mutex_unlock(&priv->lru.lock); } -/* allocate pages from VRAM carveout, used when no IOMMU: */ 
-static struct page **get_pages_vram(struct drm_gem_object *obj, int npages) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - dma_addr_t paddr; - struct page **p; - int ret, i; - - p = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); - if (!p) - return ERR_PTR(-ENOMEM); - - spin_lock(&priv->vram.lock); - ret = drm_mm_insert_node(&priv->vram.mm, msm_obj->vram_node, npages); - spin_unlock(&priv->vram.lock); - if (ret) { - kvfree(p); - return ERR_PTR(ret); - } - - paddr = physaddr(obj); - for (i = 0; i < npages; i++) { - p[i] = pfn_to_page(__phys_to_pfn(paddr)); - paddr += PAGE_SIZE; - } - - return p; -} - static struct page **get_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -180,10 +134,7 @@ static struct page **get_pages(struct drm_gem_object *obj) struct page **p; int npages = obj->size >> PAGE_SHIFT; - if (use_pages(obj)) - p = drm_gem_get_pages(obj); - else - p = get_pages_vram(obj, npages); + p = drm_gem_get_pages(obj); if (IS_ERR(p)) { DRM_DEV_ERROR(dev->dev, "could not get pages: %ld\n", @@ -216,18 +167,6 @@ static struct page **get_pages(struct drm_gem_object *obj) return msm_obj->pages; } -static void put_pages_vram(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - - spin_lock(&priv->vram.lock); - drm_mm_remove_node(msm_obj->vram_node); - spin_unlock(&priv->vram.lock); - - kvfree(msm_obj->pages); -} - static void put_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -248,10 +187,7 @@ static void put_pages(struct drm_gem_object *obj) update_device_mem(obj->dev->dev_private, -obj->size); - if (use_pages(obj)) - drm_gem_put_pages(obj, msm_obj->pages, true, false); - else - put_pages_vram(obj); + drm_gem_put_pages(obj, msm_obj->pages, true, false); msm_obj->pages = NULL; update_lru(obj); @@ -1215,19 +1151,10 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32 struct msm_drm_private *priv = dev->dev_private; struct msm_gem_object *msm_obj; struct drm_gem_object *obj = NULL; - bool use_vram = false; int ret; size = PAGE_ALIGN(size); - if (!msm_use_mmu(dev)) - use_vram = true; - else if ((flags & (MSM_BO_STOLEN | MSM_BO_SCANOUT)) && priv->vram.size) - use_vram = true; - - if (GEM_WARN_ON(use_vram && !priv->vram.size)) - return ERR_PTR(-EINVAL); - /* Disallow zero sized objects as they make the underlying * infrastructure grumpy */ @@ -1240,44 +1167,16 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32 msm_obj = to_msm_bo(obj); - if (use_vram) { - struct msm_gem_vma *vma; - struct page **pages; - - drm_gem_private_object_init(dev, obj, size); - - msm_gem_lock(obj); - - vma = add_vma(obj, NULL); - msm_gem_unlock(obj); - if (IS_ERR(vma)) { - ret = PTR_ERR(vma); - goto fail; - } - - to_msm_bo(obj)->vram_node = &vma->node; - - msm_gem_lock(obj); - pages = get_pages(obj); - msm_gem_unlock(obj); - if (IS_ERR(pages)) { - ret = PTR_ERR(pages); - goto fail; - } - - vma->iova = physaddr(obj); - } else { - ret = drm_gem_object_init(dev, obj, size); - if (ret) - goto fail; - /* - * Our buffers are kept pinned, so allocating them from the - * MOVABLE zone is a really bad idea, and conflicts with CMA. - * See comments above new_inode() why this is required _and_ - * expected if you're going to pin these pages. 
- */ - mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); - } + ret = drm_gem_object_init(dev, obj, size); + if (ret) + goto fail; + /* + * Our buffers are kept pinned, so allocating them from the + * MOVABLE zone is a really bad idea, and conflicts with CMA. + * See comments above new_inode() why this is required _and_ + * expected if you're going to pin these pages. + */ + mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); drm_gem_lru_move_tail(&priv->lru.unbacked, obj); @@ -1305,12 +1204,6 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, uint32_t size; int ret, npages; - /* if we don't have IOMMU, don't bother pretending we can import: */ - if (!msm_use_mmu(dev)) { - DRM_DEV_ERROR(dev->dev, "cannot import without IOMMU\n"); - return ERR_PTR(-EINVAL); - } - size = PAGE_ALIGN(dmabuf->size); ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index d2f39a371373..c16b11182831 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -102,11 +102,6 @@ struct msm_gem_object { struct list_head vmas; /* list of msm_gem_vma */ - /* For physically contiguous buffers. Used when we don't have - * an IOMMU. Also used for stolen/splashscreen buffer. - */ - struct drm_mm_node *vram_node; - char name[32]; /* Identifier to print for the debugfs files */ /* userspace metadata backchannel */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index a59816b6b6de..c184b1a1f522 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -669,11 +669,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (args->pad) return -EINVAL; - if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) { - DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n"); - return -EPERM; - } - /* for now, we just have 3d pipe.. 
eventually this would need to * be more clever to dispatch to appropriate gpu module: */ diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 0d466a2e9b32..b30800f80120 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -944,12 +944,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, msm_devfreq_init(gpu); - gpu->vm = gpu->funcs->create_vm(gpu, pdev); - - if (gpu->vm == NULL) - DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", name); - else if (IS_ERR(gpu->vm)) { + if (IS_ERR(gpu->vm)) { ret = PTR_ERR(gpu->vm); goto fail; }

From patchwork Thu Jun 5 18:28:51 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894510
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 06/40] drm/msm: Collapse vma allocation and initialization
Date: Thu, 5 Jun 2025 11:28:51 -0700
Message-ID: <20250605183111.163594-7-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

Now that we've dropped vram carveout support, we can collapse vma allocation and initialization. This better matches how things work with drm_gpuvm.

Signed-off-by: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 30 +++----------------------- drivers/gpu/drm/msm/msm_gem.h | 4 ++-- drivers/gpu/drm/msm/msm_gem_vma.c | 36 +++++++++++++------------------ 3 files changed, 20 insertions(+), 50 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 621fb4e17a2e..29247911f048 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -337,23 +337,6 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) return offset; } -static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, - struct msm_gem_vm *vm) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma; - - msm_gem_assert_locked(obj); - - vma = msm_gem_vma_new(vm); - if (!vma) - return ERR_PTR(-ENOMEM); - - list_add_tail(&vma->list, &msm_obj->vmas); - - return vma; -} - static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, struct msm_gem_vm *vm) { @@ -420,6 +403,7 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, struct msm_gem_vm *vm, u64 range_start, u64 range_end) { + struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; msm_gem_assert_locked(obj); @@ -427,18 +411,10 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, vma = lookup_vma(obj, vm); if (!vma) { - int ret; - - vma = add_vma(obj, vm); + vma = msm_gem_vma_new(vm, obj, range_start, range_end); if (IS_ERR(vma)) return vma; - - ret = msm_gem_vma_init(vma, obj->size, - range_start, range_end); - if (ret) { - del_vma(vma); - return ERR_PTR(ret); - } + list_add_tail(&vma->list, &msm_obj->vmas); } else { GEM_WARN_ON(vma->iova < range_start); GEM_WARN_ON((vma->iova + obj->size) > range_end); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index c16b11182831..9bd78642671c 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -66,8 +66,8 @@ struct msm_gem_vma { bool mapped; }; -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm); -int msm_gem_vma_init(struct msm_gem_vma *vma, int size, +struct msm_gem_vma * +msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, u64 range_start, u64 range_end); void msm_gem_vma_purge(struct msm_gem_vma *vma); int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size); diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 9419692f0cc8..6d18364f321c 100644 --- 
a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -106,47 +106,41 @@ void msm_gem_vma_close(struct msm_gem_vma *vma) msm_gem_vm_put(vm); } -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm) +/* Create a new vma and allocate an iova for it */ +struct msm_gem_vma * +msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, + u64 range_start, u64 range_end) { struct msm_gem_vma *vma; + int ret; vma = kzalloc(sizeof(*vma), GFP_KERNEL); if (!vma) - return NULL; + return ERR_PTR(-ENOMEM); vma->vm = vm; - return vma; -} - -/* Initialize a new vma and allocate an iova for it */ -int msm_gem_vma_init(struct msm_gem_vma *vma, int size, - u64 range_start, u64 range_end) -{ - struct msm_gem_vm *vm = vma->vm; - int ret; - - if (GEM_WARN_ON(!vm)) - return -EINVAL; - - if (GEM_WARN_ON(vma->iova)) - return -EBUSY; - spin_lock(&vm->lock); ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, - size, PAGE_SIZE, 0, + obj->size, PAGE_SIZE, 0, range_start, range_end, 0); spin_unlock(&vm->lock); if (ret) - return ret; + goto err_free_vma; vma->iova = vma->node.start; vma->mapped = false; + INIT_LIST_HEAD(&vma->list); + kref_get(&vm->kref); - return 0; + return vma; + +err_free_vma: + kfree(vma); + return ERR_PTR(ret); } struct msm_gem_vm *

From patchwork Thu Jun 5 18:28:52 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894239
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 07/40] drm/msm: Collapse vma close and delete
Date: Thu, 5 Jun 2025 11:28:52 -0700
Message-ID: <20250605183111.163594-8-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

This fits better with drm_gpuvm/drm_gpuva.

Signed-off-by: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 16 +++------------- drivers/gpu/drm/msm/msm_gem_vma.c | 2 ++ 2 files changed, 5 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 29247911f048..4c10eca404e0 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -353,15 +353,6 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, return NULL; } -static void del_vma(struct msm_gem_vma *vma) -{ - if (!vma) - return; - - list_del(&vma->list); - kfree(vma); -} - /* * If close is true, this also closes the VMA (releasing the allocated * iova range) in addition to removing the iommu mapping. 
@@ -372,11 +363,11 @@ static void
 put_iova_spaces(struct drm_gem_object *obj, bool close)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_gem_vma *vma;
+	struct msm_gem_vma *vma, *tmp;
 
 	msm_gem_assert_locked(obj);
 
-	list_for_each_entry(vma, &msm_obj->vmas, list) {
+	list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
 		if (vma->vm) {
 			msm_gem_vma_purge(vma);
 			if (close)
@@ -395,7 +386,7 @@ put_iova_vmas(struct drm_gem_object *obj)
 	msm_gem_assert_locked(obj);
 
 	list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
-		del_vma(vma);
+		msm_gem_vma_close(vma);
 	}
 }
 
@@ -564,7 +555,6 @@ static int clear_iova(struct drm_gem_object *obj,
 
 	msm_gem_vma_purge(vma);
 	msm_gem_vma_close(vma);
-	del_vma(vma);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 6d18364f321c..ca29e81d79d2 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -102,8 +102,10 @@ void msm_gem_vma_close(struct msm_gem_vma *vma)
 	spin_unlock(&vm->lock);
 
 	vma->iova = 0;
+	list_del(&vma->list);
 
 	msm_gem_vm_put(vm);
+	kfree(vma);
 }
 
 /* Create a new vma and allocate an iova for it */
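The net effect is that msm_gem_vma_close() now both releases the iova and
frees the vma in a single call.  Pieced together from the hunks above, the
resulting function looks roughly like this (a sketch only; the locking
around the iova release is abbreviated and surrounding details may differ):

	void msm_gem_vma_close(struct msm_gem_vma *vma)
	{
		struct msm_gem_vm *vm = vma->vm;

		/* ... release the vma's iova allocation under vm->lock ... */
		spin_unlock(&vm->lock);

		vma->iova = 0;
		list_del(&vma->list);	/* previously del_vma()'s job */

		msm_gem_vm_put(vm);
		kfree(vma);		/* previously del_vma()'s job */
	}

With this, callers such as put_iova_vmas() and clear_iova() no longer need a
separate del_vma() step.
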
From patchwork Thu Jun 5 18:28:53 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894238
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 08/40] drm/msm: Don't close VMAs on purge
Date: Thu, 5 Jun 2025 11:28:53 -0700
Message-ID: <20250605183111.163594-9-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

Previously we'd also tear down the VMA, making the address space
available again.  But with the drm_gpuvm conversion, that would require
holding the locks of all VMs the GEM object is mapped in, which is
problematic for the shrinker.  Instead just let the VMA hang around
until the GEM object is freed.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 4c10eca404e0..50b866dcf439 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -763,7 +763,7 @@ void msm_gem_purge(struct drm_gem_object *obj)
 	GEM_WARN_ON(!is_purgeable(msm_obj));
 
 	/* Get rid of any iommu mapping(s): */
-	put_iova_spaces(obj, true);
+	put_iova_spaces(obj, false);
 
 	msm_gem_vunmap(obj);
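To make the locking constraint concrete: after this patch the two teardown
paths differ only in the close flag (schematic, not the driver's literal
code):

	/* Shrinker path: only the object lock is held, so just tear down
	 * the iommu mapping(s); VMAs and their iova ranges stay allocated.
	 */
	msm_gem_purge(obj)       -> put_iova_spaces(obj, false);

	/* Final free path: no other users remain, so the VMAs can also be
	 * closed, releasing their iova ranges.
	 */
	msm_gem_free_object(obj) -> put_iova_spaces(obj, true);
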
From patchwork Thu Jun 5 18:28:54 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894508
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter, Jessica Zhang, Jun Nie,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 09/40] drm/msm: Stop passing vm to msm_framebuffer
Date: Thu, 5 Jun 2025 11:28:54 -0700
Message-ID: <20250605183111.163594-10-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

The fb only deals with kms->vm, so make that explicit.  This will let
us start refcounting the number of times the fb is pinned, so that we
only unpin the vma after the last user of the fb is done.  Having a
single reference count really only works if there is only a single vm.
Signed-off-by: Rob Clark
---
 .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 11 +++-------
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 18 +++++++----------
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h |  3 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c   | 20 ++++++-------------
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h   |  2 --
 drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c  | 18 ++++++-----------
 drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c  | 18 ++++++-----------
 drivers/gpu/drm/msm/msm_drv.h               |  9 +++------
 drivers/gpu/drm/msm/msm_fb.c                | 15 +++++++-------
 9 files changed, 39 insertions(+), 75 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
index 32e208ee946d..9a54da1c9e3c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
@@ -566,7 +566,6 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
 		struct drm_writeback_job *job)
 {
 	const struct msm_format *format;
-	struct msm_gem_vm *vm;
 	struct dpu_hw_wb_cfg *wb_cfg;
 	int ret;
 	struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc);
@@ -576,13 +575,12 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
 
 	wb_enc->wb_job = job;
 	wb_enc->wb_conn = job->connector;
-	vm = phys_enc->dpu_kms->base.vm;
 
 	wb_cfg = &wb_enc->wb_cfg;
 
 	memset(wb_cfg, 0, sizeof(struct dpu_hw_wb_cfg));
 
-	ret = msm_framebuffer_prepare(job->fb, vm, false);
+	ret = msm_framebuffer_prepare(job->fb, false);
 	if (ret) {
 		DPU_ERROR("prep fb failed, %d\n", ret);
 		return;
@@ -596,7 +594,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
 		return;
 	}
 
-	dpu_format_populate_addrs(vm, job->fb, &wb_cfg->dest);
+	dpu_format_populate_addrs(job->fb, &wb_cfg->dest);
 
 	wb_cfg->dest.width = job->fb->width;
 	wb_cfg->dest.height = job->fb->height;
@@ -619,14 +617,11 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc
 		struct drm_writeback_job *job)
 {
 	struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc);
-	struct msm_gem_vm *vm;
 
 	if (!job->fb)
 		return;
 
-	vm = phys_enc->dpu_kms->base.vm;
-
-	msm_framebuffer_cleanup(job->fb, vm, false);
+	msm_framebuffer_cleanup(job->fb, false);
 	wb_enc->wb_job = NULL;
 	wb_enc->wb_conn = NULL;
 }
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
index d115b79af771..b0d585c5315c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
@@ -274,15 +274,14 @@ int dpu_format_populate_plane_sizes(
 	return _dpu_format_populate_plane_sizes_linear(fmt, fb, layout);
 }
 
-static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm,
-					    struct drm_framebuffer *fb,
+static void _dpu_format_populate_addrs_ubwc(struct drm_framebuffer *fb,
 					    struct dpu_hw_fmt_layout *layout)
 {
 	const struct msm_format *fmt;
 	uint32_t base_addr = 0;
 	bool meta;
 
-	base_addr = msm_framebuffer_iova(fb, vm, 0);
+	base_addr = msm_framebuffer_iova(fb, 0);
 
 	fmt = msm_framebuffer_format(fb);
 	meta = MSM_FORMAT_IS_UBWC(fmt);
@@ -355,26 +354,23 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm,
 	}
 }
 
-static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm,
-					      struct drm_framebuffer *fb,
+static void _dpu_format_populate_addrs_linear(struct drm_framebuffer *fb,
 					      struct dpu_hw_fmt_layout *layout)
 {
 	unsigned int i;
 
 	/* Populate addresses for simple formats here */
 	for (i = 0; i < layout->num_planes; ++i)
-		layout->plane_addr[i] = msm_framebuffer_iova(fb, vm, i);
+		layout->plane_addr[i] = msm_framebuffer_iova(fb, i);
 }
 
 /**
  * dpu_format_populate_addrs - populate buffer addresses based on
  * mmu, fb, and format found in the fb
- * @vm: address space pointer
  * @fb: framebuffer pointer
  * @layout: format layout structure to populate
  */
-void dpu_format_populate_addrs(struct msm_gem_vm *vm,
-			       struct drm_framebuffer *fb,
+void dpu_format_populate_addrs(struct drm_framebuffer *fb,
 			       struct dpu_hw_fmt_layout *layout)
 {
 	const struct msm_format *fmt;
@@ -384,7 +380,7 @@ void dpu_format_populate_addrs(struct msm_gem_vm *vm,
 
 	/* Populate the addresses given the fb */
 	if (MSM_FORMAT_IS_UBWC(fmt) || MSM_FORMAT_IS_TILE(fmt))
-		_dpu_format_populate_addrs_ubwc(vm, fb, layout);
+		_dpu_format_populate_addrs_ubwc(fb, layout);
 	else
-		_dpu_format_populate_addrs_linear(vm, fb, layout);
+		_dpu_format_populate_addrs_linear(fb, layout);
 }
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
index 989f3e13c497..dc03f522e616 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
@@ -31,8 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 *supported_formats,
 	return false;
 }
 
-void dpu_format_populate_addrs(struct msm_gem_vm *vm,
-			       struct drm_framebuffer *fb,
+void dpu_format_populate_addrs(struct drm_framebuffer *fb,
 			       struct dpu_hw_fmt_layout *layout);
 
 int dpu_format_populate_plane_sizes(
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index 2640ab9e6e90..8f5f7cc27215 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -646,7 +646,6 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
 	struct drm_framebuffer *fb = new_state->fb;
 	struct dpu_plane *pdpu = to_dpu_plane(plane);
 	struct dpu_plane_state *pstate = to_dpu_plane_state(new_state);
-	struct dpu_kms *kms = _dpu_plane_get_kms(&pdpu->base);
 	int ret;
 
 	if (!new_state->fb)
@@ -654,9 +653,6 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
 
 	DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", fb->base.id);
 
-	/* cache vm */
-	pstate->vm = kms->base.vm;
-
 	/*
 	 * TODO: Need to sort out the msm_framebuffer_prepare() call below so
 	 * we can use msm_atomic_prepare_fb() instead of doing the
@@ -664,13 +660,10 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
 	 */
 	drm_gem_plane_helper_prepare_fb(plane, new_state);
 
-	if (pstate->vm) {
-		ret = msm_framebuffer_prepare(new_state->fb,
-				pstate->vm, pstate->needs_dirtyfb);
-		if (ret) {
-			DPU_ERROR("failed to prepare framebuffer\n");
-			return ret;
-		}
+	ret = msm_framebuffer_prepare(new_state->fb, pstate->needs_dirtyfb);
+	if (ret) {
+		DPU_ERROR("failed to prepare framebuffer\n");
+		return ret;
 	}
 
 	return 0;
@@ -689,8 +682,7 @@ static void dpu_plane_cleanup_fb(struct drm_plane *plane,
 
 	DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", old_state->fb->base.id);
 
-	msm_framebuffer_cleanup(old_state->fb, old_pstate->vm,
-			old_pstate->needs_dirtyfb);
+	msm_framebuffer_cleanup(old_state->fb, old_pstate->needs_dirtyfb);
 }
 
 static int dpu_plane_check_inline_rotation(struct dpu_plane *pdpu,
@@ -1353,7 +1345,7 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane,
 	pstate->needs_qos_remap |= (is_rt_pipe != pdpu->is_rt_pipe);
 	pdpu->is_rt_pipe = is_rt_pipe;
 
-	dpu_format_populate_addrs(pstate->vm, new_state->fb, &pstate->layout);
+	dpu_format_populate_addrs(new_state->fb, &pstate->layout);
 
 	DPU_DEBUG_PLANE(pdpu, "FB[%u] " DRM_RECT_FP_FMT "->crtc%u " DRM_RECT_FMT
" DRM_RECT_FMT ", %p4cc ubwc %d\n", fb->base.id, DRM_RECT_FP_ARG(&state->src), diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h index 3578f52048a5..a3a6e9028333 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h @@ -17,7 +17,6 @@ /** * struct dpu_plane_state: Define dpu extension of drm plane state object * @base: base drm plane state object - * @vm: pointer to address space for input/output buffers * @pipe: software pipe description * @r_pipe: software pipe description of the second pipe * @pipe_cfg: software pipe configuration @@ -34,7 +33,6 @@ */ struct dpu_plane_state { struct drm_plane_state base; - struct msm_gem_vm *vm; struct dpu_sw_pipe pipe; struct dpu_sw_pipe r_pipe; struct dpu_sw_pipe_cfg pipe_cfg; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c index 7743be6167f8..098c3b5ff2b2 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c @@ -79,30 +79,25 @@ static const struct drm_plane_funcs mdp4_plane_funcs = { static int mdp4_plane_prepare_fb(struct drm_plane *plane, struct drm_plane_state *new_state) { - struct msm_drm_private *priv = plane->dev->dev_private; - struct msm_kms *kms = priv->kms; - if (!new_state->fb) return 0; drm_gem_plane_helper_prepare_fb(plane, new_state); - return msm_framebuffer_prepare(new_state->fb, kms->vm, false); + return msm_framebuffer_prepare(new_state->fb, false); } static void mdp4_plane_cleanup_fb(struct drm_plane *plane, struct drm_plane_state *old_state) { struct mdp4_plane *mdp4_plane = to_mdp4_plane(plane); - struct mdp4_kms *mdp4_kms = get_kms(plane); - struct msm_kms *kms = &mdp4_kms->base.base; struct drm_framebuffer *fb = old_state->fb; if (!fb) return; DBG("%s: cleanup: FB[%u]", mdp4_plane->name, fb->base.id); - msm_framebuffer_cleanup(fb, kms->vm, false); + msm_framebuffer_cleanup(fb, false); } @@ -141,7 +136,6 @@ static void mdp4_plane_set_scanout(struct drm_plane *plane, { struct mdp4_plane *mdp4_plane = to_mdp4_plane(plane); struct mdp4_kms *mdp4_kms = get_kms(plane); - struct msm_kms *kms = &mdp4_kms->base.base; enum mdp4_pipe pipe = mdp4_plane->pipe; mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRC_STRIDE_A(pipe), @@ -153,13 +147,13 @@ static void mdp4_plane_set_scanout(struct drm_plane *plane, MDP4_PIPE_SRC_STRIDE_B_P3(fb->pitches[3])); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP0_BASE(pipe), - msm_framebuffer_iova(fb, kms->vm, 0)); + msm_framebuffer_iova(fb, 0)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP1_BASE(pipe), - msm_framebuffer_iova(fb, kms->vm, 1)); + msm_framebuffer_iova(fb, 1)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP2_BASE(pipe), - msm_framebuffer_iova(fb, kms->vm, 2)); + msm_framebuffer_iova(fb, 2)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP3_BASE(pipe), - msm_framebuffer_iova(fb, kms->vm, 3)); + msm_framebuffer_iova(fb, 3)); } static void mdp4_write_csc_config(struct mdp4_kms *mdp4_kms, diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c index 9f68a4747203..7c790406d533 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c @@ -135,8 +135,6 @@ static const struct drm_plane_funcs mdp5_plane_funcs = { static int mdp5_plane_prepare_fb(struct drm_plane *plane, struct drm_plane_state *new_state) { - struct msm_drm_private *priv = plane->dev->dev_private; - struct msm_kms *kms = priv->kms; bool needs_dirtyfb = 
 	bool needs_dirtyfb = to_mdp5_plane_state(new_state)->needs_dirtyfb;
 
 	if (!new_state->fb)
@@ -144,14 +142,12 @@ static int mdp5_plane_prepare_fb(struct drm_plane *plane,
 
 	drm_gem_plane_helper_prepare_fb(plane, new_state);
 
-	return msm_framebuffer_prepare(new_state->fb, kms->vm, needs_dirtyfb);
+	return msm_framebuffer_prepare(new_state->fb, needs_dirtyfb);
 }
 
 static void mdp5_plane_cleanup_fb(struct drm_plane *plane,
 		struct drm_plane_state *old_state)
 {
-	struct mdp5_kms *mdp5_kms = get_kms(plane);
-	struct msm_kms *kms = &mdp5_kms->base.base;
 	struct drm_framebuffer *fb = old_state->fb;
 	bool needed_dirtyfb = to_mdp5_plane_state(old_state)->needs_dirtyfb;
 
@@ -159,7 +155,7 @@ static void mdp5_plane_cleanup_fb(struct drm_plane *plane,
 		return;
 
 	DBG("%s: cleanup: FB[%u]", plane->name, fb->base.id);
-	msm_framebuffer_cleanup(fb, kms->vm, needed_dirtyfb);
+	msm_framebuffer_cleanup(fb, needed_dirtyfb);
 }
 
 static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state,
@@ -467,8 +463,6 @@ static void set_scanout_locked(struct mdp5_kms *mdp5_kms,
 		enum mdp5_pipe pipe,
 		struct drm_framebuffer *fb)
 {
-	struct msm_kms *kms = &mdp5_kms->base.base;
-
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC_STRIDE_A(pipe),
 			MDP5_PIPE_SRC_STRIDE_A_P0(fb->pitches[0]) |
 			MDP5_PIPE_SRC_STRIDE_A_P1(fb->pitches[1]));
@@ -478,13 +472,13 @@ static void set_scanout_locked(struct mdp5_kms *mdp5_kms,
 			MDP5_PIPE_SRC_STRIDE_B_P3(fb->pitches[3]));
 
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC0_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 0));
+			msm_framebuffer_iova(fb, 0));
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC1_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 1));
+			msm_framebuffer_iova(fb, 1));
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC2_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 2));
+			msm_framebuffer_iova(fb, 2));
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC3_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->vm, 3));
+			msm_framebuffer_iova(fb, 3));
 }
 
 /* Note: mdp5_plane->pipe_lock must be locked */
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index ad509403f072..e4c57deaa1f9 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -251,12 +251,9 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 int msm_gem_prime_pin(struct drm_gem_object *obj);
 void msm_gem_prime_unpin(struct drm_gem_object *obj);
 
-int msm_framebuffer_prepare(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm, bool needs_dirtyfb);
-void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm, bool needed_dirtyfb);
-uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm, int plane);
+int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb);
+void msm_framebuffer_cleanup(struct drm_framebuffer *fb, bool needed_dirtyfb);
+uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, int plane);
 struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane);
 const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb);
 struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev,
diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index 6df318b73534..8a3b88130f4d 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -75,10 +75,10 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
 
 /* prepare/pin all the fb's bo's for scanout.
  */
-int msm_framebuffer_prepare(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm,
-		bool needs_dirtyfb)
+int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb)
 {
+	struct msm_drm_private *priv = fb->dev->dev_private;
+	struct msm_gem_vm *vm = priv->kms->vm;
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	int ret, i, n = fb->format->num_planes;
 
@@ -98,10 +98,10 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb,
 	return 0;
 }
 
-void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm,
-		bool needed_dirtyfb)
+void msm_framebuffer_cleanup(struct drm_framebuffer *fb, bool needed_dirtyfb)
 {
+	struct msm_drm_private *priv = fb->dev->dev_private;
+	struct msm_gem_vm *vm = priv->kms->vm;
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	int i, n = fb->format->num_planes;
 
@@ -115,8 +115,7 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
 	memset(msm_fb->iova, 0, sizeof(msm_fb->iova));
 }
 
-uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb,
-		struct msm_gem_vm *vm, int plane)
+uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, int plane)
 {
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	return msm_fb->iova[plane] + fb->offsets[plane];
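After this patch, callers no longer thread a vm pointer through; a plane
prepare_fb hook reduces to something like the following (a sketch mirroring
the mdp4 change above; example_prepare_fb is an illustrative name, not a
function in the driver):

	static int example_prepare_fb(struct drm_plane *plane,
				      struct drm_plane_state *new_state)
	{
		if (!new_state->fb)
			return 0;

		drm_gem_plane_helper_prepare_fb(plane, new_state);

		/* The vm is now looked up internally from
		 * fb->dev->dev_private (the kms vm), per msm_fb.c above.
		 */
		return msm_framebuffer_prepare(new_state->fb, false);
	}
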
From patchwork Thu Jun 5 18:28:55 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894237
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 10/40] drm/msm: Refcount framebuffer pins
Date: Thu, 5 Jun 2025 11:28:55 -0700
Message-ID: <20250605183111.163594-11-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

We were already keeping a refcount of the number of prepares (pins), to
clear the iova array.  Use that to avoid unpinning the iova until the
last cleanup (unpin).  This way, when msm_gem_unpin_iova() actually
tears down the mapping, we won't have problems if the fb is being
scanned out on another display (for example).

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_fb.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index 8a3b88130f4d..3b17d83f6673 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -85,7 +85,8 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb)
 	if (needs_dirtyfb)
 		refcount_inc(&msm_fb->dirtyfb);
 
-	atomic_inc(&msm_fb->prepare_count);
+	if (atomic_inc_return(&msm_fb->prepare_count) > 1)
+		return 0;
 
 	for (i = 0; i < n; i++) {
 		ret = msm_gem_get_and_pin_iova(fb->obj[i], vm, &msm_fb->iova[i]);
@@ -108,11 +109,13 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb, bool needed_dirtyfb)
 	if (needed_dirtyfb)
 		refcount_dec(&msm_fb->dirtyfb);
 
+	if (atomic_dec_return(&msm_fb->prepare_count))
+		return;
+
+	memset(msm_fb->iova, 0, sizeof(msm_fb->iova));
+
 	for (i = 0; i < n; i++)
 		msm_gem_unpin_iova(fb->obj[i], vm);
-
-	if (!atomic_dec_return(&msm_fb->prepare_count))
-		memset(msm_fb->iova, 0, sizeof(msm_fb->iova));
 }
 
 uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, int plane)
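The pattern above hinges on atomic_inc_return()/atomic_dec_return()
returning the new count, so only the 0->1 transition pins and only the
1->0 transition unpins.  A standalone illustration of the same
first-pin/last-unpin behavior (userspace C11, not driver code):

	#include <stdatomic.h>
	#include <stdio.h>

	static atomic_int prepare_count;

	static void prepare(void)
	{
		/* atomic_fetch_add() returns the old value; +1 gives the new one */
		if (atomic_fetch_add(&prepare_count, 1) + 1 > 1)
			return;		/* already pinned by an earlier prepare */
		printf("pin buffers\n");
	}

	static void cleanup(void)
	{
		if (atomic_fetch_sub(&prepare_count, 1) - 1)
			return;		/* other users remain */
		printf("unpin buffers\n");
	}

	int main(void)
	{
		prepare();	/* pins */
		prepare();	/* no-op */
		cleanup();	/* no-op */
		cleanup();	/* unpins, after the last user is done */
		return 0;
	}
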
From patchwork Thu Jun 5 18:28:56 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894236
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 11/40] drm/msm: drm_gpuvm conversion
Date: Thu, 5 Jun 2025 11:28:56 -0700
Message-ID: <20250605183111.163594-12-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

Now that we've realigned deletion and allocation, switch over to using
drm_gpuvm/drm_gpuva.  This allows us to support multiple VMAs per BO per
VM, to allow mapping different parts of a single BO at different virtual
addresses, which is a key requirement for sparse/VM_BIND.
This prepares us for using drm_gpuvm to translate a batch of MAP/
MAP_NULL/UNMAP operations from userspace into a sequence of map/remap/
unmap steps for updating the page tables.

Unlike our prior vm/vma setup, with drm_gpuvm the vm_bo holds a
reference to the GEM object.  To prevent reference loops causing us to
leak all GEM objects, we implicitly tear down the mapping when the GEM
handle is closed or when the obj is unpinned.  This means the submit
needs to also hold a reference to the vm_bo, to prevent the VMA from
being torn down while the submit is in flight.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/Kconfig              |   1 +
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c    |   3 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c    |   6 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c    |   5 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c  |   7 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c |   5 +-
 drivers/gpu/drm/msm/msm_drv.c            |   1 +
 drivers/gpu/drm/msm/msm_gem.c            | 166 +++++++++++++--------
 drivers/gpu/drm/msm/msm_gem.h            |  90 +++++++++---
 drivers/gpu/drm/msm/msm_gem_submit.c     |   5 +-
 drivers/gpu/drm/msm/msm_gem_vma.c        | 140 +++++++++++------
 drivers/gpu/drm/msm/msm_kms.c            |   4 +-
 12 files changed, 298 insertions(+), 135 deletions(-)

diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
index 974bc7c0ea76..4af7e896c1d4 100644
--- a/drivers/gpu/drm/msm/Kconfig
+++ b/drivers/gpu/drm/msm/Kconfig
@@ -21,6 +21,7 @@ config DRM_MSM
 	select DRM_DISPLAY_HELPER
 	select DRM_BRIDGE_CONNECTOR
 	select DRM_EXEC
+	select DRM_GPUVM
 	select DRM_KMS_HELPER
 	select DRM_PANEL
 	select DRM_BRIDGE
diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 095bae92e3e8..889480aa13ba 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -472,8 +472,7 @@ a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 	struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu);
 	struct msm_gem_vm *vm;
 
-	vm = msm_gem_vm_create(mmu, "gpu", SZ_16M,
-		0xfff * SZ_64K);
+	vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true);
 
 	if (IS_ERR(vm) && !IS_ERR(mmu))
 		mmu->funcs->destroy(mmu);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 848acc382b7d..77d9ff9632d1 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1311,7 +1311,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
 	return 0;
 }
 
-static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
+static int a6xx_gmu_memory_probe(struct drm_device *drm, struct a6xx_gmu *gmu)
 {
 	struct msm_mmu *mmu;
 
@@ -1321,7 +1321,7 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
 	if (IS_ERR(mmu))
 		return PTR_ERR(mmu);
 
-	gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000);
+	gmu->vm = msm_gem_vm_create(drm, mmu, "gmu", 0x0, 0x80000000, true);
 	if (IS_ERR(gmu->vm))
 		return PTR_ERR(gmu->vm);
 
@@ -1940,7 +1940,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
 	if (ret)
 		goto err_put_device;
 
-	ret = a6xx_gmu_memory_probe(gmu);
+	ret = a6xx_gmu_memory_probe(adreno_gpu->base.dev, gmu);
 	if (ret)
 		goto err_put_device;
 
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index f4d9cdbc5602..26d0a863f38c 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2271,9 +2271,8 @@ a6xx_create_private_vm(struct msm_gpu *gpu)
 	if (IS_ERR(mmu))
 		return ERR_CAST(mmu);
 
-	return msm_gem_vm_create(mmu,
-		"gpu", ADRENO_VM_START,
-		adreno_private_vm_size(gpu));
+	return msm_gem_vm_create(gpu->dev, mmu, "gpu", ADRENO_VM_START,
+				 adreno_private_vm_size(gpu), true);
 }
 
 static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 35a99c81f7e0..287b032fefe4 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -226,7 +226,8 @@ adreno_iommu_create_vm(struct msm_gpu *gpu,
 	start = max_t(u64, SZ_16M, geometry->aperture_start);
 	size = geometry->aperture_end - start + 1;
 
-	vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size);
+	vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", start & GENMASK_ULL(48, 0),
+			       size, true);
 
 	if (IS_ERR(vm) && !IS_ERR(mmu))
 		mmu->funcs->destroy(mmu);
@@ -418,12 +419,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 	case MSM_PARAM_VA_START:
 		if (ctx->vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->va_start;
+		*value = ctx->vm->base.mm_start;
 		return 0;
 	case MSM_PARAM_VA_SIZE:
 		if (ctx->vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->va_size;
+		*value = ctx->vm->base.mm_range;
 		return 0;
 	case MSM_PARAM_HIGHEST_BANK_BIT:
 		*value = adreno_gpu->ubwc_config.highest_bank_bit;
diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
index 94fbc20b2fbd..d5b5628bee24 100644
--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
@@ -451,8 +451,9 @@ static int mdp4_kms_init(struct drm_device *dev)
 			"contig buffers for scanout\n");
 		vm = NULL;
 	} else {
-		vm = msm_gem_vm_create(mmu,
-			"mdp4", 0x1000, 0x100000000 - 0x1000);
+		vm = msm_gem_vm_create(dev, mmu, "mdp4",
+				       0x1000, 0x100000000 - 0x1000,
+				       true);
 
 		if (IS_ERR(vm)) {
 			if (!IS_ERR(mmu))
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 978f1d355b42..6ef29bc48bb0 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -776,6 +776,7 @@ static const struct file_operations fops = {
 static const struct drm_driver msm_driver = {
 	.driver_features    = DRIVER_GEM |
+				DRIVER_GEM_GPUVA |
 				DRIVER_RENDER |
 				DRIVER_ATOMIC |
 				DRIVER_MODESET |
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 50b866dcf439..6f99f9356c5c 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -47,9 +47,53 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file)
 	return 0;
 }
 
+static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close);
+
+static void detach_vm(struct drm_gem_object *obj, struct msm_gem_vm *vm)
+{
+	msm_gem_assert_locked(obj);
+
+	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_find(&vm->base, obj);
+	if (vm_bo) {
+		struct drm_gpuva *vma;
+
+		drm_gpuvm_bo_for_each_va (vma, vm_bo) {
+			if (vma->vm != &vm->base)
+				continue;
+			msm_gem_vma_purge(to_msm_vma(vma));
+			msm_gem_vma_close(to_msm_vma(vma));
+			break;
+		}
+
+		drm_gpuvm_bo_put(vm_bo);
+	}
+}
+
 static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 {
+	struct msm_context *ctx = file->driver_priv;
+
 	update_ctx_mem(file, -obj->size);
+
+	/*
+	 * If VM isn't created yet, nothing to cleanup.  And in fact calling
+	 * put_iova_spaces() with vm=NULL would be bad, in that it will tear-
+	 * down the mappings of shared buffers in other contexts.
+	 */
+	if (!ctx->vm)
+		return;
+
+	/*
+	 * TODO we might need to kick this to a queue to avoid blocking
+	 * in CLOSE ioctl
+	 */
+	dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false,
+			      msecs_to_jiffies(1000));
+
+	msm_gem_lock(obj);
+	put_iova_spaces(obj, &ctx->vm->base, true);
+	detach_vm(obj, ctx->vm);
+	msm_gem_unlock(obj);
 }
 
 /*
@@ -171,6 +215,13 @@ static void put_pages(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
+	/*
+	 * Skip gpuvm in the object free path to avoid a WARN_ON() splat.
+	 * See explaination in msm_gem_assert_locked()
+	 */
+	if (kref_read(&obj->refcount))
+		drm_gpuvm_bo_gem_evict(obj, true);
+
 	if (msm_obj->pages) {
 		if (msm_obj->sgt) {
 			/* For non-cached buffers, ensure the new
@@ -338,16 +389,25 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
 }
 
 static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
-		struct msm_gem_vm *vm)
+				      struct msm_gem_vm *vm)
 {
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_gem_vma *vma;
+	struct drm_gpuvm_bo *vm_bo;
 
 	msm_gem_assert_locked(obj);
 
-	list_for_each_entry(vma, &msm_obj->vmas, list) {
-		if (vma->vm == vm)
-			return vma;
+	drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+		struct drm_gpuva *vma;
+
+		drm_gpuvm_bo_for_each_va (vma, vm_bo) {
+			if (vma->vm == &vm->base) {
+				/* lookup_vma() should only be used in paths
+				 * with at most one vma per vm
+				 */
+				GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva));
+
+				return to_msm_vma(vma);
+			}
+		}
 	}
 
 	return NULL;
@@ -360,33 +420,29 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
  * mapping.
  */
 static void
-put_iova_spaces(struct drm_gem_object *obj, bool close)
+put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
 {
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_gem_vma *vma, *tmp;
+	struct drm_gpuvm_bo *vm_bo, *tmp;
 
 	msm_gem_assert_locked(obj);
 
-	list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
-		if (vma->vm) {
-			msm_gem_vma_purge(vma);
-			if (close)
-				msm_gem_vma_close(vma);
-		}
-	}
-}
+	drm_gem_for_each_gpuvm_bo_safe (vm_bo, tmp, obj) {
+		struct drm_gpuva *vma, *vmatmp;
 
-/* Called with msm_obj locked */
-static void
-put_iova_vmas(struct drm_gem_object *obj)
-{
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	struct msm_gem_vma *vma, *tmp;
+		if (vm && vm_bo->vm != vm)
+			continue;
 
-	msm_gem_assert_locked(obj);
+		drm_gpuvm_bo_get(vm_bo);
+
+		drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
+			struct msm_gem_vma *msm_vma = to_msm_vma(vma);
+
+			msm_gem_vma_purge(msm_vma);
+			if (close)
+				msm_gem_vma_close(msm_vma);
+		}
 
-	list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
-		msm_gem_vma_close(vma);
+		drm_gpuvm_bo_put(vm_bo);
 	}
 }
 
@@ -394,7 +450,6 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
 		struct msm_gem_vm *vm,
 		u64 range_start, u64 range_end)
 {
-	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_gem_vma *vma;
 
 	msm_gem_assert_locked(obj);
@@ -403,12 +458,9 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
 
 	if (!vma) {
 		vma = msm_gem_vma_new(vm, obj, range_start, range_end);
-		if (IS_ERR(vma))
-			return vma;
-		list_add_tail(&vma->list, &msm_obj->vmas);
 	} else {
-		GEM_WARN_ON(vma->iova < range_start);
-		GEM_WARN_ON((vma->iova + obj->size) > range_end);
+		GEM_WARN_ON(vma->base.va.addr < range_start);
+		GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end);
 	}
 
 	return vma;
@@ -492,7 +544,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
 
 	ret = msm_gem_pin_vma_locked(obj, vma);
if (!ret) { - *iova = vma->iova; + *iova = vma->base.va.addr; pin_obj_locked(obj); } @@ -538,7 +590,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { - *iova = vma->iova; + *iova = vma->base.va.addr; } msm_gem_unlock(obj); @@ -579,7 +631,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj, vma = get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret = PTR_ERR(vma); - } else if (GEM_WARN_ON(vma->iova != iova)) { + } else if (GEM_WARN_ON(vma->base.va.addr != iova)) { clear_iova(obj, vm); ret = -EBUSY; } @@ -601,9 +653,10 @@ void msm_gem_unpin_iova(struct drm_gem_object *obj, msm_gem_lock(obj); vma = lookup_vma(obj, vm); - if (!GEM_WARN_ON(!vma)) { + if (vma) { msm_gem_unpin_locked(obj); } + detach_vm(obj, vm); msm_gem_unlock(obj); } @@ -763,7 +816,7 @@ void msm_gem_purge(struct drm_gem_object *obj) GEM_WARN_ON(!is_purgeable(msm_obj)); /* Get rid of any iommu mapping(s): */ - put_iova_spaces(obj, false); + put_iova_spaces(obj, NULL, false); msm_gem_vunmap(obj); @@ -771,8 +824,6 @@ void msm_gem_purge(struct drm_gem_object *obj) put_pages(obj); - put_iova_vmas(obj); - mutex_lock(&priv->lru.lock); /* A one-way transition: */ msm_obj->madv = __MSM_MADV_PURGED; @@ -803,7 +854,7 @@ void msm_gem_evict(struct drm_gem_object *obj) GEM_WARN_ON(is_unevictable(msm_obj)); /* Get rid of any iommu mapping(s): */ - put_iova_spaces(obj, false); + put_iova_spaces(obj, NULL, false); drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); @@ -869,7 +920,6 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct dma_resv *robj = obj->resv; - struct msm_gem_vma *vma; uint64_t off = drm_vma_node_start(&obj->vma_node); const char *madv; @@ -912,14 +962,17 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, seq_printf(m, " %08zu %9s %-32s\n", obj->size, madv, msm_obj->name); - if (!list_empty(&msm_obj->vmas)) { + if (!list_empty(&obj->gpuva.list)) { + struct drm_gpuvm_bo *vm_bo; seq_puts(m, " vmas:"); - list_for_each_entry(vma, &msm_obj->vmas, list) { - const char *name, *comm; - if (vma->vm) { - struct msm_gem_vm *vm = vma->vm; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct drm_gpuva *vma; + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + const char *name, *comm; + struct msm_gem_vm *vm = to_msm_vm(vma->vm); struct task_struct *task = get_pid_task(vm->pid, PIDTYPE_PID); if (task) { @@ -928,15 +981,14 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, } else { comm = NULL; } - name = vm->name; - } else { - name = comm = NULL; + name = vm->base.name; + + seq_printf(m, " [%s%s%s: vm=%p, %08llx, %smapped]", + name, comm ? ":" : "", comm ? comm : "", + vma->vm, vma->va.addr, + to_msm_vma(vma)->mapped ? "" : "un"); + kfree(comm); } - seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]", - name, comm ? ":" : "", comm ? comm : "", - vma->vm, vma->iova, - vma->mapped ? 
"mapped" : "unmapped"); - kfree(comm); } seq_puts(m, "\n"); @@ -982,7 +1034,7 @@ static void msm_gem_free_object(struct drm_gem_object *obj) list_del(&msm_obj->node); mutex_unlock(&priv->obj_lock); - put_iova_spaces(obj, true); + put_iova_spaces(obj, NULL, true); if (obj->import_attach) { GEM_WARN_ON(msm_obj->vaddr); @@ -992,13 +1044,10 @@ static void msm_gem_free_object(struct drm_gem_object *obj) */ kvfree(msm_obj->pages); - put_iova_vmas(obj); - drm_prime_gem_destroy(obj, msm_obj->sgt); } else { msm_gem_vunmap(obj); put_pages(obj); - put_iova_vmas(obj); } drm_gem_object_release(obj); @@ -1104,7 +1153,6 @@ static int msm_gem_new_impl(struct drm_device *dev, msm_obj->madv = MSM_MADV_WILLNEED; INIT_LIST_HEAD(&msm_obj->node); - INIT_LIST_HEAD(&msm_obj->vmas); *obj = &msm_obj->base; (*obj)->funcs = &msm_gem_object_funcs; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 9bd78642671c..60769c68d408 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -10,6 +10,7 @@ #include #include #include "drm/drm_exec.h" +#include "drm/drm_gpuvm.h" #include "drm/gpu_scheduler.h" #include "msm_drv.h" @@ -22,30 +23,67 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */ +/** + * struct msm_gem_vm - VM object + * + * A VM object representing a GPU (or display or GMU or ...) virtual address + * space. + * + * In the case of GPU, if per-process address spaces are supported, the address + * space is split into two VMs, which map to TTBR0 and TTBR1 in the SMMU. TTBR0 + * is used for userspace objects, and is unique per msm_context/drm_file, while + * TTBR1 is the same for all processes. (The kernel controlled ringbuffer and + * a few other kernel controlled buffers live in TTBR1.) + * + * The GPU TTBR0 vm can be managed by userspace or by the kernel, depending on + * whether userspace supports VM_BIND. All other vm's are managed by the kernel. + * (Managed by kernel means the kernel is responsible for VA allocation.) + * + * Note that because VM_BIND allows a given BO to be mapped multiple times in + * a VM, and therefore have multiple VMA's in a VM, there is an extra object + * provided by drm_gpuvm infrastructure.. the drm_gpuvm_bo, which is not + * embedded in any larger driver structure. The GEM object holds a list of + * drm_gpuvm_bo, which in turn holds a list of msm_gem_vma. A linked vma + * holds a reference to the vm_bo, and drops it when the vma is unlinked. + * So we just need to call drm_gpuvm_bo_obtain() to return a ref to an + * existing vm_bo, or create a new one. Once the vma is linked, the ref + * to the vm_bo can be dropped (since the vma is holding one). + */ struct msm_gem_vm { - const char *name; - /* NOTE: mm managed at the page level, size is in # of pages - * and position mm_node->start is in # of pages: + /** @base: Inherit from drm_gpuvm. */ + struct drm_gpuvm base; + + /** + * @mm: Memory management for kernel managed VA allocations + * + * Only used for kernel managed VMs, unused for user managed VMs. + * + * Protected by @mm_lock. 
*/ struct drm_mm mm; - spinlock_t lock; /* Protects drm_mm node allocation/removal */ + + /** @mm_lock: protects @mm node allocation/removal */ + struct spinlock mm_lock; + + /** @vm_lock: protects gpuvm insert/remove/traverse */ + struct mutex vm_lock; + + /** @mmu: The mmu object which manages the pgtables */ struct msm_mmu *mmu; - struct kref kref; - /* For address spaces associated with a specific process, this + /** + * @pid: For address spaces associated with a specific process, this * will be non-NULL: */ struct pid *pid; - /* @faults: the number of GPU hangs associated with this address space */ + /** @faults: the number of GPU hangs associated with this address space */ int faults; - /** @va_start: lowest possible address to allocate */ - uint64_t va_start; - - /** @va_size: the size of the address space (in bytes) */ - uint64_t va_size; + /** @managed: is this a kernel managed VM? */ + bool managed; }; +#define to_msm_vm(x) container_of(x, struct msm_gem_vm, base) struct msm_gem_vm * msm_gem_vm_get(struct msm_gem_vm *vm); @@ -53,18 +91,33 @@ msm_gem_vm_get(struct msm_gem_vm *vm); void msm_gem_vm_put(struct msm_gem_vm *vm); struct msm_gem_vm * -msm_gem_vm_create(struct msm_mmu *mmu, const char *name, - u64 va_start, u64 size); +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, + u64 va_start, u64 va_size, bool managed); struct msm_fence_context; +#define MSM_VMA_DUMP (DRM_GPUVA_USERBITS << 0) + +/** + * struct msm_gem_vma - a VMA mapping + * + * Represents a combination of a GEM object plus a VM. + */ struct msm_gem_vma { + /** @base: inherit from drm_gpuva */ + struct drm_gpuva base; + + /** + * @node: mm node for VA allocation + * + * Only used by kernel managed VMs + */ struct drm_mm_node node; - uint64_t iova; - struct msm_gem_vm *vm; - struct list_head list; /* node in msm_gem_object::vmas */ + + /** @mapped: Is this VMA mapped? 
*/ bool mapped; }; +#define to_msm_vma(x) container_of(x, struct msm_gem_vma, base) struct msm_gem_vma * msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, @@ -100,8 +153,6 @@ struct msm_gem_object { struct sg_table *sgt; void *vaddr; - struct list_head vmas; /* list of msm_gem_vma */ - char name[32]; /* Identifier to print for the debugfs files */ /* userspace metadata backchannel */ @@ -292,6 +343,7 @@ struct msm_gem_submit { struct drm_gem_object *obj; uint32_t handle; }; + struct drm_gpuvm_bo *vm_bo; uint64_t iova; } bos[]; }; diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index c184b1a1f522..c8e6750e77a3 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -321,7 +321,8 @@ static int submit_pin_objects(struct msm_gem_submit *submit) if (ret) break; - submit->bos[i].iova = vma->iova; + submit->bos[i].vm_bo = drm_gpuvm_bo_get(vma->base.vm_bo); + submit->bos[i].iova = vma->base.va.addr; } /* @@ -474,8 +475,10 @@ void msm_submit_retire(struct msm_gem_submit *submit) for (i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; + struct drm_gpuvm_bo *vm_bo = submit->bos[i].vm_bo; drm_gem_object_put(obj); + drm_gpuvm_bo_put(vm_bo); } } diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index ca29e81d79d2..1f4c9b5c2e8f 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -5,14 +5,13 @@ */ #include "msm_drv.h" -#include "msm_fence.h" #include "msm_gem.h" #include "msm_mmu.h" static void -msm_gem_vm_destroy(struct kref *kref) +msm_gem_vm_free(struct drm_gpuvm *gpuvm) { - struct msm_gem_vm *vm = container_of(kref, struct msm_gem_vm, kref); + struct msm_gem_vm *vm = container_of(gpuvm, struct msm_gem_vm, base); drm_mm_takedown(&vm->mm); if (vm->mmu) @@ -25,14 +24,14 @@ msm_gem_vm_destroy(struct kref *kref) void msm_gem_vm_put(struct msm_gem_vm *vm) { if (vm) - kref_put(&vm->kref, msm_gem_vm_destroy); + drm_gpuvm_put(&vm->base); } struct msm_gem_vm * msm_gem_vm_get(struct msm_gem_vm *vm) { if (!IS_ERR_OR_NULL(vm)) - kref_get(&vm->kref); + drm_gpuvm_get(&vm->base); return vm; } @@ -40,14 +39,14 @@ msm_gem_vm_get(struct msm_gem_vm *vm) /* Actually unmap memory for the vma */ void msm_gem_vma_purge(struct msm_gem_vma *vma) { - struct msm_gem_vm *vm = vma->vm; - unsigned size = vma->node.size; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); + unsigned size = vma->base.va.range; /* Don't do anything if the memory isn't mapped */ if (!vma->mapped) return; - vm->mmu->funcs->unmap(vm->mmu, vma->iova, size); + vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size); vma->mapped = false; } @@ -57,10 +56,10 @@ int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_vm *vm = vma->vm; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); int ret; - if (GEM_WARN_ON(!vma->iova)) + if (GEM_WARN_ON(!vma->base.va.addr)) return -EINVAL; if (vma->mapped) @@ -68,9 +67,6 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, vma->mapped = true; - if (!vm) - return 0; - /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold * a lock across map/unmap which is also used in the job_run() @@ -80,7 +76,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. 
*/ - ret = vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot); + ret = vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot); if (ret) { vma->mapped = false; @@ -92,19 +88,20 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, /* Close an iova. Warn if it is still in use */ void msm_gem_vma_close(struct msm_gem_vma *vma) { - struct msm_gem_vm *vm = vma->vm; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); GEM_WARN_ON(vma->mapped); - spin_lock(&vm->lock); - if (vma->iova) + spin_lock(&vm->mm_lock); + if (vma->base.va.addr) drm_mm_remove_node(&vma->node); - spin_unlock(&vm->lock); + spin_unlock(&vm->mm_lock); - vma->iova = 0; - list_del(&vma->list); + mutex_lock(&vm->vm_lock); + drm_gpuva_remove(&vma->base); + drm_gpuva_unlink(&vma->base); + mutex_unlock(&vm->vm_lock); - msm_gem_vm_put(vm); kfree(vma); } @@ -113,6 +110,7 @@ struct msm_gem_vma * msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, u64 range_start, u64 range_end) { + struct drm_gpuvm_bo *vm_bo; struct msm_gem_vma *vma; int ret; @@ -120,36 +118,83 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, if (!vma) return ERR_PTR(-ENOMEM); - vma->vm = vm; + if (vm->managed) { + spin_lock(&vm->mm_lock); + ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, + obj->size, PAGE_SIZE, 0, + range_start, range_end, 0); + spin_unlock(&vm->mm_lock); - spin_lock(&vm->lock); - ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, - obj->size, PAGE_SIZE, 0, - range_start, range_end, 0); - spin_unlock(&vm->lock); + if (ret) + goto err_free_vma; - if (ret) - goto err_free_vma; + range_start = vma->node.start; + range_end = range_start + obj->size; + } - vma->iova = vma->node.start; + GEM_WARN_ON((range_end - range_start) > obj->size); + + drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0); vma->mapped = false; - INIT_LIST_HEAD(&vma->list); + mutex_lock(&vm->vm_lock); + ret = drm_gpuva_insert(&vm->base, &vma->base); + mutex_unlock(&vm->vm_lock); + if (ret) + goto err_free_range; - kref_get(&vm->kref); + vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj); + if (IS_ERR(vm_bo)) { + ret = PTR_ERR(vm_bo); + goto err_va_remove; + } + + mutex_lock(&vm->vm_lock); + drm_gpuvm_bo_extobj_add(vm_bo); + drm_gpuva_link(&vma->base, vm_bo); + mutex_unlock(&vm->vm_lock); + GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo)); return vma; +err_va_remove: + mutex_lock(&vm->vm_lock); + drm_gpuva_remove(&vma->base); + mutex_unlock(&vm->vm_lock); +err_free_range: + if (vm->managed) + drm_mm_remove_node(&vma->node); err_free_vma: kfree(vma); return ERR_PTR(ret); } +static const struct drm_gpuvm_ops msm_gpuvm_ops = { + .vm_free = msm_gem_vm_free, +}; + +/** + * msm_gem_vm_create() - Create and initialize a &msm_gem_vm + * @drm: the drm device + * @mmu: the backing MMU objects handling mapping/unmapping + * @name: the name of the VM + * @va_start: the start offset of the VA space + * @va_size: the size of the VA space + * @managed: is it a kernel managed VM? + * + * In a kernel managed VM, the kernel handles address allocation, and only + * synchronous operations are supported. In a user managed VM, userspace + * handles virtual address allocation, and both async and sync operations + * are supported. 
+ */
 struct msm_gem_vm *
-msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
-		u64 va_start, u64 size)
+msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
+		u64 va_start, u64 va_size, bool managed)
 {
+	enum drm_gpuvm_flags flags = 0;
 	struct msm_gem_vm *vm;
+	struct drm_gem_object *dummy_gem;
+	int ret = 0;
 
 	if (IS_ERR(mmu))
 		return ERR_CAST(mmu);
@@ -158,15 +203,28 @@ msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
 	if (!vm)
 		return ERR_PTR(-ENOMEM);
 
-	spin_lock_init(&vm->lock);
-	vm->name = name;
-	vm->mmu = mmu;
-	vm->va_start = va_start;
-	vm->va_size = size;
+	dummy_gem = drm_gpuvm_resv_object_alloc(drm);
+	if (!dummy_gem) {
+		ret = -ENOMEM;
+		goto err_free_vm;
+	}
+
+	drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem,
+		       va_start, va_size, 0, 0, &msm_gpuvm_ops);
+	drm_gem_object_put(dummy_gem);
+
+	spin_lock_init(&vm->mm_lock);
+	mutex_init(&vm->vm_lock);
 
-	drm_mm_init(&vm->mm, va_start, size);
+	vm->mmu = mmu;
+	vm->managed = managed;
 
-	kref_init(&vm->kref);
+	drm_mm_init(&vm->mm, va_start, va_size);
 
 	return vm;
+
+err_free_vm:
+	kfree(vm);
+	return ERR_PTR(ret);
+
 }

diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c
index 88504c4b842f..6458bd82a0cd 100644
--- a/drivers/gpu/drm/msm/msm_kms.c
+++ b/drivers/gpu/drm/msm/msm_kms.c
@@ -204,8 +204,8 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev)
 		return NULL;
 	}
 
-	vm = msm_gem_vm_create(mmu, "mdp_kms",
-		0x1000, 0x100000000 - 0x1000);
+	vm = msm_gem_vm_create(dev, mmu, "mdp_kms",
+		0x1000, 0x100000000 - 0x1000, true);
 	if (IS_ERR(vm)) {
 		dev_err(mdp_dev, "vm create, error %pe\n", vm);
 		mmu->funcs->destroy(mmu);

From patchwork Thu Jun 5 18:28:57 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894507
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Sumit Semwal, Christian König, linux-kernel@vger.kernel.org (open
list), linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v6 12/40] drm/msm: Convert vm locking
Date: Thu, 5 Jun 2025 11:28:57 -0700
Message-ID: <20250605183111.163594-13-robin.clark@oss.qualcomm.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0

From: Rob Clark

Convert to using the gpuvm's r_obj for serializing access to the VM. This
way we can use the drm_exec helper for dealing with deadlock detection
and backoff.

This will let us deal with upcoming locking order conflicts with the
VM_BIND implementation (ie. in some scenarios we need to acquire the obj
lock first, for ex. to iterate all the VMs an obj is bound in, and in
other scenarios we need to acquire the VM lock first).
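The locking pattern at the core of this conversion boils down to the
sketch below. It mirrors the msm_gem_lock_vm_and_obj() helper this patch
adds; the standalone function name here is illustrative only, and only
drm_exec/drm_gpuvm calls that appear in the patch itself are assumed:

/*
 * Sketch: lock the VM's resv object first, then the GEM object's own
 * resv if it is distinct, backing off and retrying on contention.
 */
static int example_lock_vm_and_obj(struct drm_exec *exec,
				   struct drm_gem_object *obj,
				   struct drm_gpuvm *vm)
{
	int ret = 0;

	/* No flags, at most two objects to lock: */
	drm_exec_init(exec, 0, 2);
	drm_exec_until_all_locked (exec) {
		ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(vm));
		/* Skip the obj if it shares the VM's resv: */
		if (!ret && (obj->resv != drm_gpuvm_resv(vm)))
			ret = drm_exec_lock_obj(exec, obj);
		/* On contention, drop everything and restart the loop: */
		drm_exec_retry_on_contention(exec);
		if (ret)
			break;
	}

	return ret;
}

The caller then releases all acquired locks with drm_exec_fini().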
Signed-off-by: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 41 +++++++++---- drivers/gpu/drm/msm/msm_gem.h | 37 ++++++++++-- drivers/gpu/drm/msm/msm_gem_shrinker.c | 80 +++++++++++++++++++++++--- drivers/gpu/drm/msm/msm_gem_submit.c | 9 ++- drivers/gpu/drm/msm/msm_gem_vma.c | 24 +++----- 5 files changed, 152 insertions(+), 39 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 6f99f9356c5c..45a542173cca 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -52,6 +52,7 @@ static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bo static void detach_vm(struct drm_gem_object *obj, struct msm_gem_vm *vm) { msm_gem_assert_locked(obj); + drm_gpuvm_resv_assert_held(&vm->base); struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_find(&vm->base, obj); if (vm_bo) { @@ -72,6 +73,7 @@ static void detach_vm(struct drm_gem_object *obj, struct msm_gem_vm *vm) static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) { struct msm_context *ctx = file->driver_priv; + struct drm_exec exec; update_ctx_mem(file, -obj->size); @@ -90,10 +92,10 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false, msecs_to_jiffies(1000)); - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm); put_iova_spaces(obj, &ctx->vm->base, true); detach_vm(obj, ctx->vm); - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ } /* @@ -559,11 +561,12 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 range_end) { + struct drm_exec exec; int ret; - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); ret = get_and_pin_iova_range_locked(obj, vm, iova, range_start, range_end); - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ return ret; } @@ -583,16 +586,17 @@ int msm_gem_get_iova(struct drm_gem_object *obj, struct msm_gem_vm *vm, uint64_t *iova) { struct msm_gem_vma *vma; + struct drm_exec exec; int ret = 0; - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); vma = get_vma_locked(obj, vm, 0, U64_MAX); if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { *iova = vma->base.va.addr; } - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ return ret; } @@ -621,9 +625,10 @@ static int clear_iova(struct drm_gem_object *obj, int msm_gem_set_iova(struct drm_gem_object *obj, struct msm_gem_vm *vm, uint64_t iova) { + struct drm_exec exec; int ret = 0; - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); if (!iova) { ret = clear_iova(obj, vm); } else { @@ -636,7 +641,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj, ret = -EBUSY; } } - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ return ret; } @@ -650,14 +655,15 @@ void msm_gem_unpin_iova(struct drm_gem_object *obj, struct msm_gem_vm *vm) { struct msm_gem_vma *vma; + struct drm_exec exec; - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); vma = lookup_vma(obj, vm); if (vma) { msm_gem_unpin_locked(obj); } detach_vm(obj, vm); - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ } int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, @@ -1029,12 +1035,27 @@ static void msm_gem_free_object(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); struct drm_device *dev = obj->dev; struct msm_drm_private *priv = dev->dev_private; + struct drm_exec exec; 
 	mutex_lock(&priv->obj_lock);
 	list_del(&msm_obj->node);
 	mutex_unlock(&priv->obj_lock);
 
+	/*
+	 * We need to lock any VMs the object is still attached to, but not
+	 * the object itself (see explanation in msm_gem_assert_locked()),
+	 * so just open-code this special case:
+	 */
+	drm_exec_init(&exec, 0, 0);
+	drm_exec_until_all_locked (&exec) {
+		struct drm_gpuvm_bo *vm_bo;
+		drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+			drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(vm_bo->vm));
+			drm_exec_retry_on_contention(&exec);
+		}
+	}
 	put_iova_spaces(obj, NULL, true);
+	drm_exec_fini(&exec);	/* drop locks */
 
 	if (obj->import_attach) {
 		GEM_WARN_ON(msm_obj->vaddr);

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 60769c68d408..31933ed8fb2c 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -62,12 +62,6 @@ struct msm_gem_vm {
 	 */
 	struct drm_mm mm;
 
-	/** @mm_lock: protects @mm node allocation/removal */
-	struct spinlock mm_lock;
-
-	/** @vm_lock: protects gpuvm insert/remove/traverse */
-	struct mutex vm_lock;
-
 	/** @mmu: The mmu object which manages the pgtables */
 	struct msm_mmu *mmu;
 
@@ -246,6 +240,37 @@ msm_gem_unlock(struct drm_gem_object *obj)
 	dma_resv_unlock(obj->resv);
 }
 
+/**
+ * msm_gem_lock_vm_and_obj() - Helper to lock an obj + VM
+ * @exec: the exec context helper which will be initialized
+ * @obj: the GEM object to lock
+ * @vm: the VM to lock
+ *
+ * Operations which modify a VM frequently need to lock both the VM and
+ * the object being mapped/unmapped/etc.  This helper uses drm_exec to
+ * acquire both locks, dealing with potential deadlock/backoff scenarios
+ * which arise when multiple locks are involved.
+ */
+static inline int
+msm_gem_lock_vm_and_obj(struct drm_exec *exec,
+			struct drm_gem_object *obj,
+			struct msm_gem_vm *vm)
+{
+	int ret = 0;
+
+	drm_exec_init(exec, 0, 2);
+	drm_exec_until_all_locked (exec) {
+		ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(&vm->base));
+		if (!ret && (obj->resv != drm_gpuvm_resv(&vm->base)))
+			ret = drm_exec_lock_obj(exec, obj);
+		drm_exec_retry_on_contention(exec);
+		if (GEM_WARN_ON(ret))
+			break;
+	}
+
+	return ret;
+}
+
 static inline void
 msm_gem_assert_locked(struct drm_gem_object *obj)
 {

diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index de185fc34084..5faf6227584a 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -43,6 +43,75 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 	return count;
 }
 
+static bool
+with_vm_locks(struct ww_acquire_ctx *ticket,
+	      void (*fn)(struct drm_gem_object *obj),
+	      struct drm_gem_object *obj)
+{
+	/*
+	 * Track last locked entry for unwinding locks in error and
+	 * success paths
+	 */
+	struct drm_gpuvm_bo *vm_bo, *last_locked = NULL;
+	int ret = 0;
+
+	drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+		struct dma_resv *resv = drm_gpuvm_resv(vm_bo->vm);
+
+		if (resv == obj->resv)
+			continue;
+
+		ret = dma_resv_lock(resv, ticket);
+
+		/*
+		 * Since we already skip the case when the VM and obj
+		 * share a resv (ie. _NO_SHARE objs), we don't expect
+		 * to hit a double-locking scenario... which the lock
+		 * unwinding cannot really cope with.
+		 */
+		WARN_ON(ret == -EALREADY);
+
+		/*
+		 * Don't bother with slow-lock / backoff / retry sequence,
+		 * if we can't get the lock just give up and move on to
+		 * the next object.
+		 */
+		if (ret)
+			goto out_unlock;
+
+		/*
+		 * Hold a ref to prevent the vm_bo from being freed
+		 * and removed from the obj's gpuva list, as that
+		 * would result in missing the unlock below
+		 */
+		drm_gpuvm_bo_get(vm_bo);
+
+		last_locked = vm_bo;
+	}
+
+	fn(obj);
+
+out_unlock:
+	if (last_locked) {
+		drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+			struct dma_resv *resv = drm_gpuvm_resv(vm_bo->vm);
+
+			if (resv == obj->resv)
+				continue;
+
+			dma_resv_unlock(resv);
+
+			/* Drop the ref taken while locking: */
+			drm_gpuvm_bo_put(vm_bo);
+
+			if (last_locked == vm_bo)
+				break;
+		}
+	}
+
+	return ret == 0;
+}
+
 static bool
 purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
@@ -52,9 +121,7 @@ purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 	if (msm_gem_active(obj))
 		return false;
 
-	msm_gem_purge(obj);
-
-	return true;
+	return with_vm_locks(ticket, msm_gem_purge, obj);
 }
 
 static bool
@@ -66,9 +133,7 @@ evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 	if (msm_gem_active(obj))
 		return false;
 
-	msm_gem_evict(obj);
-
-	return true;
+	return with_vm_locks(ticket, msm_gem_evict, obj);
 }
 
 static bool
@@ -100,6 +165,7 @@ static unsigned long
 msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 {
 	struct msm_drm_private *priv = shrinker->private_data;
+	struct ww_acquire_ctx ticket;
 	struct {
 		struct drm_gem_lru *lru;
 		bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket);
@@ -124,7 +190,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 		drm_gem_lru_scan(stages[i].lru, nr,
 				 &stages[i].remaining,
 				 stages[i].shrink,
-				 NULL);
+				 &ticket);
 		nr -= stages[i].freed;
 		freed += stages[i].freed;
 		remaining += stages[i].remaining;

diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index c8e6750e77a3..01e3ae71ffe8 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -256,11 +256,18 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
 /* This is where we make sure all the bo's are reserved and pin'd: */
 static int submit_lock_objects(struct msm_gem_submit *submit)
 {
+	unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT;
 	int ret;
 
-	drm_exec_init(&submit->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, submit->nr_bos);
+// TODO need to add vm_bind path which locks vm resv + external objs
+	drm_exec_init(&submit->exec, flags, submit->nr_bos);
 
 	drm_exec_until_all_locked (&submit->exec) {
+		ret = drm_exec_lock_obj(&submit->exec,
+					drm_gpuvm_resv_obj(&submit->vm->base));
+		drm_exec_retry_on_contention(&submit->exec);
+		if (ret)
+			goto error;
 		for (unsigned i = 0; i < submit->nr_bos; i++) {
 			struct drm_gem_object *obj = submit->bos[i].obj;
 			ret = drm_exec_prepare_obj(&submit->exec, obj, 1);

diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 1f4c9b5c2e8f..ccb20897a2b0 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -92,15 +92,13 @@ void msm_gem_vma_close(struct msm_gem_vma *vma)
 
 	GEM_WARN_ON(vma->mapped);
 
-	spin_lock(&vm->mm_lock);
+	drm_gpuvm_resv_assert_held(&vm->base);
+
 	if (vma->base.va.addr)
 		drm_mm_remove_node(&vma->node);
-	spin_unlock(&vm->mm_lock);
 
-	mutex_lock(&vm->vm_lock);
 	drm_gpuva_remove(&vma->base);
 	drm_gpuva_unlink(&vma->base);
-	mutex_unlock(&vm->vm_lock);
 
 	kfree(vma);
 }
@@ -114,16 +112,16 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 	struct msm_gem_vma *vma;
 	int ret;
 
+	drm_gpuvm_resv_assert_held(&vm->base);
+
 	vma = kzalloc(sizeof(*vma), GFP_KERNEL);
 	if (!vma)
 		return ERR_PTR(-ENOMEM);
 
 	if (vm->managed) {
-		spin_lock(&vm->mm_lock);
 		ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 						obj->size, PAGE_SIZE, 0,
 						range_start, range_end, 0);
-		spin_unlock(&vm->mm_lock);
 
 		if (ret)
 			goto err_free_vma;
@@ -137,9 +135,7 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
 	vma->mapped = false;
 
-	mutex_lock(&vm->vm_lock);
 	ret = drm_gpuva_insert(&vm->base, &vma->base);
-	mutex_unlock(&vm->vm_lock);
 	if (ret)
 		goto err_free_range;
 
@@ -149,18 +145,14 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
 		goto err_va_remove;
 	}
 
-	mutex_lock(&vm->vm_lock);
 	drm_gpuvm_bo_extobj_add(vm_bo);
 	drm_gpuva_link(&vma->base, vm_bo);
-	mutex_unlock(&vm->vm_lock);
 
 	GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo));
 
 	return vma;
 
 err_va_remove:
-	mutex_lock(&vm->vm_lock);
 	drm_gpuva_remove(&vma->base);
-	mutex_unlock(&vm->vm_lock);
 err_free_range:
 	if (vm->managed)
 		drm_mm_remove_node(&vma->node);
@@ -191,6 +183,11 @@ struct msm_gem_vm *
 msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 		u64 va_start, u64 va_size, bool managed)
 {
+	/*
+	 * We mostly want to use DRM_GPUVM_RESV_PROTECTED, except that
+	 * makes drm_gpuvm_bo_evict() a no-op for extobjs (ie. we lose
+	 * tracking that an extobj is evicted) :facepalm:
+	 */
 	enum drm_gpuvm_flags flags = 0;
 	struct msm_gem_vm *vm;
 	struct drm_gem_object *dummy_gem;
@@ -213,9 +210,6 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 		va_start, va_size, 0, 0, &msm_gpuvm_ops);
 	drm_gem_object_put(dummy_gem);
 
-	spin_lock_init(&vm->mm_lock);
-	mutex_init(&vm->vm_lock);
-
 	vm->mmu = mmu;
 	vm->managed = managed;
 
From patchwork Thu Jun 5 18:28:58 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894235
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter, Jessica Zhang, Arnd Bergmann, Jonathan Marek, Krzysztof Kozlowski, Eugene Lepshy, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 13/40] drm/msm: Use drm_gpuvm types more
Date: Thu, 5 Jun 2025 11:28:58 -0700
Message-ID: <20250605183111.163594-14-robin.clark@oss.qualcomm.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
MIME-Version: 1.0

From: Rob Clark

Most of the driver code doesn't need to reach into msm-specific fields,
so just use the drm_gpuvm/drm_gpuva types directly. This should
hopefully improve commonality with other drivers and make the code
easier to understand.
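With this change most call sites hold a plain struct drm_gpuvm pointer
and downcast with to_msm_vm() (a container_of() wrapper defined earlier
in the series) only where msm-specific state is needed. A minimal sketch
of the pattern, assuming only helpers this series defines; the teardown
function name here is hypothetical:

/* Downcast only at the point where ->mmu (msm-specific) is needed: */
static void example_vm_teardown(struct drm_gpuvm *vm)
{
	struct msm_mmu *mmu = to_msm_vm(vm)->mmu;

	mmu->funcs->detach(mmu);
	drm_gpuvm_put(vm);	/* drop the gpuvm reference */
}

This is essentially the shape of the mdp4/mdp5/dpu destroy paths touched
by this patch.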
Signed-off-by: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 6 +- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 2 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 11 +-- drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 2 +- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 21 ++--- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 4 +- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 6 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 11 +-- drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 11 +-- drivers/gpu/drm/msm/dsi/dsi_host.c | 6 +- drivers/gpu/drm/msm/msm_drv.h | 4 +- drivers/gpu/drm/msm/msm_fb.c | 4 +- drivers/gpu/drm/msm/msm_gem.c | 94 +++++++++++------------ drivers/gpu/drm/msm/msm_gem.h | 59 +++++++------- drivers/gpu/drm/msm/msm_gem_submit.c | 8 +- drivers/gpu/drm/msm/msm_gem_vma.c | 70 +++++++---------- drivers/gpu/drm/msm/msm_gpu.c | 21 ++--- drivers/gpu/drm/msm/msm_gpu.h | 10 +-- drivers/gpu/drm/msm/msm_kms.c | 6 +- drivers/gpu/drm/msm/msm_kms.h | 2 +- drivers/gpu/drm/msm/msm_submitqueue.c | 2 +- 23 files changed, 178 insertions(+), 191 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index 889480aa13ba..ec38db45d8a3 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu) uint32_t *ptr, len; int i, ret; - a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error); + a2xx_gpummu_params(to_msm_vm(gpu->vm)->mmu, &pt_base, &tran_error); DBG("%s", gpu->name); @@ -466,11 +466,11 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu) return state; } -static struct msm_gem_vm * +static struct drm_gpuvm * a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu); - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true); diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index 04138a06724b..ee927d8cc0dc 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -1786,7 +1786,8 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev) return ERR_PTR(ret); } - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler); + msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu, + a5xx_fault_handler); /* Set up the preemption specific bits and pieces for each ringbuffer */ a5xx_preempt_init(gpu); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index 77d9ff9632d1..28e6705c6da6 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -1259,6 +1259,8 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu) static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu) { + struct msm_mmu *mmu = to_msm_vm(gmu->vm)->mmu; + msm_gem_kernel_put(gmu->hfi.obj, gmu->vm); msm_gem_kernel_put(gmu->debug.obj, gmu->vm); msm_gem_kernel_put(gmu->icache.obj, gmu->vm); @@ -1266,8 +1268,8 @@ static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu) msm_gem_kernel_put(gmu->dummy.obj, gmu->vm); msm_gem_kernel_put(gmu->log.obj, gmu->vm); - gmu->vm->mmu->funcs->detach(gmu->vm->mmu); - msm_gem_vm_put(gmu->vm); + mmu->funcs->detach(mmu); + drm_gpuvm_put(gmu->vm); } static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo, diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h 
b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h index cceda7d9c33a..5da36226b93d 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h @@ -62,7 +62,7 @@ struct a6xx_gmu { /* For serializing communication with the GMU: */ struct mutex lock; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; void __iomem *mmio; void __iomem *rscc; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 26d0a863f38c..c43a443661e4 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, if (ctx->seqno == ring->cur_ctx_seqno) return; - if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid)) + if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid)) return; if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) { @@ -2243,7 +2243,7 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp, mutex_unlock(&a6xx_gpu->gmu.lock); } -static struct msm_gem_vm * +static struct drm_gpuvm * a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); @@ -2261,12 +2261,12 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) return adreno_iommu_create_vm(gpu, pdev, quirks); } -static struct msm_gem_vm * +static struct drm_gpuvm * a6xx_create_private_vm(struct msm_gpu *gpu) { struct msm_mmu *mmu; - mmu = msm_iommu_pagetable_create(gpu->vm->mmu); + mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu); if (IS_ERR(mmu)) return ERR_CAST(mmu); @@ -2546,7 +2546,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0x1fffffffff000ull; - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler); + msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu, + a6xx_fault_handler); a6xx_calc_ubwc_config(adreno_gpu); /* Set up the preemption specific bits and pieces for each ringbuffer */ diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c index b14a7c630bd0..7fd560a2c1ce 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c @@ -376,7 +376,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu, struct a7xx_cp_smmu_info *smmu_info_ptr = ptr; - msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid); + msm_iommu_pagetable_params(to_msm_vm(gpu->vm)->mmu, &ttbr, &asid); smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC; smmu_info_ptr->ttbr0 = ttbr; diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 287b032fefe4..f6624a246694 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid) return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid); } -struct msm_gem_vm * +struct drm_gpuvm * adreno_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { return adreno_iommu_create_vm(gpu, pdev, 0); } -struct msm_gem_vm * +struct drm_gpuvm * adreno_iommu_create_vm(struct msm_gpu *gpu, struct platform_device *pdev, unsigned long quirks) { struct iommu_domain_geometry *geometry; struct msm_mmu *mmu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; u64 start, size; mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks); @@ -274,9 +274,10 @@ void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu) if (!adreno_gpu->stall_enabled && 
ktime_after(ktime_get(), adreno_gpu->stall_reenable_time) && !READ_ONCE(gpu->crashstate)) { + struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu; adreno_gpu->stall_enabled = true; - gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, true); + mmu->funcs->set_stall(mmu, true); } spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, flags); } @@ -290,6 +291,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, struct adreno_smmu_fault_info *info, const char *block, u32 scratch[4]) { + struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu; struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); const char *type = "UNKNOWN"; bool do_devcoredump = info && (info->fsr & ARM_SMMU_FSR_SS) && @@ -302,9 +304,10 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, */ spin_lock_irqsave(&adreno_gpu->fault_stall_lock, irq_flags); if (adreno_gpu->stall_enabled) { + adreno_gpu->stall_enabled = false; - gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, false); + mmu->funcs->set_stall(mmu, false); } adreno_gpu->stall_reenable_time = ktime_add_ms(ktime_get(), 500); spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, irq_flags); @@ -314,7 +317,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, * it now. */ if (!do_devcoredump) { - gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); + mmu->funcs->resume_translation(mmu); } /* @@ -409,7 +412,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, return 0; case MSM_PARAM_FAULTS: if (ctx->vm) - *value = gpu->global_faults + ctx->vm->faults; + *value = gpu->global_faults + to_msm_vm(ctx->vm)->faults; else *value = gpu->global_faults; return 0; @@ -419,12 +422,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, case MSM_PARAM_VA_START: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->base.mm_start; + *value = ctx->vm->mm_start; return 0; case MSM_PARAM_VA_SIZE: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->base.mm_range; + *value = ctx->vm->mm_range; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value = adreno_gpu->ubwc_config.highest_bank_bit; diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index bbd7e664286e..e9a63fbd131b 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -644,11 +644,11 @@ void adreno_show_object(struct drm_printer *p, void **ptr, int len, * Common helper function to initialize the default address space for arm-smmu * attached targets */ -struct msm_gem_vm * +struct drm_gpuvm * adreno_create_vm(struct msm_gpu *gpu, struct platform_device *pdev); -struct msm_gem_vm * +struct drm_gpuvm * adreno_iommu_create_vm(struct msm_gpu *gpu, struct platform_device *pdev, unsigned long quirks); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index bb5db6da636a..a9cd215cfd33 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -1098,17 +1098,17 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms) if (!dpu_kms->base.vm) return; - mmu = dpu_kms->base.vm->mmu; + mmu = to_msm_vm(dpu_kms->base.vm)->mmu; mmu->funcs->detach(mmu); - msm_gem_vm_put(dpu_kms->base.vm); + drm_gpuvm_put(dpu_kms->base.vm); dpu_kms->base.vm = NULL; } static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms) { - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; vm = msm_kms_init_vm(dpu_kms->dev); 
if (IS_ERR(vm)) diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c index d5b5628bee24..9326ed3aab04 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -120,15 +120,16 @@ static void mdp4_destroy(struct msm_kms *kms) { struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms)); struct device *dev = mdp4_kms->dev->dev; - struct msm_gem_vm *vm = kms->vm; if (mdp4_kms->blank_cursor_iova) msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm); drm_gem_object_put(mdp4_kms->blank_cursor_bo); - if (vm) { - vm->mmu->funcs->detach(vm->mmu); - msm_gem_vm_put(vm); + if (kms->vm) { + struct msm_mmu *mmu = to_msm_vm(kms->vm)->mmu; + + mmu->funcs->detach(mmu); + drm_gpuvm_put(kms->vm); } if (mdp4_kms->rpm_enabled) @@ -380,7 +381,7 @@ static int mdp4_kms_init(struct drm_device *dev) struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(priv->kms)); struct msm_kms *kms = NULL; struct msm_mmu *mmu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; int ret; u32 major, minor; unsigned long max_clk; diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c index 9dca0385a42d..b6e6bd1f95ee 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c @@ -198,11 +198,12 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms); static void mdp5_kms_destroy(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); - struct msm_gem_vm *vm = kms->vm; - if (vm) { - vm->mmu->funcs->detach(vm->mmu); - msm_gem_vm_put(vm); + if (kms->vm) { + struct msm_mmu *mmu = to_msm_vm(kms->vm)->mmu; + + mmu->funcs->detach(mmu); + drm_gpuvm_put(kms->vm); } mdp_kms_destroy(&mdp5_kms->base); @@ -500,7 +501,7 @@ static int mdp5_kms_init(struct drm_device *dev) struct mdp5_kms *mdp5_kms; struct mdp5_cfg *config; struct msm_kms *kms = priv->kms; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; int i, ret; ret = mdp5_init(to_platform_device(dev->dev), dev); diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c index 16335ebd21e4..2d1699b7dc93 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -143,7 +143,7 @@ struct msm_dsi_host { /* DSI 6G TX buffer*/ struct drm_gem_object *tx_gem_obj; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; /* DSI v2 TX buffer */ void *tx_buf; @@ -1146,7 +1146,7 @@ int dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_host, int size) uint64_t iova; u8 *data; - msm_host->vm = msm_gem_vm_get(priv->kms->vm); + msm_host->vm = drm_gpuvm_get(priv->kms->vm); data = msm_gem_kernel_new(dev, size, MSM_BO_WC, msm_host->vm, @@ -1194,7 +1194,7 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host) if (msm_host->tx_gem_obj) { msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm); - msm_gem_vm_put(msm_host->vm); + drm_gpuvm_put(msm_host->vm); msm_host->tx_gem_obj = NULL; msm_host->vm = NULL; } diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index e4c57deaa1f9..136dd928135a 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -48,8 +48,6 @@ struct msm_rd_state; struct msm_perf_state; struct msm_gem_submit; struct msm_fence_context; -struct msm_gem_vm; -struct msm_gem_vma; struct msm_disp_state; #define MAX_CRTCS 8 @@ -230,7 +228,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc); int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu); void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu 
*mmu); -struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev); +struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev); bool msm_use_mmu(struct drm_device *dev); int msm_ioctl_gem_submit(struct drm_device *dev, void *data, diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c index 3b17d83f6673..8ae2f326ec54 100644 --- a/drivers/gpu/drm/msm/msm_fb.c +++ b/drivers/gpu/drm/msm/msm_fb.c @@ -78,7 +78,7 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m) int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb) { struct msm_drm_private *priv = fb->dev->dev_private; - struct msm_gem_vm *vm = priv->kms->vm; + struct drm_gpuvm *vm = priv->kms->vm; struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); int ret, i, n = fb->format->num_planes; @@ -102,7 +102,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, bool needs_dirtyfb) void msm_framebuffer_cleanup(struct drm_framebuffer *fb, bool needed_dirtyfb) { struct msm_drm_private *priv = fb->dev->dev_private; - struct msm_gem_vm *vm = priv->kms->vm; + struct drm_gpuvm *vm = priv->kms->vm; struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); int i, n = fb->format->num_planes; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 45a542173cca..87949d0e87bf 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -49,20 +49,20 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file) static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close); -static void detach_vm(struct drm_gem_object *obj, struct msm_gem_vm *vm) +static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm) { msm_gem_assert_locked(obj); - drm_gpuvm_resv_assert_held(&vm->base); + drm_gpuvm_resv_assert_held(vm); - struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_find(&vm->base, obj); + struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_find(vm, obj); if (vm_bo) { struct drm_gpuva *vma; drm_gpuvm_bo_for_each_va (vma, vm_bo) { - if (vma->vm != &vm->base) + if (vma->vm != vm) continue; - msm_gem_vma_purge(to_msm_vma(vma)); - msm_gem_vma_close(to_msm_vma(vma)); + msm_gem_vma_purge(vma); + msm_gem_vma_close(vma); break; } @@ -93,7 +93,7 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) msecs_to_jiffies(1000)); msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm); - put_iova_spaces(obj, &ctx->vm->base, true); + put_iova_spaces(obj, ctx->vm, true); detach_vm(obj, ctx->vm); drm_exec_fini(&exec); /* drop locks */ } @@ -390,8 +390,8 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) return offset; } -static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +static struct drm_gpuva *lookup_vma(struct drm_gem_object *obj, + struct drm_gpuvm *vm) { struct drm_gpuvm_bo *vm_bo; @@ -401,13 +401,13 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, struct drm_gpuva *vma; drm_gpuvm_bo_for_each_va (vma, vm_bo) { - if (vma->vm == &vm->base) { + if (vma->vm == vm) { /* lookup_vma() should only be used in paths * with at most one vma per vm */ GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva)); - return to_msm_vma(vma); + return vma; } } } @@ -437,22 +437,20 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close) drm_gpuvm_bo_get(vm_bo); drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { - struct msm_gem_vma *msm_vma = to_msm_vma(vma); - - msm_gem_vma_purge(msm_vma); + msm_gem_vma_purge(vma); if (close) - 
msm_gem_vma_close(msm_vma); + msm_gem_vma_close(vma); } drm_gpuvm_bo_put(vm_bo); } } -static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm, - u64 range_start, u64 range_end) +static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj, + struct drm_gpuvm *vm, u64 range_start, + u64 range_end) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; msm_gem_assert_locked(obj); @@ -461,14 +459,14 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, if (!vma) { vma = msm_gem_vma_new(vm, obj, range_start, range_end); } else { - GEM_WARN_ON(vma->base.va.addr < range_start); - GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end); + GEM_WARN_ON(vma->va.addr < range_start); + GEM_WARN_ON((vma->va.addr + obj->size) > range_end); } return vma; } -int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma) +int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct page **pages; @@ -525,17 +523,17 @@ void msm_gem_unpin_active(struct drm_gem_object *obj) update_lru_active(obj); } -struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj, + struct drm_gpuvm *vm) { return get_vma_locked(obj, vm, 0, U64_MAX); } static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova, - u64 range_start, u64 range_end) + struct drm_gpuvm *vm, uint64_t *iova, + u64 range_start, u64 range_end) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; int ret; msm_gem_assert_locked(obj); @@ -546,7 +544,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, ret = msm_gem_pin_vma_locked(obj, vma); if (!ret) { - *iova = vma->base.va.addr; + *iova = vma->va.addr; pin_obj_locked(obj); } @@ -558,8 +556,8 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, * limits iova to specified range (in pages) */ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova, - u64 range_start, u64 range_end) + struct drm_gpuvm *vm, uint64_t *iova, + u64 range_start, u64 range_end) { struct drm_exec exec; int ret; @@ -572,8 +570,8 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, } /* get iova and pin it. Should have a matching put */ -int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova) +int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova) { return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX); } @@ -582,10 +580,10 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, * Get an iova but don't pin it. 
Doesn't need a put because iovas are currently * valid for the life of the object */ -int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova) +int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; struct drm_exec exec; int ret = 0; @@ -594,7 +592,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { - *iova = vma->base.va.addr; + *iova = vma->va.addr; } drm_exec_fini(&exec); /* drop locks */ @@ -602,9 +600,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj, } static int clear_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm) + struct drm_gpuvm *vm) { - struct msm_gem_vma *vma = lookup_vma(obj, vm); + struct drm_gpuva *vma = lookup_vma(obj, vm); if (!vma) return 0; @@ -623,7 +621,7 @@ static int clear_iova(struct drm_gem_object *obj, * Setting an iova of zero will clear the vma. */ int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t iova) + struct drm_gpuvm *vm, uint64_t iova) { struct drm_exec exec; int ret = 0; @@ -632,11 +630,11 @@ int msm_gem_set_iova(struct drm_gem_object *obj, if (!iova) { ret = clear_iova(obj, vm); } else { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; vma = get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret = PTR_ERR(vma); - } else if (GEM_WARN_ON(vma->base.va.addr != iova)) { + } else if (GEM_WARN_ON(vma->va.addr != iova)) { clear_iova(obj, vm); ret = -EBUSY; } @@ -651,10 +649,9 @@ int msm_gem_set_iova(struct drm_gem_object *obj, * purged until something else (shrinker, mm_notifier, destroy, etc) decides * to get rid of it */ -void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; struct drm_exec exec; msm_gem_lock_vm_and_obj(&exec, obj, vm); @@ -1284,9 +1281,9 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, return ERR_PTR(ret); } -void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_vm *vm, - struct drm_gem_object **bo, uint64_t *iova) +void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t flags, + struct drm_gpuvm *vm, struct drm_gem_object **bo, + uint64_t *iova) { void *vaddr; struct drm_gem_object *obj = msm_gem_new(dev, size, flags); @@ -1319,8 +1316,7 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, } -void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_vm *vm) +void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm) { if (IS_ERR_OR_NULL(bo)) return; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 31933ed8fb2c..557b6804181f 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -79,12 +79,7 @@ struct msm_gem_vm { }; #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base) -struct msm_gem_vm * -msm_gem_vm_get(struct msm_gem_vm *vm); - -void msm_gem_vm_put(struct msm_gem_vm *vm); - -struct msm_gem_vm * +struct drm_gpuvm * msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed); @@ -113,12 +108,12 @@ struct msm_gem_vma { }; #define to_msm_vma(x) container_of(x, struct msm_gem_vma, base) -struct msm_gem_vma * -msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, +struct drm_gpuva * +msm_gem_vma_new(struct 
drm_gpuvm *vm, struct drm_gem_object *obj, u64 range_start, u64 range_end); -void msm_gem_vma_purge(struct msm_gem_vma *vma); -int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size); -void msm_gem_vma_close(struct msm_gem_vma *vma); +void msm_gem_vma_purge(struct drm_gpuva *vma); +int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size); +void msm_gem_vma_close(struct drm_gpuva *vma); struct msm_gem_object { struct drm_gem_object base; @@ -163,22 +158,21 @@ struct msm_gem_object { #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); -int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma); +int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma); void msm_gem_unpin_locked(struct drm_gem_object *obj); void msm_gem_unpin_active(struct drm_gem_object *obj); -struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm); -int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova); -int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t iova); +struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj, + struct drm_gpuvm *vm); +int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova); +int msm_gem_set_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t iova); int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova, - u64 range_start, u64 range_end); -int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova); -void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm); + struct drm_gpuvm *vm, uint64_t *iova, + u64 range_start, u64 range_end); +int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova); +void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm); void msm_gem_pin_obj_locked(struct drm_gem_object *obj); struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj); void msm_gem_unpin_pages_locked(struct drm_gem_object *obj); @@ -199,11 +193,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, uint32_t size, uint32_t flags, uint32_t *handle, char *name); struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32_t flags); -void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_vm *vm, - struct drm_gem_object **bo, uint64_t *iova); -void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_vm *vm); +void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t flags, + struct drm_gpuvm *vm, struct drm_gem_object **bo, + uint64_t *iova); +void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm); struct drm_gem_object *msm_gem_import(struct drm_device *dev, struct dma_buf *dmabuf, struct sg_table *sgt); __printf(2, 3) @@ -254,14 +247,14 @@ msm_gem_unlock(struct drm_gem_object *obj) static inline int msm_gem_lock_vm_and_obj(struct drm_exec *exec, struct drm_gem_object *obj, - struct msm_gem_vm *vm) + struct drm_gpuvm *vm) { int ret = 0; drm_exec_init(exec, 0, 2); drm_exec_until_all_locked (exec) { - ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(&vm->base)); - if (!ret && (obj->resv != drm_gpuvm_resv(&vm->base))) + ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(vm)); + if (!ret && 
(obj->resv != drm_gpuvm_resv(vm))) ret = drm_exec_lock_obj(exec, obj); drm_exec_retry_on_contention(exec); if (GEM_WARN_ON(ret)) @@ -328,7 +321,7 @@ struct msm_gem_submit { struct kref ref; struct drm_device *dev; struct msm_gpu *gpu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct list_head node; /* node in ring submit list */ struct drm_exec exec; uint32_t seqno; /* Sequence number of the submit on the ring */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 01e3ae71ffe8..f51f2c00e6e2 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -264,7 +264,7 @@ static int submit_lock_objects(struct msm_gem_submit *submit) drm_exec_until_all_locked (&submit->exec) { ret = drm_exec_lock_obj(&submit->exec, - drm_gpuvm_resv_obj(&submit->vm->base)); + drm_gpuvm_resv_obj(submit->vm)); drm_exec_retry_on_contention(&submit->exec); if (ret) goto error; @@ -315,7 +315,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) for (i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; - struct msm_gem_vma *vma; + struct drm_gpuva *vma; /* if locking succeeded, pin bo: */ vma = msm_gem_get_vma_locked(obj, submit->vm); @@ -328,8 +328,8 @@ static int submit_pin_objects(struct msm_gem_submit *submit) if (ret) break; - submit->bos[i].vm_bo = drm_gpuvm_bo_get(vma->base.vm_bo); - submit->bos[i].iova = vma->base.va.addr; + submit->bos[i].vm_bo = drm_gpuvm_bo_get(vma->vm_bo); + submit->bos[i].iova = vma->va.addr; } /* diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index ccb20897a2b0..df8eb910ca31 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -20,52 +20,38 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) kfree(vm); } - -void msm_gem_vm_put(struct msm_gem_vm *vm) -{ - if (vm) - drm_gpuvm_put(&vm->base); -} - -struct msm_gem_vm * -msm_gem_vm_get(struct msm_gem_vm *vm) -{ - if (!IS_ERR_OR_NULL(vm)) - drm_gpuvm_get(&vm->base); - - return vm; -} - /* Actually unmap memory for the vma */ -void msm_gem_vma_purge(struct msm_gem_vma *vma) +void msm_gem_vma_purge(struct drm_gpuva *vma) { - struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); - unsigned size = vma->base.va.range; + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + struct msm_gem_vm *vm = to_msm_vm(vma->vm); + unsigned size = vma->va.range; /* Don't do anything if the memory isn't mapped */ - if (!vma->mapped) + if (!msm_vma->mapped) return; - vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size); + vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size); - vma->mapped = false; + msm_vma->mapped = false; } /* Map and pin vma: */ int -msm_gem_vma_map(struct msm_gem_vma *vma, int prot, +msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + struct msm_gem_vm *vm = to_msm_vm(vma->vm); int ret; - if (GEM_WARN_ON(!vma->base.va.addr)) + if (GEM_WARN_ON(!vma->va.addr)) return -EINVAL; - if (vma->mapped) + if (msm_vma->mapped) return 0; - vma->mapped = true; + msm_vma->mapped = true; /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold @@ -76,38 +62,40 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. 
*/ - ret = vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot); + ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot); if (ret) { - vma->mapped = false; + msm_vma->mapped = false; } return ret; } /* Close an iova. Warn if it is still in use */ -void msm_gem_vma_close(struct msm_gem_vma *vma) +void msm_gem_vma_close(struct drm_gpuva *vma) { - struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); + struct msm_gem_vm *vm = to_msm_vm(vma->vm); + struct msm_gem_vma *msm_vma = to_msm_vma(vma); - GEM_WARN_ON(vma->mapped); + GEM_WARN_ON(msm_vma->mapped); drm_gpuvm_resv_assert_held(&vm->base); - if (vma->base.va.addr) - drm_mm_remove_node(&vma->node); + if (vma->va.addr && vm->managed) + drm_mm_remove_node(&msm_vma->node); - drm_gpuva_remove(&vma->base); - drm_gpuva_unlink(&vma->base); + drm_gpuva_remove(vma); + drm_gpuva_unlink(vma); kfree(vma); } /* Create a new vma and allocate an iova for it */ -struct msm_gem_vma * -msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, +struct drm_gpuva * +msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, u64 range_start, u64 range_end) { + struct msm_gem_vm *vm = to_msm_vm(gpuvm); struct drm_gpuvm_bo *vm_bo; struct msm_gem_vma *vma; int ret; @@ -149,7 +137,7 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, drm_gpuva_link(&vma->base, vm_bo); GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo)); - return vma; + return &vma->base; err_va_remove: drm_gpuva_remove(&vma->base); @@ -179,7 +167,7 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops = { * handles virtual address allocation, and both async and sync operations * are supported. */ -struct msm_gem_vm * +struct drm_gpuvm * msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed) { @@ -215,7 +203,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, drm_mm_init(&vm->mm, va_start, va_size); - return vm; + return &vm->base; err_free_vm: kfree(vm); diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index b30800f80120..82e33aa1ccd0 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -283,7 +283,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, if (state->fault_info.ttbr0) { struct msm_gpu_fault_info *info = &state->fault_info; - struct msm_mmu *mmu = submit->vm->mmu; + struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu; msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0, &info->asid); @@ -387,7 +387,7 @@ static void recover_worker(struct kthread_work *work) /* Increment the fault counts */ submit->queue->faults++; if (submit->vm) - submit->vm->faults++; + to_msm_vm(submit->vm)->faults++; get_comm_cmdline(submit, &comm, &cmd); @@ -463,6 +463,7 @@ static void fault_worker(struct kthread_work *work) { struct msm_gpu *gpu = container_of(work, struct msm_gpu, fault_work); struct msm_gem_submit *submit; + struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu; struct msm_ringbuffer *cur_ring = gpu->funcs->active_ring(gpu); char *comm = NULL, *cmd = NULL; @@ -492,7 +493,7 @@ static void fault_worker(struct kthread_work *work) resume_smmu: memset(&gpu->fault_info, 0, sizeof(gpu->fault_info)); - gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); + mmu->funcs->resume_translation(mmu); mutex_unlock(&gpu->lock); } @@ -829,10 +830,11 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu) } /* Return a new address space for a msm_drm_private instance */ -struct msm_gem_vm * +struct drm_gpuvm 
* msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) { - struct msm_gem_vm *vm = NULL; + struct drm_gpuvm *vm = NULL; + if (!gpu) return NULL; @@ -843,11 +845,11 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) if (gpu->funcs->create_private_vm) { vm = gpu->funcs->create_private_vm(gpu); if (!IS_ERR(vm)) - vm->pid = get_pid(task_pid(task)); + to_msm_vm(vm)->pid = get_pid(task_pid(task)); } if (IS_ERR_OR_NULL(vm)) - vm = msm_gem_vm_get(gpu->vm); + vm = drm_gpuvm_get(gpu->vm); return vm; } @@ -1016,8 +1018,9 @@ void msm_gpu_cleanup(struct msm_gpu *gpu) msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); if (!IS_ERR_OR_NULL(gpu->vm)) { - gpu->vm->mmu->funcs->detach(gpu->vm->mmu); - msm_gem_vm_put(gpu->vm); + struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu; + mmu->funcs->detach(mmu); + drm_gpuvm_put(gpu->vm); } if (gpu->worker) { diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 1f26ba00f773..d8425e6d7f5a 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -78,8 +78,8 @@ struct msm_gpu_funcs { /* note: gpu_set_freq() can assume that we have been pm_resumed */ void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp, bool suspended); - struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev); - struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu); + struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev); + struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu); uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); /** @@ -234,7 +234,7 @@ struct msm_gpu { void __iomem *mmio; int irq; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; /* Power Control: */ struct regulator *gpu_reg, *gpu_cx; @@ -363,7 +363,7 @@ struct msm_context { int queueid; /** @vm: the per-process GPU address-space */ - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; /** @kref: the reference count */ struct kref ref; @@ -673,7 +673,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config); -struct msm_gem_vm * +struct drm_gpuvm * msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task); void msm_gpu_cleanup(struct msm_gpu *gpu); diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 6458bd82a0cd..e82b8569a468 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -176,9 +176,9 @@ static int msm_kms_fault_handler(void *arg, unsigned long iova, int flags, void return -ENOSYS; } -struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) +struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev) { - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct msm_mmu *mmu; struct device *mdp_dev = dev->dev; struct device *mdss_dev = mdp_dev->parent; @@ -212,7 +212,7 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) return vm; } - msm_mmu_set_fault_handler(vm->mmu, kms, msm_kms_fault_handler); + msm_mmu_set_fault_handler(to_msm_vm(vm)->mmu, kms, msm_kms_fault_handler); return vm; } diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h index f45996a03e15..7cdb2eb67700 100644 --- a/drivers/gpu/drm/msm/msm_kms.h +++ b/drivers/gpu/drm/msm/msm_kms.h @@ -139,7 +139,7 @@ struct msm_kms { atomic_t fault_snapshot_capture; /* mapper-id used to request GEM buffer mapped for scanout: */ - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; /* 
disp snapshot support */ struct kthread_worker *dump_worker; diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c index 6298233c3568..8ced49c7557b 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref) kfree(ctx->entities[i]); } - msm_gem_vm_put(ctx->vm); + drm_gpuvm_put(ctx->vm); kfree(ctx->comm); kfree(ctx->cmdline); kfree(ctx);

From patchwork Thu Jun 5 18:28:59 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894506
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 14/40] drm/msm: Split out helper to get iommu prot flags
Date: Thu, 5 Jun 2025 11:28:59 -0700
Message-ID: <20250605183111.163594-15-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

We'll re-use this in the vm_bind path.

Signed-off-by: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 12 ++++++++++-- drivers/gpu/drm/msm/msm_gem.h | 1 + 2 files changed, 11 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 87949d0e87bf..09c40a7e04ac 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -466,10 +466,9 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj, return vma; } -int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) +int msm_gem_prot(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct page **pages; int prot = IOMMU_READ; if (!(msm_obj->flags & MSM_BO_GPU_READONLY)) @@ -485,6 +484,15 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) else if (prot == 2) prot |= IOMMU_USE_LLC_NWA; + return prot; +} + +int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + struct page **pages; + int prot = msm_gem_prot(obj); + msm_gem_assert_locked(obj); pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 557b6804181f..278ec34c31fc 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -158,6 +158,7 @@ struct msm_gem_object { #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); +int msm_gem_prot(struct drm_gem_object *obj); int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma); void msm_gem_unpin_locked(struct drm_gem_object *obj); void msm_gem_unpin_active(struct drm_gem_object *obj);

From patchwork Thu Jun 5 18:29:00 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894505
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Rob Clark , Sean Paul , Konrad Dybcio , Abhinav Kumar , Dmitry Baryshkov , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 15/40] drm/msm: Add mmu support for non-zero offset
Date: Thu, 5 Jun 2025 11:29:00 -0700
Message-ID: <20250605183111.163594-16-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

Non-zero offsets only need to be supported by the io-pgtable MMU; the other cases are either used only for kernel-managed mappings (where the offset is always zero) or are for devices which do not support sparse bindings.
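To make the new off/len handling easier to follow (see the hunk this patch adds to msm_iommu_pagetable_map() below), here is a minimal standalone sketch of the same skip-then-clamp walk. This is illustrative only: struct seg and map_one() are stand-ins for struct scatterlist and ops->map_pages(), not names from the patch.

#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for one scatterlist segment. */
struct seg {
	uint64_t phys;
	size_t len;
};

/*
 * Skip the first 'off' bytes of the segment list, then map at most
 * 'len' bytes, mirroring the off/len logic added to
 * msm_iommu_pagetable_map().  map_one() plays the role of
 * ops->map_pages().
 */
static int map_range(const struct seg *segs, unsigned int nsegs,
		     size_t off, size_t len,
		     int (*map_one)(uint64_t phys, size_t size))
{
	for (unsigned int i = 0; i < nsegs && len; i++) {
		uint64_t phys = segs[i].phys;
		size_t size = segs[i].len;

		/* Segment lies entirely before the requested offset: */
		if (size <= off) {
			off -= size;
			continue;
		}

		/* Start partway into this segment, clamp to remaining len: */
		phys += off;
		size -= off;
		if (size > len)
			size = len;
		off = 0;

		int ret = map_one(phys, size);
		if (ret)
			return ret;

		len -= size;
	}

	/* Ran out of segments before covering the requested range: */
	return len ? -EINVAL : 0;
}
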
Signed-off-by: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a2xx_gpummu.c | 5 ++++- drivers/gpu/drm/msm/msm_gem.c | 4 ++-- drivers/gpu/drm/msm/msm_gem.h | 4 ++-- drivers/gpu/drm/msm/msm_gem_vma.c | 13 +++++++------ drivers/gpu/drm/msm/msm_iommu.c | 22 ++++++++++++++++++++-- drivers/gpu/drm/msm/msm_mmu.h | 2 +- 6 files changed, 36 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c index 39641551eeb6..6124336af2ec 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c @@ -29,13 +29,16 @@ static void a2xx_gpummu_detach(struct msm_mmu *mmu) } static int a2xx_gpummu_map(struct msm_mmu *mmu, uint64_t iova, - struct sg_table *sgt, size_t len, int prot) + struct sg_table *sgt, size_t off, size_t len, + int prot) { struct a2xx_gpummu *gpummu = to_a2xx_gpummu(mmu); unsigned idx = (iova - GPUMMU_VA_START) / GPUMMU_PAGE_SIZE; struct sg_dma_page_iter dma_iter; unsigned prot_bits = 0; + WARN_ON(off != 0); + if (prot & IOMMU_WRITE) prot_bits |= 1; if (prot & IOMMU_READ) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 09c40a7e04ac..194a15802a5f 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -457,7 +457,7 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj, vma = lookup_vma(obj, vm); if (!vma) { - vma = msm_gem_vma_new(vm, obj, range_start, range_end); + vma = msm_gem_vma_new(vm, obj, 0, range_start, range_end); } else { GEM_WARN_ON(vma->va.addr < range_start); GEM_WARN_ON((vma->va.addr + obj->size) > range_end); @@ -499,7 +499,7 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) if (IS_ERR(pages)) return PTR_ERR(pages); - return msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size); + return msm_gem_vma_map(vma, prot, msm_obj->sgt); } void msm_gem_unpin_locked(struct drm_gem_object *obj) diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 278ec34c31fc..2dd9a7f585f4 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -110,9 +110,9 @@ struct msm_gem_vma { struct drm_gpuva * msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj, - u64 range_start, u64 range_end); + u64 offset, u64 range_start, u64 range_end); void msm_gem_vma_purge(struct drm_gpuva *vma); -int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size); +int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt); void msm_gem_vma_close(struct drm_gpuva *vma); struct msm_gem_object { diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index df8eb910ca31..ef0efd87e4a6 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -38,8 +38,7 @@ void msm_gem_vma_purge(struct drm_gpuva *vma) /* Map and pin vma: */ int -msm_gem_vma_map(struct drm_gpuva *vma, int prot, - struct sg_table *sgt, int size) +msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) { struct msm_gem_vma *msm_vma = to_msm_vma(vma); struct msm_gem_vm *vm = to_msm_vm(vma->vm); @@ -62,8 +61,9 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. 
*/ - ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot); - + ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, + vma->gem.offset, vma->va.range, + prot); if (ret) { msm_vma->mapped = false; } @@ -93,7 +93,7 @@ void msm_gem_vma_close(struct drm_gpuva *vma) /* Create a new vma and allocate an iova for it */ struct drm_gpuva * msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, - u64 range_start, u64 range_end) + u64 offset, u64 range_start, u64 range_end) { struct msm_gem_vm *vm = to_msm_vm(gpuvm); struct drm_gpuvm_bo *vm_bo; @@ -107,6 +107,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, return ERR_PTR(-ENOMEM); if (vm->managed) { + BUG_ON(offset != 0); ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, obj->size, PAGE_SIZE, 0, range_start, range_end, 0); @@ -120,7 +121,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, GEM_WARN_ON((range_end - range_start) > obj->size); - drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0); + drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset); vma->mapped = false; ret = drm_gpuva_insert(&vm->base, &vma->base); diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index e70088a91283..2fd48e66bc98 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -113,7 +113,8 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova, } static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, - struct sg_table *sgt, size_t len, int prot) + struct sg_table *sgt, size_t off, size_t len, + int prot) { struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); struct io_pgtable_ops *ops = pagetable->pgtbl_ops; @@ -125,6 +126,19 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, size_t size = sg->length; phys_addr_t phys = sg_phys(sg); + if (!len) + break; + + if (size <= off) { + off -= size; + continue; + } + + phys += off; + size -= off; + size = min_t(size_t, size, len); + off = 0; + while (size) { size_t pgsize, count, mapped = 0; int ret; @@ -140,6 +154,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, phys += mapped; addr += mapped; size -= mapped; + len -= mapped; if (ret) { msm_iommu_pagetable_unmap(mmu, iova, addr - iova); @@ -400,11 +415,14 @@ static void msm_iommu_detach(struct msm_mmu *mmu) } static int msm_iommu_map(struct msm_mmu *mmu, uint64_t iova, - struct sg_table *sgt, size_t len, int prot) + struct sg_table *sgt, size_t off, size_t len, + int prot) { struct msm_iommu *iommu = to_msm_iommu(mmu); size_t ret; + WARN_ON(off != 0); + /* The arm-smmu driver expects the addresses to be sign extended */ if (iova & BIT_ULL(48)) iova |= GENMASK_ULL(63, 49); diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h index c33247e459d6..c874852b7331 100644 --- a/drivers/gpu/drm/msm/msm_mmu.h +++ b/drivers/gpu/drm/msm/msm_mmu.h @@ -12,7 +12,7 @@ struct msm_mmu_funcs { void (*detach)(struct msm_mmu *mmu); int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt, - size_t len, int prot); + size_t off, size_t len, int prot); int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len); void (*destroy)(struct msm_mmu *mmu); void (*resume_translation)(struct msm_mmu *mmu); From patchwork Thu Jun 5 18:29:01 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 894234 Received: from mx0a-0031df01.pphosted.com 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Rob Clark , Sean Paul , Konrad Dybcio , Abhinav Kumar , Dmitry Baryshkov , Marijn Suijten , David Airlie , Simona Vetter , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 16/40] drm/msm: Add PRR support
Date: Thu, 5 Jun 2025 11:29:01 -0700
Message-ID: <20250605183111.163594-17-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

Add PRR (Partially Resident Region) support: a PRR is a bypass address which makes GPU writes go to /dev/null and reads return zero. This is used to implement Vulkan sparse residency. To support PRR/NULL mappings, we allocate a page to reserve a physical address which we know will not be used as part of a GEM object, and configure the SMMU to use this address for PRR/NULL mappings.

Signed-off-by: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 10 ++++ drivers/gpu/drm/msm/msm_iommu.c | 62 ++++++++++++++++++++++++- include/uapi/drm/msm_drm.h | 2 + 3 files changed, 73 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index f6624a246694..e24f627daf37 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -361,6 +361,13 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, return 0; } +static bool +adreno_smmu_has_prr(struct msm_gpu *gpu) +{ + struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(&gpu->pdev->dev); + return adreno_smmu && adreno_smmu->set_prr_addr; +} + int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, uint32_t param, uint64_t *value, uint32_t *len) { @@ -444,6 +451,9 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, case MSM_PARAM_UCHE_TRAP_BASE: *value = adreno_gpu->uche_trap_base; return 0; + case MSM_PARAM_HAS_PRR: + *value = adreno_smmu_has_prr(gpu); + return 0; default: return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param); } diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index 2fd48e66bc98..756bd55ee94f 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -13,6 +13,7 @@ struct msm_iommu { struct msm_mmu base; struct iommu_domain *domain; atomic_t pagetables; + struct page *prr_page; }; #define to_msm_iommu(x) container_of(x, struct msm_iommu, base) @@ -112,6 +113,36 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova, return (size == 0) ? 0 : -EINVAL; } +static int msm_iommu_pagetable_map_prr(struct msm_mmu *mmu, u64 iova, size_t len, int prot) +{ + struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); + struct io_pgtable_ops *ops = pagetable->pgtbl_ops; + struct msm_iommu *iommu = to_msm_iommu(pagetable->parent); + phys_addr_t phys = page_to_phys(iommu->prr_page); + u64 addr = iova; + + while (len) { + size_t mapped = 0; + size_t size = PAGE_SIZE; + int ret; + + ret = ops->map_pages(ops, addr, phys, size, 1, prot, GFP_KERNEL, &mapped); + + /* map_pages could fail after mapping some of the pages, + * so update the counters before error handling.
+ */ + addr += mapped; + len -= mapped; + + if (ret) { + msm_iommu_pagetable_unmap(mmu, iova, addr - iova); + return -EINVAL; + } + } + + return 0; +} + static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, struct sg_table *sgt, size_t off, size_t len, int prot) @@ -122,6 +153,9 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, u64 addr = iova; unsigned int i; + if (!sgt) + return msm_iommu_pagetable_map_prr(mmu, iova, len, prot); + for_each_sgtable_sg(sgt, sg, i) { size_t size = sg->length; phys_addr_t phys = sg_phys(sg); @@ -177,9 +211,16 @@ static void msm_iommu_pagetable_destroy(struct msm_mmu *mmu) * If this is the last attached pagetable for the parent, * disable TTBR0 in the arm-smmu driver */ - if (atomic_dec_return(&iommu->pagetables) == 0) + if (atomic_dec_return(&iommu->pagetables) == 0) { adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, NULL); + if (adreno_smmu->set_prr_bit) { + adreno_smmu->set_prr_bit(adreno_smmu->cookie, false); + __free_page(iommu->prr_page); + iommu->prr_page = NULL; + } + } + free_io_pgtable_ops(pagetable->pgtbl_ops); kfree(pagetable); } @@ -336,6 +377,25 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent) kfree(pagetable); return ERR_PTR(ret); } + + BUG_ON(iommu->prr_page); + if (adreno_smmu->set_prr_bit) { + /* + * We need a zero'd page for two reasons: + * + * 1) Reserve a known physical address to use when + * mapping NULL / sparsely resident regions + * 2) Read back zero + * + * It appears the hw drops writes to the PRR region + * on the floor, but reads actually return whatever + * is in the PRR page. + */ + iommu->prr_page = alloc_page(GFP_KERNEL | __GFP_ZERO); + adreno_smmu->set_prr_addr(adreno_smmu->cookie, + page_to_phys(iommu->prr_page)); + adreno_smmu->set_prr_bit(adreno_smmu->cookie, true); + } } /* Needed later for TLB flush */ diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 2342cb90857e..5bc5e4526ccf 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -91,6 +91,8 @@ struct drm_msm_timespec { #define MSM_PARAM_UBWC_SWIZZLE 0x12 /* RO */ #define MSM_PARAM_MACROTILE_MODE 0x13 /* RO */ #define MSM_PARAM_UCHE_TRAP_BASE 0x14 /* RO */ +/* PRR (Partially Resident Region) is required for sparse residency: */ +#define MSM_PARAM_HAS_PRR 0x15 /* RO */ /* For backwards compat. 
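For userspace, PRR support is discoverable through GET_PARAM before the
driver advertises sparse residency.  A minimal sketch, assuming libdrm's
drmIoctl() and the msm_drm.h uapi header above (the helper name
msm_has_prr() is illustrative, not from the patch):

    #include <errno.h>
    #include <xf86drm.h>
    #include "msm_drm.h"

    /* Returns 1 if PRR is supported, 0 if not, negative errno on error. */
    static int msm_has_prr(int fd)
    {
            struct drm_msm_param req = {
                    .pipe  = MSM_PIPE_3D0,
                    .param = MSM_PARAM_HAS_PRR,
            };

            if (drmIoctl(fd, DRM_IOCTL_MSM_GET_PARAM, &req))
                    return -errno;

            return req.value ? 1 : 0;
    }

On kernels predating this patch the param is unknown and GET_PARAM fails
with -EINVAL (the default case in adreno_get_param() above), so treating
any error as "no PRR" is the safe interpretation.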
From patchwork Thu Jun 5 18:29:02 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894504
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
    Marijn Suijten, David Airlie, Simona Vetter,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 17/40] drm/msm: Rename msm_gem_vma_purge() -> _unmap()
Date: Thu, 5 Jun 2025 11:29:02 -0700
Message-ID: <20250605183111.163594-18-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

This is a more descriptive name.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c     | 6 +++---
 drivers/gpu/drm/msm/msm_gem.h     | 2 +-
 drivers/gpu/drm/msm/msm_gem_vma.c | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 194a15802a5f..89fead77c0d8 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -61,7 +61,7 @@ static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm)
 	drm_gpuvm_bo_for_each_va (vma, vm_bo) {
 		if (vma->vm != vm)
 			continue;
-		msm_gem_vma_purge(vma);
+		msm_gem_vma_unmap(vma);
 		msm_gem_vma_close(vma);
 		break;
 	}
@@ -437,7 +437,7 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
 		drm_gpuvm_bo_get(vm_bo);
 
 		drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
-			msm_gem_vma_purge(vma);
+			msm_gem_vma_unmap(vma);
 			if (close)
 				msm_gem_vma_close(vma);
 		}
@@ -615,7 +615,7 @@ static int clear_iova(struct drm_gem_object *obj,
 	if (!vma)
 		return 0;
 
-	msm_gem_vma_purge(vma);
+	msm_gem_vma_unmap(vma);
 	msm_gem_vma_close(vma);
 
 	return 0;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 2dd9a7f585f4..ec1a7a837e52 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -111,7 +111,7 @@ struct msm_gem_vma {
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
 		u64 offset, u64 range_start, u64 range_end);
-void msm_gem_vma_purge(struct drm_gpuva *vma);
+void msm_gem_vma_unmap(struct drm_gpuva *vma);
 int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
 void msm_gem_vma_close(struct drm_gpuva *vma);
 
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index ef0efd87e4a6..e16a8cafd8be 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -21,7 +21,7 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 }
 
 /* Actually unmap memory for the vma */
-void msm_gem_vma_purge(struct drm_gpuva *vma)
+void msm_gem_vma_unmap(struct drm_gpuva *vma)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 	struct msm_gem_vm *vm = to_msm_vm(vma->vm);

From patchwork Thu Jun 5 18:29:03 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894502
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
    Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 18/40] drm/msm: Drop queued submits on lastclose()
Date: Thu, 5 Jun 2025 11:29:03 -0700
Message-ID: <20250605183111.163594-19-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

If we haven't written the submit into the ringbuffer yet, then drop it.
The submit still retires through the normal path, to preserve fence
signalling order, but we can skip the IBs to the userspace cmdstream.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.c        | 1 +
 drivers/gpu/drm/msm/msm_gpu.h        | 8 ++++++++
 drivers/gpu/drm/msm/msm_ringbuffer.c | 6 ++++++
 3 files changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 6ef29bc48bb0..5909720be48d 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -250,6 +250,7 @@ static int msm_open(struct drm_device *dev, struct drm_file *file)
 
 static void context_close(struct msm_context *ctx)
 {
+	ctx->closed = true;
 	msm_submitqueue_close(ctx);
 	msm_context_put(ctx);
 }
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index d8425e6d7f5a..bfaec80e5f2d 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -362,6 +362,14 @@ struct msm_context {
 	 */
 	int queueid;
 
+	/**
+	 * @closed: The device file associated with this context has been closed.
+	 *
+	 * Once the device is closed, any submits that have not been written
+	 * to the ring buffer are no-op'd.
+	 */
+	bool closed;
+
 	/** @vm: the per-process GPU address-space */
 	struct drm_gpuvm *vm;
 
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
index bbf8503f6bb5..b8bcd5d9690d 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
@@ -17,6 +17,7 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
 	struct msm_fence_context *fctx = submit->ring->fctx;
 	struct msm_gpu *gpu = submit->gpu;
 	struct msm_drm_private *priv = gpu->dev->dev_private;
+	unsigned nr_cmds = submit->nr_cmds;
 	int i;
 
 	msm_fence_init(submit->hw_fence, fctx);
@@ -36,8 +37,13 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job)
 	/* TODO move submit path over to using a per-ring lock.. */
 	mutex_lock(&gpu->lock);
 
+	if (submit->queue->ctx->closed)
+		submit->nr_cmds = 0;
+
 	msm_gpu_submit(gpu, submit);
 
+	submit->nr_cmds = nr_cmds;
+
 	mutex_unlock(&gpu->lock);
 
 	return dma_fence_get(submit->hw_fence);

From patchwork Thu Jun 5 18:29:04 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894233
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
    Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 19/40] drm/msm: Lazily create context VM
Date: Thu, 5 Jun 2025 11:29:04 -0700
Message-ID: <20250605183111.163594-20-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

In the next commit, a way for userspace to opt in to a userspace-managed
VM is added.  For this to work, we need to defer creation of the VM
until it is needed.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  3 ++-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 14 +++++++-----
 drivers/gpu/drm/msm/msm_drv.c           | 29 ++++++++++++++++++++-----
 drivers/gpu/drm/msm/msm_gem_submit.c    |  2 +-
 drivers/gpu/drm/msm/msm_gpu.h           |  9 +++++++-
 5 files changed, 43 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index c43a443661e4..0d7c2a2eeb8f 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -112,6 +112,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 {
 	bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
 	struct msm_context *ctx = submit->queue->ctx;
+	struct drm_gpuvm *vm = msm_context_vm(submit->dev, ctx);
 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
 	phys_addr_t ttbr;
 	u32 asid;
@@ -120,7 +121,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 	if (ctx->seqno == ring->cur_ctx_seqno)
 		return;
 
-	if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid))
+	if (msm_iommu_pagetable_params(to_msm_vm(vm)->mmu, &ttbr, &asid))
 		return;
 
 	if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index e24f627daf37..b70ed4bc0e0d 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -373,6 +373,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct drm_device *drm = gpu->dev;
+	/* Note ctx can be NULL when called from rd_open(): */
+	struct drm_gpuvm *vm = ctx ? msm_context_vm(drm, ctx) : NULL;
 
 	/* No pointer params yet */
 	if (*len != 0)
@@ -418,8 +420,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		*value = 0;
 		return 0;
 	case MSM_PARAM_FAULTS:
-		if (ctx->vm)
-			*value = gpu->global_faults + to_msm_vm(ctx->vm)->faults;
+		if (vm)
+			*value = gpu->global_faults + to_msm_vm(vm)->faults;
 		else
 			*value = gpu->global_faults;
 		return 0;
@@ -427,14 +429,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		*value = gpu->suspend_count;
 		return 0;
 	case MSM_PARAM_VA_START:
-		if (ctx->vm == gpu->vm)
+		if (vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->mm_start;
+		*value = vm->mm_start;
 		return 0;
 	case MSM_PARAM_VA_SIZE:
-		if (ctx->vm == gpu->vm)
+		if (vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->mm_range;
+		*value = vm->mm_range;
 		return 0;
 	case MSM_PARAM_HIGHEST_BANK_BIT:
 		*value = adreno_gpu->ubwc_config.highest_bank_bit;
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 5909720be48d..ac8a5b072afe 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -214,10 +214,29 @@ static void load_gpu(struct drm_device *dev)
 	mutex_unlock(&init_lock);
 }
 
+/**
+ * msm_context_vm - lazily create the context's VM
+ *
+ * @dev: the drm device
+ * @ctx: the context
+ *
+ * The VM is lazily created, so that userspace has a chance to opt in to
+ * having a userspace managed VM before the VM is created.
+ *
+ * Note that this does not return a reference to the VM.  Once the VM is
+ * created, it exists for the lifetime of the context.
+ */
+struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
+{
+	struct msm_drm_private *priv = dev->dev_private;
+	if (!ctx->vm)
+		ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
+	return ctx->vm;
+}
+
 static int context_init(struct drm_device *dev, struct drm_file *file)
 {
 	static atomic_t ident = ATOMIC_INIT(0);
-	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_context *ctx;
 
 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
@@ -230,7 +249,6 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
 	kref_init(&ctx->ref);
 	msm_submitqueue_init(dev, ctx);
 
-	ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
 	file->driver_priv = ctx;
 
 	ctx->seqno = atomic_inc_return(&ident);
@@ -409,7 +427,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 	 * Don't pin the memory here - just get an address so that userspace can
 	 * be productive
 	 */
-	return msm_gem_get_iova(obj, ctx->vm, iova);
+	return msm_gem_get_iova(obj, msm_context_vm(dev, ctx), iova);
 }
 
 static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
@@ -418,18 +436,19 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 {
 	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_context *ctx = file->driver_priv;
+	struct drm_gpuvm *vm = msm_context_vm(dev, ctx);
 
 	if (!priv->gpu)
 		return -EINVAL;
 
 	/* Only supported if per-process address space is supported: */
-	if (priv->gpu->vm == ctx->vm)
+	if (priv->gpu->vm == vm)
 		return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");
 
 	if (should_fail(&fail_gem_iova, obj->size))
 		return -ENOMEM;
 
-	return msm_gem_set_iova(obj, ctx->vm, iova);
+	return msm_gem_set_iova(obj, vm, iova);
 }
 
 static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index f51f2c00e6e2..9d58d6f643af 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 
 	kref_init(&submit->ref);
 	submit->dev = dev;
-	submit->vm = queue->ctx->vm;
+	submit->vm = msm_context_vm(dev, queue->ctx);
 	submit->gpu = gpu;
 	submit->cmd = (void *)&submit->bos[nr_bos];
 	submit->queue = queue;
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index bfaec80e5f2d..d1530de96315 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -370,7 +370,12 @@ struct msm_context {
 	 */
 	bool closed;
 
-	/** @vm: the per-process GPU address-space */
+	/**
+	 * @vm:
+	 *
+	 * The per-process GPU address-space.  Do not access directly, use
+	 * msm_context_vm().
+	 */
 	struct drm_gpuvm *vm;
 
 	/** @kref: the reference count */
@@ -455,6 +460,8 @@ struct msm_context {
 	atomic64_t ctx_mem;
 };
 
+struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
+
 /**
  * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
  *
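A consequence worth spelling out: the context VM now comes into
existence on first use rather than at open() time.  A rough
userspace-side sketch (drmIoctl() from libdrm; bo_handle is assumed to
come from an earlier MSM_GEM_NEW; error handling elided):

    #include <stdint.h>
    #include <xf86drm.h>
    #include "msm_drm.h"

    static uint64_t get_bo_iova(int fd, uint32_t bo_handle)
    {
            struct drm_msm_gem_info info = {
                    .handle = bo_handle,         /* assumed: from MSM_GEM_NEW */
                    .info   = MSM_INFO_GET_IOVA,
            };

            /* The kernel instantiates ctx->vm lazily inside this ioctl
             * (via msm_context_vm() above): */
            if (drmIoctl(fd, DRM_IOCTL_MSM_GEM_INFO, &info))
                    return 0;

            return (uint64_t)info.value;
    }

Any userspace that wants to opt in to managing the VM itself (next
patch) therefore has to do so before the first call of this kind.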
From patchwork Thu Jun 5 18:29:05 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894232
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
    Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 20/40] drm/msm: Add opt-in for VM_BIND
Date: Thu, 5 Jun 2025 11:29:05 -0700
Message-ID: <20250605183111.163594-21-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

Add a SET_PARAM for userspace to request to manage the VM itself,
instead of getting a kernel managed VM.

In order to transition to a userspace managed VM, this param must be set
before any mappings are created.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  4 ++--
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 15 +++++++++++++
 drivers/gpu/drm/msm/msm_drv.c           | 22 +++++++++++++++++--
 drivers/gpu/drm/msm/msm_gem.c           |  8 +++++++
 drivers/gpu/drm/msm/msm_gpu.c           |  5 +++--
 drivers/gpu/drm/msm/msm_gpu.h           | 29 +++++++++++++++++++++++--
 include/uapi/drm/msm_drm.h              | 24 ++++++++++++++++++++
 7 files changed, 99 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 0d7c2a2eeb8f..f0e37733c65d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2263,7 +2263,7 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 }
 
 static struct drm_gpuvm *
-a6xx_create_private_vm(struct msm_gpu *gpu)
+a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed)
 {
 	struct msm_mmu *mmu;
 
@@ -2273,7 +2273,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu)
 		return ERR_CAST(mmu);
 
 	return msm_gem_vm_create(gpu->dev, mmu, "gpu", ADRENO_VM_START,
-				 adreno_private_vm_size(gpu), true);
+				 adreno_private_vm_size(gpu), kernel_managed);
 }
 
 static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index b70ed4bc0e0d..efe03f3f42ba 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -508,6 +508,21 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		if (!capable(CAP_SYS_ADMIN))
 			return UERR(EPERM, drm, "invalid permissions");
 		return msm_context_set_sysprof(ctx, gpu, value);
+	case MSM_PARAM_EN_VM_BIND:
+		/* We can only support VM_BIND with per-process pgtables: */
+		if (ctx->vm == gpu->vm)
+			return UERR(EINVAL, drm, "requires per-process pgtables");
+
+		/*
+		 * We can only switch to VM_BIND mode if the VM has not yet
+		 * been created:
+		 */
+		if (ctx->vm)
+			return UERR(EBUSY, drm, "VM already created");
+
+		ctx->userspace_managed_vm = value;
+
+		return 0;
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index ac8a5b072afe..89cb7820064f 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -228,9 +228,21 @@ static void load_gpu(struct drm_device *dev)
  */
 struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
 {
+	static DEFINE_MUTEX(init_lock);
 	struct msm_drm_private *priv = dev->dev_private;
-	if (!ctx->vm)
-		ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
+
+	/* Once ctx->vm is created it is valid for the lifetime of the context: */
+	if (ctx->vm)
+		return ctx->vm;
+
+	mutex_lock(&init_lock);
+	if (!ctx->vm) {
+		ctx->vm = msm_gpu_create_private_vm(
+			priv->gpu, current, !ctx->userspace_managed_vm);
+	}
+	mutex_unlock(&init_lock);
+
 	return ctx->vm;
 }
 
@@ -420,6 +432,9 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 	if (!priv->gpu)
 		return -EINVAL;
 
+	if (msm_context_is_vmbind(ctx))
+		return UERR(EINVAL, dev, "VM_BIND is enabled");
+
 	if (should_fail(&fail_gem_iova, obj->size))
 		return -ENOMEM;
 
@@ -441,6 +456,9 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 	if (!priv->gpu)
 		return -EINVAL;
 
+	if (msm_context_is_vmbind(ctx))
+		return UERR(EINVAL, dev, "VM_BIND is enabled");
+
 	/* Only supported if per-process address space is supported: */
 	if (priv->gpu->vm == vm)
 		return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 89fead77c0d8..142845378deb 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -85,6 +85,14 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 	if (!ctx->vm)
 		return;
 
+	/*
+	 * VM_BIND does not depend on implicit teardown of VMAs on handle
+	 * close, but instead on implicit teardown of the VM when the device
+	 * is closed (see msm_gem_vm_close())
+	 */
+	if (msm_context_is_vmbind(ctx))
+		return;
+
 	/*
 	 * TODO we might need to kick this to a queue to avoid blocking
 	 * in CLOSE ioctl
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 82e33aa1ccd0..0314e15d04c2 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -831,7 +831,8 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu)
 
 /* Return a new address space for a msm_drm_private instance */
 struct drm_gpuvm *
-msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task,
+			  bool kernel_managed)
 {
 	struct drm_gpuvm *vm = NULL;
 
@@ -843,7 +844,7 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
 	 * the global one
 	 */
 	if (gpu->funcs->create_private_vm) {
-		vm = gpu->funcs->create_private_vm(gpu);
+		vm = gpu->funcs->create_private_vm(gpu, kernel_managed);
 		if (!IS_ERR(vm))
 			to_msm_vm(vm)->pid = get_pid(task_pid(task));
 	}
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index d1530de96315..448ebf721bd8 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -79,7 +79,7 @@ struct msm_gpu_funcs {
 	void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp,
 			     bool suspended);
 	struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev);
-	struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu);
+	struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu, bool kernel_managed);
 	uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
 
 	/**
@@ -370,6 +370,14 @@ struct msm_context {
 	 */
 	bool closed;
 
+	/**
+	 * @userspace_managed_vm:
+	 *
+	 * Has userspace opted-in to userspace managed VM (ie. VM_BIND) via
+	 * MSM_PARAM_EN_VM_BIND?
+	 */
+	bool userspace_managed_vm;
+
 	/**
 	 * @vm:
 	 *
@@ -462,6 +470,22 @@ struct msm_context {
 
 struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
 
+/**
+ * msm_context_is_vmbind() - has userspace opted in to VM_BIND?
+ *
+ * @ctx: the drm_file context
+ *
+ * See MSM_PARAM_EN_VM_BIND.  If userspace is managing the VM, it can
+ * do sparse binding including having multiple, potentially partial,
+ * mappings in the VM.  Therefore certain legacy uabi (ie. GET_IOVA,
+ * SET_IOVA) are rejected because they don't have a sensible meaning.
+ */
+static inline bool
+msm_context_is_vmbind(struct msm_context *ctx)
+{
+	return ctx->userspace_managed_vm;
+}
+
 /**
  * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
  *
@@ -689,7 +713,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 		const char *name, struct msm_gpu_config *config);
 
 struct drm_gpuvm *
-msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task);
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task,
+			  bool kernel_managed);
 
 void msm_gpu_cleanup(struct msm_gpu *gpu);
 
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 5bc5e4526ccf..b974f5a24dbc 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -93,6 +93,30 @@ struct drm_msm_timespec {
 #define MSM_PARAM_UCHE_TRAP_BASE 0x14 /* RO */
 /* PRR (Partially Resident Region) is required for sparse residency: */
 #define MSM_PARAM_HAS_PRR 0x15 /* RO */
+/* MSM_PARAM_EN_VM_BIND is set to 1 to enable VM_BIND ops.
+ *
+ * With VM_BIND enabled, userspace is required to allocate iova and use the
+ * VM_BIND ops for map/unmap ioctls.  MSM_INFO_SET_IOVA and MSM_INFO_GET_IOVA
+ * will be rejected.  (The latter does not have a sensible meaning when a BO
+ * can have multiple and/or partial mappings.)
+ *
+ * With VM_BIND enabled, userspace does not include a submit_bo table in the
+ * SUBMIT ioctl (this will be rejected), the resident set is determined by
+ * the VM_BIND ops.
+ *
+ * Enabling VM_BIND will fail on devices which do not have per-process pgtables.
+ * And it is not allowed to disable VM_BIND once it has been enabled.
+ *
+ * Enabling VM_BIND should be done (attempted) prior to allocating any BOs or
+ * submitqueues of type MSM_SUBMITQUEUE_VM_BIND.
+ *
+ * Relatedly, when VM_BIND mode is enabled, the kernel will not try to recover
+ * from GPU faults or failed async VM_BIND ops, in particular because it is
+ * difficult to communicate to userspace which op failed so that userspace
+ * could rewind and try again.  When the VM is marked unusable, the SUBMIT
+ * ioctl will throw -EPIPE.
+ */
+#define MSM_PARAM_EN_VM_BIND 0x16 /* WO, once */
 
 /* For backwards compat.  The original support for preemption was based on
  * a single ring per priority level so # of priority levels equals the #
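To make the ordering requirement concrete, a sketch of the opt-in as a
userspace driver might issue it at device-open time, before any BO or
submitqueue is created (drmIoctl() from libdrm; error handling
simplified):

    #include <xf86drm.h>
    #include "msm_drm.h"

    /* Must run before anything (SUBMIT, GET_IOVA, GEM_INFO, ...) triggers
     * lazy creation of ctx->vm, or the kernel returns -EBUSY. */
    static int msm_enable_vm_bind(int fd)
    {
            struct drm_msm_param req = {
                    .pipe  = MSM_PIPE_3D0,
                    .param = MSM_PARAM_EN_VM_BIND,
                    .value = 1,
            };

            /* -EINVAL here means no per-process pgtables on this device. */
            return drmIoctl(fd, DRM_IOCTL_MSM_SET_PARAM, &req);
    }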
From patchwork Thu Jun 5 18:29:06 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894503
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
    Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 21/40] drm/msm: Mark VM as unusable on GPU hangs
Date: Thu, 5 Jun 2025 11:29:06 -0700
Message-ID: <20250605183111.163594-22-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

If userspace has opted-in to VM_BIND, then GPU hangs and VM_BIND errors
will mark the VM as unusable.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h        | 17 +++++++++++++++++
 drivers/gpu/drm/msm/msm_gem_submit.c |  3 +++
 drivers/gpu/drm/msm/msm_gpu.c        | 16 ++++++++++++++--
 3 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index ec1a7a837e52..5e8c419ed834 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -76,6 +76,23 @@ struct msm_gem_vm {

 	/** @managed: is this a kernel managed VM? */
 	bool managed;
+
+	/**
+	 * @unusable: True if the VM has turned unusable because something
+	 * bad happened during an asynchronous request.
+	 *
+	 * We don't try to recover from such failures, because this implies
+	 * informing userspace about the specific operation that failed, and
+	 * hoping the userspace driver can replay things from there.  This all
+	 * sounds very complicated for little gain.
+	 *
+	 * Instead, we should just flag the VM as unusable, and fail any
+	 * further request targeting this VM.
+	 *
+	 * As an analogy, this would be mapped to a VK_ERROR_DEVICE_LOST
+	 * situation, where the logical device needs to be re-created.
+	 */
+	bool unusable;
 };

 #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base)
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 9d58d6f643af..fe43fd4049de 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -679,6 +679,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;

+	if (to_msm_vm(ctx->vm)->unusable)
+		return UERR(EPIPE, dev, "context is unusable");
+
 	/* for now, we just have 3d pipe.. eventually this would need to
 	 * be more clever to dispatch to appropriate gpu module:
 	 */
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 0314e15d04c2..6503ce655b10 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -386,8 +386,20 @@ static void recover_worker(struct kthread_work *work)

 	/* Increment the fault counts */
 	submit->queue->faults++;
-	if (submit->vm)
-		to_msm_vm(submit->vm)->faults++;
+	if (submit->vm) {
+		struct msm_gem_vm *vm = to_msm_vm(submit->vm);
+
+		vm->faults++;
+
+		/*
+		 * If userspace has opted-in to VM_BIND (and therefore userspace
+		 * management of the VM), faults mark the VM as unusable.  This
+		 * matches vulkan expectations (vulkan is the main target for
+		 * VM_BIND)
+		 */
+		if (!vm->managed)
+			vm->unusable = true;
+	}

 	get_comm_cmdline(submit, &comm, &cmd);
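The new -EPIPE failure mode gives userspace a single, stable signal that the
context is dead. Below is a minimal userspace-side sketch of consuming it; it
is illustrative only and not part of the patch, assuming libdrm's drmIoctl()
and the existing DRM_IOCTL_MSM_GEM_SUBMIT ioctl (the helper name is
hypothetical):

#include <errno.h>
#include <xf86drm.h>
#include "msm_drm.h"

/* Hypothetical helper: submit, and map the VM-unusable error to a
 * device-lost result, the way a Vulkan driver would. */
static int submit_and_check(int fd, struct drm_msm_gem_submit *req)
{
	int ret = drmIoctl(fd, DRM_IOCTL_MSM_GEM_SUBMIT, req);

	if (ret && errno == EPIPE) {
		/* The kernel flagged this VM as unusable (e.g. after a
		 * GPU hang); no further submit against this context will
		 * succeed.  Treat it as VK_ERROR_DEVICE_LOST and let the
		 * app re-create the logical device. */
		return -EPIPE;
	}

	return ret;
}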
From patchwork Thu Jun 5 18:29:07 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH v6 22/40] drm/msm: Add _NO_SHARE flag
Date: Thu, 5 Jun 2025 11:29:07 -0700
Message-ID: <20250605183111.163594-23-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

Buffers that are not shared between contexts can share a single resv
object.  This way drm_gpuvm will not track them as external objects, and
submit-time validation overhead will be O(1) for all N non-shared BOs,
instead of O(N).

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.h       |  1 +
 drivers/gpu/drm/msm/msm_gem.c       | 21 +++++++++++++++++++++
 drivers/gpu/drm/msm/msm_gem_prime.c | 15 +++++++++++++++
 include/uapi/drm/msm_drm.h          | 14 ++++++++++++++
 4 files changed, 51 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 136dd928135a..2f42d075f13a 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -246,6 +246,7 @@ int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map);
 void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 		struct dma_buf_attachment *attach, struct sg_table *sg);
+struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags);
 int msm_gem_prime_pin(struct drm_gem_object *obj);
 void msm_gem_prime_unpin(struct drm_gem_object *obj);

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 142845378deb..ec349719b49a 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -554,6 +554,9 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,

 	msm_gem_assert_locked(obj);

+	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
+		return -EINVAL;
+
 	vma = get_vma_locked(obj, vm, range_start, range_end);
 	if (IS_ERR(vma))
 		return PTR_ERR(vma);
@@ -1084,6 +1087,14 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
 		put_pages(obj);
 	}

+	if (msm_obj->flags & MSM_BO_NO_SHARE) {
+		struct drm_gem_object *r_obj =
+			container_of(obj->resv, struct drm_gem_object, _resv);
+
+		/* Drop reference we hold to shared resv obj: */
+		drm_gem_object_put(r_obj);
+	}
+
 	drm_gem_object_release(obj);

 	kfree(msm_obj->metadata);
@@ -1116,6 +1127,15 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
 	if (name)
 		msm_gem_object_set_name(obj, "%s", name);

+	if (flags & MSM_BO_NO_SHARE) {
+		struct msm_context *ctx = file->driver_priv;
+		struct drm_gem_object *r_obj = drm_gpuvm_resv_obj(ctx->vm);
+
+		drm_gem_object_get(r_obj);
+
+		obj->resv = r_obj->resv;
+	}
+
 	ret = drm_gem_handle_create(file, obj, handle);

 	/* drop reference from allocate - handle holds it now */
@@ -1148,6 +1168,7 @@ static const struct drm_gem_object_funcs msm_gem_object_funcs = {
 	.free = msm_gem_free_object,
 	.open = msm_gem_open,
 	.close = msm_gem_close,
+	.export = msm_gem_prime_export,
 	.pin = msm_gem_prime_pin,
 	.unpin = msm_gem_prime_unpin,
 	.get_sg_table = msm_gem_prime_get_sg_table,
diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
index ee267490c935..1a6d8099196a 100644
--- a/drivers/gpu/drm/msm/msm_gem_prime.c
+++ b/drivers/gpu/drm/msm/msm_gem_prime.c
@@ -16,6 +16,9 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	int npages = obj->size >> PAGE_SHIFT;

+	if (msm_obj->flags & MSM_BO_NO_SHARE)
+		return ERR_PTR(-EINVAL);
+
 	if (WARN_ON(!msm_obj->pages))  /* should have already pinned! */
 		return ERR_PTR(-ENOMEM);

@@ -45,6 +48,15 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 	return msm_gem_import(dev, attach->dmabuf, sg);
 }

+struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags)
+{
+	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
+		return ERR_PTR(-EPERM);
+
+	return drm_gem_prime_export(obj, flags);
+}
+
 int msm_gem_prime_pin(struct drm_gem_object *obj)
 {
 	struct page **pages;
@@ -53,6 +65,9 @@ int msm_gem_prime_pin(struct drm_gem_object *obj)
 	if (obj->import_attach)
 		return 0;

+	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
+		return -EINVAL;
+
 	pages = msm_gem_pin_pages_locked(obj);
 	if (IS_ERR(pages))
 		ret = PTR_ERR(pages);
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index b974f5a24dbc..1bccc347945c 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -140,6 +140,19 @@ struct drm_msm_param {
 #define MSM_BO_SCANOUT       0x00000001 /* scanout capable */
 #define MSM_BO_GPU_READONLY  0x00000002
+/* Private buffers do not need to be explicitly listed in the SUBMIT
+ * ioctl, unless referenced by a drm_msm_gem_submit_cmd.  Private
+ * buffers may NOT be imported/exported or used for scanout (or any
+ * other situation where buffers can be indefinitely pinned, but
+ * cases other than scanout are all kernel owned BOs which are not
+ * visible to userspace).
+ *
+ * In exchange for those constraints, all private BOs associated with
+ * a single context (drm_file) share a single dma_resv, and if there
+ * has been no eviction since the last submit, there is no per-BO
+ * bookkeeping to do, significantly cutting the SUBMIT overhead.
+ */
+#define MSM_BO_NO_SHARE      0x00000004
 #define MSM_BO_CACHE_MASK    0x000f0000
 /* cache modes */
 #define MSM_BO_CACHED        0x00010000
@@ -149,6 +162,7 @@ struct drm_msm_param {

 #define MSM_BO_FLAGS         (MSM_BO_SCANOUT | \
                               MSM_BO_GPU_READONLY | \
+                              MSM_BO_NO_SHARE | \
                               MSM_BO_CACHE_MASK)

 struct drm_msm_gem_new {
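From userspace, opting into the private-resv path is a one-flag change at
allocation time. A minimal sketch, assuming libdrm and the existing
DRM_IOCTL_MSM_GEM_NEW ioctl (the helper name is hypothetical):

#include <stdint.h>
#include <xf86drm.h>
#include "msm_drm.h"

/* Hypothetical helper: allocate a context-private BO.  It can never be
 * exported or scanned out, but it shares the per-context resv, so it
 * adds no per-BO locking/bookkeeping overhead at submit time. */
static int alloc_private_bo(int fd, uint64_t size, uint32_t *handle)
{
	struct drm_msm_gem_new req = {
		.size  = size,
		.flags = MSM_BO_NO_SHARE | MSM_BO_WC,
	};
	int ret = drmIoctl(fd, DRM_IOCTL_MSM_GEM_NEW, &req);

	if (ret)
		return ret;

	*handle = req.handle;
	return 0;
}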
From patchwork Thu Jun 5 18:29:08 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org
Subject: [PATCH v6 23/40] drm/msm: Crashdump prep for sparse mappings
Date: Thu, 5 Jun 2025 11:29:08 -0700
Message-ID: <20250605183111.163594-24-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

In this case, userspace could request dumping partial GEM obj mappings.

Also drop use of the should_dump() helper, which really only makes sense
in the old submit->bos[] table world.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gpu.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 6503ce655b10..2eaca2a22de9 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -219,13 +219,14 @@ static void msm_gpu_devcoredump_free(void *data)
 }

 static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
-		struct drm_gem_object *obj, u64 iova, bool full)
+		struct drm_gem_object *obj, u64 iova,
+		bool full, size_t offset, size_t size)
 {
 	struct msm_gpu_state_bo *state_bo = &state->bos[state->nr_bos];
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);

 	/* Don't record write only objects */
-	state_bo->size = obj->size;
+	state_bo->size = size;
 	state_bo->flags = msm_obj->flags;
 	state_bo->iova = iova;

@@ -236,7 +237,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 	if (full) {
 		void *ptr;

-		state_bo->data = kvmalloc(obj->size, GFP_KERNEL);
+		state_bo->data = kvmalloc(size, GFP_KERNEL);
 		if (!state_bo->data)
 			goto out;

@@ -249,7 +250,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 			goto out;
 		}

-		memcpy(state_bo->data, ptr, obj->size);
+		memcpy(state_bo->data, ptr + offset, size);

 		msm_gem_put_vaddr(obj);
 	}
out:
@@ -279,6 +280,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 	state->fault_info = gpu->fault_info;

 	if (submit) {
+		extern bool rd_full;
 		int i;

 		if (state->fault_info.ttbr0) {
@@ -294,9 +296,10 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 			sizeof(struct msm_gpu_state_bo), GFP_KERNEL);

 		for (i = 0; state->bos && i < submit->nr_bos; i++) {
-			msm_gpu_crashstate_get_bo(state, submit->bos[i].obj,
-						  submit->bos[i].iova,
-						  should_dump(submit, i));
+			struct drm_gem_object *obj = submit->bos[i].obj;
+			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+			msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
+						  dump, 0, obj->size);
 		}
 	}
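To make the new offset/size parameters concrete, two illustrative calls
follow. The first is the full-object capture the submit->bos[] path above
still performs; the second is a hypothetical partial capture (map_offset,
map_size, and vma_iova are illustrative names, not from the patch):

/* whole object, as the submit->bos[] path still does: */
msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
			  dump, 0, obj->size);

/* hypothetical partial capture of a sparse mapping: only bytes
 * [map_offset, map_offset + map_size) of the BO are snapshotted: */
msm_gpu_crashstate_get_bo(state, obj, vma_iova,
			  dump, map_offset, map_size);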
From patchwork Thu Jun 5 18:29:09 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org
Subject: [PATCH v6 24/40] drm/msm: rd dumping prep for sparse mappings
Date: Thu, 5 Jun 2025 11:29:09 -0700
Message-ID: <20250605183111.163594-25-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

Similar to the previous commit, add support for dumping partial
mappings.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h | 10 ---------
 drivers/gpu/drm/msm/msm_rd.c  | 38 ++++++++++++++++-------------------
 2 files changed, 17 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 5e8c419ed834..b44a4f7313c9 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -403,14 +403,4 @@ static inline void msm_gem_submit_put(struct msm_gem_submit *submit)

 void msm_submit_retire(struct msm_gem_submit *submit);

-/* helper to determine of a buffer in submit should be dumped, used for both
- * devcoredump and debugfs cmdstream dumping:
- */
-static inline bool
-should_dump(struct msm_gem_submit *submit, int idx)
-{
-	extern bool rd_full;
-	return rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
-}
-
 #endif /* __MSM_GEM_H__ */
diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
index 39138e190cb9..edbcb93410a9 100644
--- a/drivers/gpu/drm/msm/msm_rd.c
+++ b/drivers/gpu/drm/msm/msm_rd.c
@@ -308,21 +308,11 @@ void msm_rd_debugfs_cleanup(struct msm_drm_private *priv)
 	priv->hangrd = NULL;
 }

-static void snapshot_buf(struct msm_rd_state *rd,
-		struct msm_gem_submit *submit, int idx,
-		uint64_t iova, uint32_t size, bool full)
+static void snapshot_buf(struct msm_rd_state *rd, struct drm_gem_object *obj,
+		uint64_t iova, bool full, size_t offset, size_t size)
 {
-	struct drm_gem_object *obj = submit->bos[idx].obj;
-	unsigned offset = 0;
 	const char *buf;

-	if (iova) {
-		offset = iova - submit->bos[idx].iova;
-	} else {
-		iova = submit->bos[idx].iova;
-		size = obj->size;
-	}
-
 	/*
 	 * Always write the GPUADDR header so can get a complete list of all the
 	 * buffers in the cmd
@@ -333,10 +323,6 @@ static void snapshot_buf(struct msm_rd_state *rd,
 	if (!full)
 		return;

-	/* But only dump the contents of buffers marked READ */
-	if (!(submit->bos[idx].flags & MSM_SUBMIT_BO_READ))
-		return;
-
 	buf = msm_gem_get_vaddr_active(obj);
 	if (IS_ERR(buf))
 		return;
@@ -352,6 +338,7 @@ static void snapshot_buf(struct msm_rd_state *rd,
 void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
 		const char *fmt, ...)
 {
+	extern bool rd_full;
 	struct task_struct *task;
 	char msg[256];
 	int i, n;
@@ -385,16 +372,25 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,

 	rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4));

-	for (i = 0; i < submit->nr_bos; i++)
-		snapshot_buf(rd, submit, i, 0, 0, should_dump(submit, i));
+	for (i = 0; i < submit->nr_bos; i++) {
+		struct drm_gem_object *obj = submit->bos[i].obj;
+		bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+		snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
+	}

 	for (i = 0; i < submit->nr_cmds; i++) {
 		uint32_t szd = submit->cmd[i].size; /* in dwords */
+		int idx = submit->cmd[i].idx;
+		bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);

 		/* snapshot cmdstream bo's (if we haven't already): */
-		if (!should_dump(submit, i)) {
-			snapshot_buf(rd, submit, submit->cmd[i].idx,
-				     submit->cmd[i].iova, szd * 4, true);
+		if (!dump) {
+			struct drm_gem_object *obj = submit->bos[idx].obj;
+			size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
+
+			snapshot_buf(rd, obj, submit->cmd[i].iova, true,
+				     offset, szd * 4);
 		}
 	}
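A worked example of the cmdstream offset math above (all values made up for
illustration):

uint64_t bo_iova  = 0x01000000;	/* submit->bos[idx].iova */
uint64_t cmd_iova = 0x01001000;	/* submit->cmd[i].iova */
uint32_t szd      = 0x80;	/* submit->cmd[i].size, in dwords */

size_t offset = cmd_iova - bo_iova;	/* = 0x1000 bytes into the BO */
size_t bytes  = szd * 4;		/* = 0x200 bytes get snapshotted */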
From patchwork Thu Jun 5 18:29:10 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org
Subject: [PATCH v6 25/40] drm/msm: Crashdump support for sparse
Date: Thu, 5 Jun 2025 11:29:10 -0700
Message-ID: <20250605183111.163594-26-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

In this case, we need to iterate the VMAs looking for ones with the
MSM_VMA_DUMP flag.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gpu.c | 96 ++++++++++++++++++++++++++---------
 1 file changed, 72 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 2eaca2a22de9..8178b6499478 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -241,9 +241,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 		if (!state_bo->data)
 			goto out;

-		msm_gem_lock(obj);
 		ptr = msm_gem_get_vaddr_active(obj);
-		msm_gem_unlock(obj);
 		if (IS_ERR(ptr)) {
 			kvfree(state_bo->data);
 			state_bo->data = NULL;
@@ -251,12 +249,75 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 		}

 		memcpy(state_bo->data, ptr + offset, size);
-		msm_gem_put_vaddr(obj);
+		msm_gem_put_vaddr_locked(obj);
 	}
 out:
 	state->nr_bos++;
 }

+static void crashstate_get_bos(struct msm_gpu_state *state, struct msm_gem_submit *submit)
+{
+	extern bool rd_full;
+
+	if (!submit)
+		return;
+
+	if (msm_context_is_vmbind(submit->queue->ctx)) {
+		struct drm_exec exec;
+		struct drm_gpuva *vma;
+		unsigned cnt = 0;
+
+		drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES, 0);
+		drm_exec_until_all_locked(&exec) {
+			cnt = 0;
+
+			drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(submit->vm));
+			drm_exec_retry_on_contention(&exec);
+
+			drm_gpuvm_for_each_va (vma, submit->vm) {
+				if (!vma->gem.obj)
+					continue;
+
+				cnt++;
+				drm_exec_lock_obj(&exec, vma->gem.obj);
+				drm_exec_retry_on_contention(&exec);
+			}
+
+		}
+
+		drm_gpuvm_for_each_va (vma, submit->vm)
+			cnt++;
+
+		state->bos = kcalloc(cnt, sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
+
+		drm_gpuvm_for_each_va (vma, submit->vm) {
+			bool dump = rd_full || (vma->flags & MSM_VMA_DUMP);
+
+			/* Skip MAP_NULL/PRR VMAs: */
+			if (!vma->gem.obj)
+				continue;
+
+			msm_gpu_crashstate_get_bo(state, vma->gem.obj, vma->va.addr,
+						  dump, vma->gem.offset, vma->va.range);
+		}
+
+		drm_exec_fini(&exec);
+	} else {
+		state->bos = kcalloc(submit->nr_bos,
+			sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
+
+		for (int i = 0; state->bos && i < submit->nr_bos; i++) {
+			struct drm_gem_object *obj = submit->bos[i].obj;
+			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+			msm_gem_lock(obj);
+			msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
+						  dump, 0, obj->size);
+			msm_gem_unlock(obj);
+		}
+	}
+}
+
 static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 		struct msm_gem_submit *submit, char *comm, char *cmd)
 {
@@ -279,30 +340,17 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 	state->cmd = kstrdup(cmd, GFP_KERNEL);
 	state->fault_info = gpu->fault_info;

-	if (submit) {
-		extern bool rd_full;
-		int i;
-
-		if (state->fault_info.ttbr0) {
-			struct msm_gpu_fault_info *info = &state->fault_info;
-			struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
+	if (submit && state->fault_info.ttbr0) {
+		struct msm_gpu_fault_info *info = &state->fault_info;
+		struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;

-			msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
-						   &info->asid);
-			msm_iommu_pagetable_walk(mmu, info->iova, info->ptes);
-		}
-
-		state->bos = kcalloc(submit->nr_bos,
-			sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
-
-		for (i = 0; state->bos && i < submit->nr_bos; i++) {
-			struct drm_gem_object *obj = submit->bos[i].obj;
-			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
-			msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
-						  dump, 0, obj->size);
-		}
+		msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
+					   &info->asid);
+		msm_iommu_pagetable_walk(mmu, info->iova, info->ptes);
 	}

+	crashstate_get_bos(state, submit);
+
 	/* Set the active crash state to be dumped on failure */
 	gpu->crashstate = state;
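The locking in crashstate_get_bos() follows the standard drm_exec retry
pattern. A stripped-down sketch of that pattern, for readers unfamiliar with
it (some_obj is a placeholder, not a name from the patch):

struct drm_exec exec;

drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES, 0);
drm_exec_until_all_locked(&exec) {
	/* On ww-mutex contention the loop body restarts after backing
	 * off, so any state accumulated inside it (like the cnt above)
	 * must be reset at the top of each attempt. */
	drm_exec_lock_obj(&exec, some_obj);
	drm_exec_retry_on_contention(&exec);
}

/* all objects are now locked; safe to walk the VM and snapshot BOs */

drm_exec_fini(&exec);	/* drops every lock taken above */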
From patchwork Thu Jun 5 18:29:11 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org
Subject: [PATCH v6 26/40] drm/msm: rd dumping support for sparse
Date: Thu, 5 Jun 2025 11:29:11 -0700
Message-ID: <20250605183111.163594-27-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

As with devcoredump, we need to iterate the VMAs to figure out what to
dump.
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_rd.c | 48 +++++++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
index edbcb93410a9..54493a94dcb7 100644
--- a/drivers/gpu/drm/msm/msm_rd.c
+++ b/drivers/gpu/drm/msm/msm_rd.c
@@ -372,25 +372,43 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,

 	rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4));

-	for (i = 0; i < submit->nr_bos; i++) {
-		struct drm_gem_object *obj = submit->bos[i].obj;
-		bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+	if (msm_context_is_vmbind(submit->queue->ctx)) {
+		struct drm_gpuva *vma;

-		snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
-	}
+		drm_gpuvm_resv_assert_held(submit->vm);

-	for (i = 0; i < submit->nr_cmds; i++) {
-		uint32_t szd = submit->cmd[i].size; /* in dwords */
-		int idx = submit->cmd[i].idx;
-		bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
+		drm_gpuvm_for_each_va (vma, submit->vm) {
+			bool dump = rd_full || (vma->flags & MSM_VMA_DUMP);
+
+			/* Skip MAP_NULL/PRR VMAs: */
+			if (!vma->gem.obj)
+				continue;
+
+			snapshot_buf(rd, vma->gem.obj, vma->va.addr, dump,
+				     vma->gem.offset, vma->va.range);
+		}
+
+	} else {
+		for (i = 0; i < submit->nr_bos; i++) {
+			struct drm_gem_object *obj = submit->bos[i].obj;
+			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+			snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
+		}
+
+		for (i = 0; i < submit->nr_cmds; i++) {
+			uint32_t szd = submit->cmd[i].size; /* in dwords */
+			int idx = submit->cmd[i].idx;
+			bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);

-		/* snapshot cmdstream bo's (if we haven't already): */
-		if (!dump) {
-			struct drm_gem_object *obj = submit->bos[idx].obj;
-			size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
+			/* snapshot cmdstream bo's (if we haven't already): */
+			if (!dump) {
+				struct drm_gem_object *obj = submit->bos[idx].obj;
+				size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;

-			snapshot_buf(rd, obj, submit->cmd[i].iova, true,
-				     offset, szd * 4);
+				snapshot_buf(rd, obj, submit->cmd[i].iova, true,
+					     offset, szd * 4);
+			}
 		}
 	}
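For context, the MSM_VMA_DUMP flag checked above is recorded per mapping at
bind time. A hypothetical userspace opt-in could look like the following;
the struct and flag names below are assumptions for illustration, since this
excerpt does not show the VM_BIND uapi:

/* Hypothetical: mark one mapping as dump-worthy when binding it, so the
 * rd/crashdump loops above snapshot it even without rd_full. */
struct drm_msm_vm_bind_op op = {
	.op         = MSM_VM_BIND_OP_MAP,	/* assumed op name */
	.handle     = bo_handle,
	.obj_offset = 0,
	.iova       = gpu_va,
	.range      = bo_size,
	.flags      = MSM_VM_BIND_OP_DUMP,	/* assumed dump opt-in flag */
};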
From patchwork Thu Jun 5 18:29:12 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894228
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH v6 27/40] drm/msm: Extract out syncobj helpers
Date: Thu, 5 Jun 2025 11:29:12 -0700
Message-ID: <20250605183111.163594-28-robin.clark@oss.qualcomm.com>

From: Rob Clark

We'll be re-using these for the VM_BIND ioctl. Also, rename a few things in the uapi header to reflect that syncobj use is not specific to the submit ioctl.
Signed-off-by: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/Makefile | 1 + drivers/gpu/drm/msm/msm_gem_submit.c | 192 ++------------------------- drivers/gpu/drm/msm/msm_syncobj.c | 172 ++++++++++++++++++++++++ drivers/gpu/drm/msm/msm_syncobj.h | 37 ++++++ include/uapi/drm/msm_drm.h | 26 ++-- 5 files changed, 235 insertions(+), 193 deletions(-) create mode 100644 drivers/gpu/drm/msm/msm_syncobj.c create mode 100644 drivers/gpu/drm/msm/msm_syncobj.h diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile index 5df20cbeafb8..8af34f87e0c8 100644 --- a/drivers/gpu/drm/msm/Makefile +++ b/drivers/gpu/drm/msm/Makefile @@ -128,6 +128,7 @@ msm-y += \ msm_rd.o \ msm_ringbuffer.o \ msm_submitqueue.o \ + msm_syncobj.o \ msm_gpu_tracepoints.o \ msm-$(CONFIG_DRM_FBDEV_EMULATION) += msm_fbdev.o diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index fe43fd4049de..e3c76971ae5f 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -16,6 +16,7 @@ #include "msm_gpu.h" #include "msm_gem.h" #include "msm_gpu_trace.h" +#include "msm_syncobj.h" /* For userspace errors, use DRM_UT_DRIVER.. so that userspace can enable * error msgs for debugging, but we don't spam dmesg by default @@ -489,173 +490,6 @@ void msm_submit_retire(struct msm_gem_submit *submit) } } -struct msm_submit_post_dep { - struct drm_syncobj *syncobj; - uint64_t point; - struct dma_fence_chain *chain; -}; - -static struct drm_syncobj **msm_parse_deps(struct msm_gem_submit *submit, - struct drm_file *file, - uint64_t in_syncobjs_addr, - uint32_t nr_in_syncobjs, - size_t syncobj_stride) -{ - struct drm_syncobj **syncobjs = NULL; - struct drm_msm_gem_submit_syncobj syncobj_desc = {0}; - int ret = 0; - uint32_t i, j; - - syncobjs = kcalloc(nr_in_syncobjs, sizeof(*syncobjs), - GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY); - if (!syncobjs) - return ERR_PTR(-ENOMEM); - - for (i = 0; i < nr_in_syncobjs; ++i) { - uint64_t address = in_syncobjs_addr + i * syncobj_stride; - - if (copy_from_user(&syncobj_desc, - u64_to_user_ptr(address), - min(syncobj_stride, sizeof(syncobj_desc)))) { - ret = -EFAULT; - break; - } - - if (syncobj_desc.point && - !drm_core_check_feature(submit->dev, DRIVER_SYNCOBJ_TIMELINE)) { - ret = SUBMIT_ERROR(EOPNOTSUPP, submit, "syncobj timeline unsupported"); - break; - } - - if (syncobj_desc.flags & ~MSM_SUBMIT_SYNCOBJ_FLAGS) { - ret = SUBMIT_ERROR(EINVAL, submit, "invalid syncobj flags: %x", syncobj_desc.flags); - break; - } - - ret = drm_sched_job_add_syncobj_dependency(&submit->base, file, - syncobj_desc.handle, syncobj_desc.point); - if (ret) - break; - - if (syncobj_desc.flags & MSM_SUBMIT_SYNCOBJ_RESET) { - syncobjs[i] = - drm_syncobj_find(file, syncobj_desc.handle); - if (!syncobjs[i]) { - ret = SUBMIT_ERROR(EINVAL, submit, "invalid syncobj handle: %u", i); - break; - } - } - } - - if (ret) { - for (j = 0; j <= i; ++j) { - if (syncobjs[j]) - drm_syncobj_put(syncobjs[j]); - } - kfree(syncobjs); - return ERR_PTR(ret); - } - return syncobjs; -} - -static void msm_reset_syncobjs(struct drm_syncobj **syncobjs, - uint32_t nr_syncobjs) -{ - uint32_t i; - - for (i = 0; syncobjs && i < nr_syncobjs; ++i) { - if (syncobjs[i]) - drm_syncobj_replace_fence(syncobjs[i], NULL); - } -} - -static struct msm_submit_post_dep *msm_parse_post_deps(struct drm_device *dev, - struct drm_file *file, - uint64_t syncobjs_addr, - uint32_t nr_syncobjs, - size_t syncobj_stride) -{ - struct msm_submit_post_dep *post_deps; - struct 
drm_msm_gem_submit_syncobj syncobj_desc = {0}; - int ret = 0; - uint32_t i, j; - - post_deps = kcalloc(nr_syncobjs, sizeof(*post_deps), - GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY); - if (!post_deps) - return ERR_PTR(-ENOMEM); - - for (i = 0; i < nr_syncobjs; ++i) { - uint64_t address = syncobjs_addr + i * syncobj_stride; - - if (copy_from_user(&syncobj_desc, - u64_to_user_ptr(address), - min(syncobj_stride, sizeof(syncobj_desc)))) { - ret = -EFAULT; - break; - } - - post_deps[i].point = syncobj_desc.point; - - if (syncobj_desc.flags) { - ret = UERR(EINVAL, dev, "invalid syncobj flags"); - break; - } - - if (syncobj_desc.point) { - if (!drm_core_check_feature(dev, - DRIVER_SYNCOBJ_TIMELINE)) { - ret = UERR(EOPNOTSUPP, dev, "syncobj timeline unsupported"); - break; - } - - post_deps[i].chain = dma_fence_chain_alloc(); - if (!post_deps[i].chain) { - ret = -ENOMEM; - break; - } - } - - post_deps[i].syncobj = - drm_syncobj_find(file, syncobj_desc.handle); - if (!post_deps[i].syncobj) { - ret = UERR(EINVAL, dev, "invalid syncobj handle"); - break; - } - } - - if (ret) { - for (j = 0; j <= i; ++j) { - dma_fence_chain_free(post_deps[j].chain); - if (post_deps[j].syncobj) - drm_syncobj_put(post_deps[j].syncobj); - } - - kfree(post_deps); - return ERR_PTR(ret); - } - - return post_deps; -} - -static void msm_process_post_deps(struct msm_submit_post_dep *post_deps, - uint32_t count, struct dma_fence *fence) -{ - uint32_t i; - - for (i = 0; post_deps && i < count; ++i) { - if (post_deps[i].chain) { - drm_syncobj_add_point(post_deps[i].syncobj, - post_deps[i].chain, - fence, post_deps[i].point); - post_deps[i].chain = NULL; - } else { - drm_syncobj_replace_fence(post_deps[i].syncobj, - fence); - } - } -} - int msm_ioctl_gem_submit(struct drm_device *dev, void *data, struct drm_file *file) { @@ -666,7 +500,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, struct msm_gpu *gpu = priv->gpu; struct msm_gpu_submitqueue *queue; struct msm_ringbuffer *ring; - struct msm_submit_post_dep *post_deps = NULL; + struct msm_syncobj_post_dep *post_deps = NULL; struct drm_syncobj **syncobjs_to_reset = NULL; struct sync_file *sync_file = NULL; int out_fence_fd = -1; @@ -743,10 +577,10 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, } if (args->flags & MSM_SUBMIT_SYNCOBJ_IN) { - syncobjs_to_reset = msm_parse_deps(submit, file, - args->in_syncobjs, - args->nr_in_syncobjs, - args->syncobj_stride); + syncobjs_to_reset = msm_syncobj_parse_deps(dev, &submit->base, + file, args->in_syncobjs, + args->nr_in_syncobjs, + args->syncobj_stride); if (IS_ERR(syncobjs_to_reset)) { ret = PTR_ERR(syncobjs_to_reset); goto out_unlock; @@ -754,10 +588,10 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, } if (args->flags & MSM_SUBMIT_SYNCOBJ_OUT) { - post_deps = msm_parse_post_deps(dev, file, - args->out_syncobjs, - args->nr_out_syncobjs, - args->syncobj_stride); + post_deps = msm_syncobj_parse_post_deps(dev, file, + args->out_syncobjs, + args->nr_out_syncobjs, + args->syncobj_stride); if (IS_ERR(post_deps)) { ret = PTR_ERR(post_deps); goto out_unlock; @@ -900,10 +734,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, args->fence = submit->fence_id; queue->last_fence = submit->fence_id; - msm_reset_syncobjs(syncobjs_to_reset, args->nr_in_syncobjs); - msm_process_post_deps(post_deps, args->nr_out_syncobjs, - submit->user_fence); - + msm_syncobj_reset(syncobjs_to_reset, args->nr_in_syncobjs); + msm_syncobj_process_post_deps(post_deps, args->nr_out_syncobjs, submit->user_fence); 
out: submit_cleanup(submit, !!ret); diff --git a/drivers/gpu/drm/msm/msm_syncobj.c b/drivers/gpu/drm/msm/msm_syncobj.c new file mode 100644 index 000000000000..4baa9f522c54 --- /dev/null +++ b/drivers/gpu/drm/msm/msm_syncobj.c @@ -0,0 +1,172 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2020 Google, Inc */ + +#include "drm/drm_drv.h" + +#include "msm_drv.h" +#include "msm_syncobj.h" + +struct drm_syncobj ** +msm_syncobj_parse_deps(struct drm_device *dev, + struct drm_sched_job *job, + struct drm_file *file, + uint64_t in_syncobjs_addr, + uint32_t nr_in_syncobjs, + size_t syncobj_stride) +{ + struct drm_syncobj **syncobjs = NULL; + struct drm_msm_syncobj syncobj_desc = {0}; + int ret = 0; + uint32_t i, j; + + syncobjs = kcalloc(nr_in_syncobjs, sizeof(*syncobjs), + GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY); + if (!syncobjs) + return ERR_PTR(-ENOMEM); + + for (i = 0; i < nr_in_syncobjs; ++i) { + uint64_t address = in_syncobjs_addr + i * syncobj_stride; + + if (copy_from_user(&syncobj_desc, + u64_to_user_ptr(address), + min(syncobj_stride, sizeof(syncobj_desc)))) { + ret = -EFAULT; + break; + } + + if (syncobj_desc.point && + !drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE)) { + ret = UERR(EOPNOTSUPP, dev, "syncobj timeline unsupported"); + break; + } + + if (syncobj_desc.flags & ~MSM_SYNCOBJ_FLAGS) { + ret = UERR(EINVAL, dev, "invalid syncobj flags: %x", syncobj_desc.flags); + break; + } + + ret = drm_sched_job_add_syncobj_dependency(job, file, + syncobj_desc.handle, + syncobj_desc.point); + if (ret) + break; + + if (syncobj_desc.flags & MSM_SYNCOBJ_RESET) { + syncobjs[i] = drm_syncobj_find(file, syncobj_desc.handle); + if (!syncobjs[i]) { + ret = UERR(EINVAL, dev, "invalid syncobj handle: %u", i); + break; + } + } + } + + if (ret) { + for (j = 0; j <= i; ++j) { + if (syncobjs[j]) + drm_syncobj_put(syncobjs[j]); + } + kfree(syncobjs); + return ERR_PTR(ret); + } + return syncobjs; +} + +void +msm_syncobj_reset(struct drm_syncobj **syncobjs, uint32_t nr_syncobjs) +{ + uint32_t i; + + for (i = 0; syncobjs && i < nr_syncobjs; ++i) { + if (syncobjs[i]) + drm_syncobj_replace_fence(syncobjs[i], NULL); + } +} + +struct msm_syncobj_post_dep * +msm_syncobj_parse_post_deps(struct drm_device *dev, + struct drm_file *file, + uint64_t syncobjs_addr, + uint32_t nr_syncobjs, + size_t syncobj_stride) +{ + struct msm_syncobj_post_dep *post_deps; + struct drm_msm_syncobj syncobj_desc = {0}; + int ret = 0; + uint32_t i, j; + + post_deps = kcalloc(nr_syncobjs, sizeof(*post_deps), + GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY); + if (!post_deps) + return ERR_PTR(-ENOMEM); + + for (i = 0; i < nr_syncobjs; ++i) { + uint64_t address = syncobjs_addr + i * syncobj_stride; + + if (copy_from_user(&syncobj_desc, + u64_to_user_ptr(address), + min(syncobj_stride, sizeof(syncobj_desc)))) { + ret = -EFAULT; + break; + } + + post_deps[i].point = syncobj_desc.point; + + if (syncobj_desc.flags) { + ret = UERR(EINVAL, dev, "invalid syncobj flags"); + break; + } + + if (syncobj_desc.point) { + if (!drm_core_check_feature(dev, + DRIVER_SYNCOBJ_TIMELINE)) { + ret = UERR(EOPNOTSUPP, dev, "syncobj timeline unsupported"); + break; + } + + post_deps[i].chain = dma_fence_chain_alloc(); + if (!post_deps[i].chain) { + ret = -ENOMEM; + break; + } + } + + post_deps[i].syncobj = + drm_syncobj_find(file, syncobj_desc.handle); + if (!post_deps[i].syncobj) { + ret = UERR(EINVAL, dev, "invalid syncobj handle"); + break; + } + } + + if (ret) { + for (j = 0; j <= i; ++j) { + dma_fence_chain_free(post_deps[j].chain); + if 
(post_deps[j].syncobj) + drm_syncobj_put(post_deps[j].syncobj); + } + + kfree(post_deps); + return ERR_PTR(ret); + } + + return post_deps; +} + +void +msm_syncobj_process_post_deps(struct msm_syncobj_post_dep *post_deps, + uint32_t count, struct dma_fence *fence) +{ + uint32_t i; + + for (i = 0; post_deps && i < count; ++i) { + if (post_deps[i].chain) { + drm_syncobj_add_point(post_deps[i].syncobj, + post_deps[i].chain, + fence, post_deps[i].point); + post_deps[i].chain = NULL; + } else { + drm_syncobj_replace_fence(post_deps[i].syncobj, + fence); + } + } +} diff --git a/drivers/gpu/drm/msm/msm_syncobj.h b/drivers/gpu/drm/msm/msm_syncobj.h new file mode 100644 index 000000000000..bcaa15d01da0 --- /dev/null +++ b/drivers/gpu/drm/msm/msm_syncobj.h @@ -0,0 +1,37 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2020 Google, Inc */ + +#ifndef __MSM_GEM_SYNCOBJ_H__ +#define __MSM_GEM_SYNCOBJ_H__ + +#include "drm/drm_device.h" +#include "drm/drm_syncobj.h" +#include "drm/gpu_scheduler.h" + +struct msm_syncobj_post_dep { + struct drm_syncobj *syncobj; + uint64_t point; + struct dma_fence_chain *chain; +}; + +struct drm_syncobj ** +msm_syncobj_parse_deps(struct drm_device *dev, + struct drm_sched_job *job, + struct drm_file *file, + uint64_t in_syncobjs_addr, + uint32_t nr_in_syncobjs, + size_t syncobj_stride); + +void msm_syncobj_reset(struct drm_syncobj **syncobjs, uint32_t nr_syncobjs); + +struct msm_syncobj_post_dep * +msm_syncobj_parse_post_deps(struct drm_device *dev, + struct drm_file *file, + uint64_t syncobjs_addr, + uint32_t nr_syncobjs, + size_t syncobj_stride); + +void msm_syncobj_process_post_deps(struct msm_syncobj_post_dep *post_deps, + uint32_t count, struct dma_fence *fence); + +#endif /* __MSM_GEM_SYNCOBJ_H__ */ diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 1bccc347945c..2c2fc4b284d0 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -220,6 +220,17 @@ struct drm_msm_gem_cpu_fini { * Cmdstream Submission: */ +#define MSM_SYNCOBJ_RESET 0x00000001 /* Reset syncobj after wait. */ +#define MSM_SYNCOBJ_FLAGS ( \ + MSM_SYNCOBJ_RESET | \ + 0) + +struct drm_msm_syncobj { + __u32 handle; /* in, syncobj handle. */ + __u32 flags; /* in, from MSM_SUBMIT_SYNCOBJ_FLAGS */ + __u64 point; /* in, timepoint for timeline syncobjs. */ +}; + /* The value written into the cmdstream is logically: * * ((relocbuf->gpuaddr + reloc_offset) << shift) | or @@ -309,17 +320,6 @@ struct drm_msm_gem_submit_bo { MSM_SUBMIT_FENCE_SN_IN | \ 0) -#define MSM_SUBMIT_SYNCOBJ_RESET 0x00000001 /* Reset syncobj after wait. */ -#define MSM_SUBMIT_SYNCOBJ_FLAGS ( \ - MSM_SUBMIT_SYNCOBJ_RESET | \ - 0) - -struct drm_msm_gem_submit_syncobj { - __u32 handle; /* in, syncobj handle. */ - __u32 flags; /* in, from MSM_SUBMIT_SYNCOBJ_FLAGS */ - __u64 point; /* in, timepoint for timeline syncobjs. */ -}; - /* Each cmdstream submit consists of a table of buffers involved, and * one or more cmdstream buffers. This allows for conditional execution * (context-restore), and IB buffers needed for per tile/bin draw cmds. 
@@ -333,8 +333,8 @@ struct drm_msm_gem_submit {
 	__u64 cmds;          /* in, ptr to array of submit_cmd's */
 	__s32 fence_fd;      /* in/out fence fd (see MSM_SUBMIT_FENCE_FD_IN/OUT) */
 	__u32 queueid;       /* in, submitqueue id */
-	__u64 in_syncobjs;   /* in, ptr to array of drm_msm_gem_submit_syncobj */
-	__u64 out_syncobjs;  /* in, ptr to array of drm_msm_gem_submit_syncobj */
+	__u64 in_syncobjs;   /* in, ptr to array of drm_msm_syncobj */
+	__u64 out_syncobjs;  /* in, ptr to array of drm_msm_syncobj */
 	__u32 nr_in_syncobjs;  /* in, number of entries in in_syncobj */
 	__u32 nr_out_syncobjs; /* in, number of entries in out_syncobj. */
 	__u32 syncobj_stride;  /* in, stride of syncobj arrays. */
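Taken together, the extracted helpers give ioctl code a four-step pattern; the following is a condensed sketch of how msm_ioctl_gem_submit() uses them after this patch (error unwinding omitted, names as in the diff above):

struct drm_syncobj **syncobjs_to_reset = NULL;
struct msm_syncobj_post_dep *post_deps = NULL;

/* 1) add wait deps to the scheduler job, collecting any RESET requests */
if (args->flags & MSM_SUBMIT_SYNCOBJ_IN)
	syncobjs_to_reset = msm_syncobj_parse_deps(dev, &submit->base, file,
						   args->in_syncobjs,
						   args->nr_in_syncobjs,
						   args->syncobj_stride);

/* 2) look up the syncobjs to signal once the job's fence exists */
if (args->flags & MSM_SUBMIT_SYNCOBJ_OUT)
	post_deps = msm_syncobj_parse_post_deps(dev, file, args->out_syncobjs,
						args->nr_out_syncobjs,
						args->syncobj_stride);

/* ... queue the submit, producing submit->user_fence ... */

/* 3) clear any input syncobjs that asked for MSM_SYNCOBJ_RESET */
msm_syncobj_reset(syncobjs_to_reset, args->nr_in_syncobjs);

/* 4) attach the new fence (or a timeline point) to the output syncobjs */
msm_syncobj_process_post_deps(post_deps, args->nr_out_syncobjs,
			      submit->user_fence);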
From patchwork Thu Jun 5 18:29:13 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894499
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v6 28/40] drm/msm: Use DMA_RESV_USAGE_BOOKKEEP/KERNEL
Date: Thu, 5 Jun 2025 11:29:13 -0700
Message-ID: <20250605183111.163594-29-robin.clark@oss.qualcomm.com>
From: Rob Clark

Any place we wait for a BO to become idle, we should use BOOKKEEP usage, to ensure that it waits for _any_ activity.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c          | 6 +++---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index ec349719b49a..106fec06c18d 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -97,8 +97,8 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 	 * TODO we might need to kick this to a queue to avoid blocking
 	 * in CLOSE ioctl
 	 */
-	dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false,
-			      msecs_to_jiffies(1000));
+	dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_BOOKKEEP, false,
+			      MAX_SCHEDULE_TIMEOUT);

 	msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm);
 	put_iova_spaces(obj, ctx->vm, true);
@@ -903,7 +903,7 @@ bool msm_gem_active(struct drm_gem_object *obj)
 	if (to_msm_bo(obj)->pin_count)
 		return true;

-	return !dma_resv_test_signaled(obj->resv, dma_resv_usage_rw(true));
+	return !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_BOOKKEEP);
 }

 int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 5faf6227584a..1039e3c0a47b 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -139,7 +139,7 @@ evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 static bool
 wait_for_idle(struct drm_gem_object *obj)
 {
-	enum dma_resv_usage usage = dma_resv_usage_rw(true);
+	enum dma_resv_usage usage = DMA_RESV_USAGE_BOOKKEEP;
 	return dma_resv_wait_timeout(obj->resv, usage, false, 10) > 0;
 }
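The reasoning: dma_resv usages are ordered (KERNEL < WRITE < READ < BOOKKEEP), and a query at a given usage sees all fences at or below it. dma_resv_usage_rw() only reaches the implicit-sync READ/WRITE fences, while DMA_RESV_USAGE_BOOKKEEP is the superset that also covers fences which opted out of implicit sync, such as the VM_BIND fences added later in this series. A minimal sketch of the resulting idle test:

#include <linux/dma-resv.h>
#include <drm/drm_gem.h>

/* True only once *all* fences on the BO have signaled, including
 * BOOKKEEP-usage fences that implicit sync deliberately ignores.
 */
static bool bo_fully_idle(struct drm_gem_object *obj)
{
	return dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_BOOKKEEP);
}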
From patchwork Thu Jun 5 18:29:14 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894498
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH v6 29/40] drm/msm: Add VM_BIND submitqueue
Date: Thu, 5 Jun 2025 11:29:14 -0700
Message-ID: <20250605183111.163594-30-robin.clark@oss.qualcomm.com>

From: Rob Clark

This submitqueue type isn't tied to a hw ringbuffer, but
instead executes on the CPU for performing async VM_BIND ops. Signed-off-by: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.h | 12 +++++ drivers/gpu/drm/msm/msm_gem_submit.c | 60 +++++++++++++++++++--- drivers/gpu/drm/msm/msm_gem_vma.c | 71 +++++++++++++++++++++++++++ drivers/gpu/drm/msm/msm_gpu.h | 3 ++ drivers/gpu/drm/msm/msm_submitqueue.c | 67 +++++++++++++++++++------ include/uapi/drm/msm_drm.h | 9 +++- 6 files changed, 197 insertions(+), 25 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index b44a4f7313c9..945a235d73cf 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -53,6 +53,13 @@ struct msm_gem_vm { /** @base: Inherit from drm_gpuvm. */ struct drm_gpuvm base; + /** + * @sched: Scheduler used for asynchronous VM_BIND request. + * + * Unused for kernel managed VMs (where all operations are synchronous). + */ + struct drm_gpu_scheduler sched; + /** * @mm: Memory management for kernel managed VA allocations * @@ -71,6 +78,9 @@ struct msm_gem_vm { */ struct pid *pid; + /** @last_fence: Fence for last pending work scheduled on the VM */ + struct dma_fence *last_fence; + /** @faults: the number of GPU hangs associated with this address space */ int faults; @@ -100,6 +110,8 @@ struct drm_gpuvm * msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed); +void msm_gem_vm_close(struct drm_gpuvm *gpuvm); + struct msm_fence_context; #define MSM_VMA_DUMP (DRM_GPUVA_USERBITS << 0) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index e3c76971ae5f..504f27b0232b 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -4,6 +4,7 @@ * Author: Rob Clark */ +#include #include #include #include @@ -258,30 +259,43 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit, static int submit_lock_objects(struct msm_gem_submit *submit) { unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT; + struct drm_exec *exec = &submit->exec; int ret; -// TODO need to add vm_bind path which locks vm resv + external objs drm_exec_init(&submit->exec, flags, submit->nr_bos); + if (msm_context_is_vmbind(submit->queue->ctx)) { + drm_exec_until_all_locked (&submit->exec) { + ret = drm_gpuvm_prepare_vm(submit->vm, exec, 1); + drm_exec_retry_on_contention(exec); + if (ret) + return ret; + + ret = drm_gpuvm_prepare_objects(submit->vm, exec, 1); + drm_exec_retry_on_contention(exec); + if (ret) + return ret; + } + + return 0; + } + drm_exec_until_all_locked (&submit->exec) { ret = drm_exec_lock_obj(&submit->exec, drm_gpuvm_resv_obj(submit->vm)); drm_exec_retry_on_contention(&submit->exec); if (ret) - goto error; + return ret; for (unsigned i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; ret = drm_exec_prepare_obj(&submit->exec, obj, 1); drm_exec_retry_on_contention(&submit->exec); if (ret) - goto error; + return ret; } } return 0; - -error: - return ret; } static int submit_fence_sync(struct msm_gem_submit *submit) @@ -367,9 +381,18 @@ static void submit_unpin_objects(struct msm_gem_submit *submit) static void submit_attach_object_fences(struct msm_gem_submit *submit) { - int i; + struct msm_gem_vm *vm = to_msm_vm(submit->vm); + struct dma_fence *last_fence; + + if (msm_context_is_vmbind(submit->queue->ctx)) { + drm_gpuvm_resv_add_fence(submit->vm, &submit->exec, + submit->user_fence, + DMA_RESV_USAGE_BOOKKEEP, + 
DMA_RESV_USAGE_BOOKKEEP); + return; + } - for (i = 0; i < submit->nr_bos; i++) { + for (unsigned i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; if (submit->bos[i].flags & MSM_SUBMIT_BO_WRITE) @@ -379,6 +402,10 @@ static void submit_attach_object_fences(struct msm_gem_submit *submit) dma_resv_add_fence(obj->resv, submit->user_fence, DMA_RESV_USAGE_READ); } + + last_fence = vm->last_fence; + vm->last_fence = dma_fence_unwrap_merge(submit->user_fence, last_fence); + dma_fence_put(last_fence); } static int submit_bo(struct msm_gem_submit *submit, uint32_t idx, @@ -535,6 +562,11 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (!queue) return -ENOENT; + if (queue->flags & MSM_SUBMITQUEUE_VM_BIND) { + ret = UERR(EINVAL, dev, "Invalid queue type"); + goto out_post_unlock; + } + ring = gpu->rb[queue->ring_nr]; if (args->flags & MSM_SUBMIT_FENCE_FD_OUT) { @@ -724,6 +756,18 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, submit_attach_object_fences(submit); + if (msm_context_is_vmbind(ctx)) { + /* + * If we are not using VM_BIND, submit_pin_vmas() will validate + * just the BOs attached to the submit. In that case we don't + * need to validate the _entire_ vm, because userspace tracked + * what BOs are associated with the submit. + */ + ret = drm_gpuvm_validate(submit->vm, &submit->exec); + if (ret) + goto out; + } + /* The scheduler owns a ref now: */ msm_gem_submit_get(submit); diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index e16a8cafd8be..cf37abb98235 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -16,6 +16,7 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) drm_mm_takedown(&vm->mm); if (vm->mmu) vm->mmu->funcs->destroy(vm->mmu); + dma_fence_put(vm->last_fence); put_pid(vm->pid); kfree(vm); } @@ -154,6 +155,9 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops = { .vm_free = msm_gem_vm_free, }; +static const struct drm_sched_backend_ops msm_vm_bind_ops = { +}; + /** * msm_gem_vm_create() - Create and initialize a &msm_gem_vm * @drm: the drm device @@ -195,6 +199,21 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, goto err_free_vm; } + if (!managed) { + struct drm_sched_init_args args = { + .ops = &msm_vm_bind_ops, + .num_rqs = 1, + .credit_limit = 1, + .timeout = MAX_SCHEDULE_TIMEOUT, + .name = "msm-vm-bind", + .dev = drm->dev, + }; + + ret = drm_sched_init(&vm->sched, &args); + if (ret) + goto err_free_dummy; + } + drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem, va_start, va_size, 0, 0, &msm_gpuvm_ops); drm_gem_object_put(dummy_gem); @@ -206,8 +225,60 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, return &vm->base; +err_free_dummy: + drm_gem_object_put(dummy_gem); + err_free_vm: kfree(vm); return ERR_PTR(ret); } + +/** + * msm_gem_vm_close() - Close a VM + * @gpuvm: The VM to close + * + * Called when the drm device file is closed, to tear down VM related resources + * (which will drop refcounts to GEM objects that were still mapped into the + * VM at the time). + */ +void +msm_gem_vm_close(struct drm_gpuvm *gpuvm) +{ + struct msm_gem_vm *vm = to_msm_vm(gpuvm); + struct drm_gpuva *vma, *tmp; + + /* + * For kernel managed VMs, the VMAs are torn down when the handle is + * closed, so nothing more to do. 
+ */ + if (vm->managed) + return; + + if (vm->last_fence) + dma_fence_wait(vm->last_fence, false); + + /* Kill the scheduler now, so we aren't racing with it for cleanup: */ + drm_sched_stop(&vm->sched, NULL); + drm_sched_fini(&vm->sched); + + /* Tear down any remaining mappings: */ + dma_resv_lock(drm_gpuvm_resv(gpuvm), NULL); + drm_gpuvm_for_each_va_safe (vma, tmp, gpuvm) { + struct drm_gem_object *obj = vma->gem.obj; + + if (obj && obj->resv != drm_gpuvm_resv(gpuvm)) { + drm_gem_object_get(obj); + msm_gem_lock(obj); + } + + msm_gem_vma_unmap(vma); + msm_gem_vma_close(vma); + + if (obj && obj->resv != drm_gpuvm_resv(gpuvm)) { + msm_gem_unlock(obj); + drm_gem_object_put(obj); + } + } + dma_resv_unlock(drm_gpuvm_resv(gpuvm)); +} diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 448ebf721bd8..9cbf155ff222 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -570,6 +570,9 @@ struct msm_gpu_submitqueue { struct mutex lock; struct kref ref; struct drm_sched_entity *entity; + + /** @_vm_bind_entity: used for @entity pointer for VM_BIND queues */ + struct drm_sched_entity _vm_bind_entity[0]; }; struct msm_gpu_state_bo { diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c index 8ced49c7557b..8617a82cd6b3 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -72,6 +72,9 @@ void msm_submitqueue_destroy(struct kref *kref) idr_destroy(&queue->fence_idr); + if (queue->entity == &queue->_vm_bind_entity[0]) + drm_sched_entity_destroy(queue->entity); + msm_context_put(queue->ctx); kfree(queue); @@ -102,7 +105,7 @@ struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx, void msm_submitqueue_close(struct msm_context *ctx) { - struct msm_gpu_submitqueue *entry, *tmp; + struct msm_gpu_submitqueue *queue, *tmp; if (!ctx) return; @@ -111,10 +114,17 @@ void msm_submitqueue_close(struct msm_context *ctx) * No lock needed in close and there won't * be any more user ioctls coming our way */ - list_for_each_entry_safe(entry, tmp, &ctx->submitqueues, node) { - list_del(&entry->node); - msm_submitqueue_put(entry); + list_for_each_entry_safe(queue, tmp, &ctx->submitqueues, node) { + if (queue->entity == &queue->_vm_bind_entity[0]) + drm_sched_entity_flush(queue->entity, MAX_WAIT_SCHED_ENTITY_Q_EMPTY); + list_del(&queue->node); + msm_submitqueue_put(queue); } + + if (!ctx->vm) + return; + + msm_gem_vm_close(ctx->vm); } static struct drm_sched_entity * @@ -160,8 +170,6 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx, struct msm_drm_private *priv = drm->dev_private; struct msm_gpu_submitqueue *queue; enum drm_sched_priority sched_prio; - extern int enable_preemption; - bool preemption_supported; unsigned ring_nr; int ret; @@ -171,26 +179,53 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx, if (!priv->gpu) return -ENODEV; - preemption_supported = priv->gpu->nr_rings == 1 && enable_preemption != 0; + if (flags & MSM_SUBMITQUEUE_VM_BIND) { + unsigned sz; - if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && preemption_supported) - return -EINVAL; + /* Not allowed for kernel managed VMs (ie. 
kernel allocs VA) */ + if (!msm_context_is_vmbind(ctx)) + return -EINVAL; - ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio); - if (ret) - return ret; + if (prio) + return -EINVAL; + + sz = struct_size(queue, _vm_bind_entity, 1); + queue = kzalloc(sz, GFP_KERNEL); + } else { + extern int enable_preemption; + bool preemption_supported = + priv->gpu->nr_rings == 1 && enable_preemption != 0; + + if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && preemption_supported) + return -EINVAL; - queue = kzalloc(sizeof(*queue), GFP_KERNEL); + ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio); + if (ret) + return ret; + + queue = kzalloc(sizeof(*queue), GFP_KERNEL); + } if (!queue) return -ENOMEM; kref_init(&queue->ref); queue->flags = flags; - queue->ring_nr = ring_nr; - queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr], - ring_nr, sched_prio); + if (flags & MSM_SUBMITQUEUE_VM_BIND) { + struct drm_gpu_scheduler *sched = &to_msm_vm(msm_context_vm(drm, ctx))->sched; + + queue->entity = &queue->_vm_bind_entity[0]; + + drm_sched_entity_init(queue->entity, DRM_SCHED_PRIORITY_KERNEL, + &sched, 1, NULL); + } else { + queue->ring_nr = ring_nr; + + queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr], + ring_nr, sched_prio); + } + if (IS_ERR(queue->entity)) { ret = PTR_ERR(queue->entity); kfree(queue); diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 2c2fc4b284d0..6d6cd1219926 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -385,12 +385,19 @@ struct drm_msm_gem_madvise { /* * Draw queues allow the user to set specific submission parameter. Command * submissions specify a specific submitqueue to use. ID 0 is reserved for - * backwards compatibility as a "default" submitqueue + * backwards compatibility as a "default" submitqueue. + * + * Because VM_BIND async updates happen on the CPU, they must run on a + * virtual queue created with the flag MSM_SUBMITQUEUE_VM_BIND. If we had + * a way to do pgtable updates on the GPU, we could drop this restriction. 
 */
 #define MSM_SUBMITQUEUE_ALLOW_PREEMPT	0x00000001
+#define MSM_SUBMITQUEUE_VM_BIND 0x00000002 /* virtual queue for VM_BIND ops */
+
 #define MSM_SUBMITQUEUE_FLAGS ( \
 	MSM_SUBMITQUEUE_ALLOW_PREEMPT | \
+	MSM_SUBMITQUEUE_VM_BIND | \
 	0)
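From userspace, a VM_BIND queue is created like any other submitqueue, just with the new flag set. A hedged sketch (assumes libdrm and an open DRM fd whose context uses a user-managed VM; per the kernel side above, a non-zero priority is rejected for these queues):

#include <stdint.h>
#include <xf86drm.h>
#include "drm/msm_drm.h"

static int create_vm_bind_queue(int fd, uint32_t *queue_id)
{
	struct drm_msm_submitqueue req = {
		.flags = MSM_SUBMITQUEUE_VM_BIND,
		.prio = 0,	/* non-zero prio is rejected for VM_BIND queues */
	};
	int ret = drmIoctl(fd, DRM_IOCTL_MSM_SUBMITQUEUE_NEW, &req);

	if (ret)
		return ret;

	*queue_id = req.id;	/* used as queueid for async VM_BIND submits */
	return 0;
}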
From patchwork Thu Jun 5 18:29:15 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894227
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v6 30/40] drm/msm: Support IO_PGTABLE_QUIRK_NO_WARN_ON
Date: Thu, 5 Jun 2025 11:29:15 -0700
Message-ID: <20250605183111.163594-31-robin.clark@oss.qualcomm.com>
From: Rob Clark

With user-managed VMs and multiple queues, it is in theory possible to trigger map/unmap errors. These will (in a later patch) mark the VM as unusable. But we want to tell the io-pgtable helpers not to spam the log. In addition, in the unmap path, we don't want to bail early from the unmap, to ensure we don't leave any dangling pages mapped.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +- drivers/gpu/drm/msm/msm_iommu.c | 23 ++++++++++++++++++----- drivers/gpu/drm/msm/msm_mmu.h | 2 +- 3 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index f0e37733c65d..83fba02ca1df 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -2267,7 +2267,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed) { struct msm_mmu *mmu; - mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu); + mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu, kernel_managed); if (IS_ERR(mmu)) return ERR_CAST(mmu); diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index 756bd55ee94f..1c068592f9e9 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -94,15 +94,24 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova, { struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); struct io_pgtable_ops *ops = pagetable->pgtbl_ops; + int ret = 0; while (size) { - size_t unmapped, pgsize, count; + size_t pgsize, count; + ssize_t unmapped; pgsize = calc_pgsize(pagetable, iova, iova, size, &count); unmapped = ops->unmap_pages(ops, iova, pgsize, count, NULL); - if (!unmapped) - break; + if (unmapped <= 0) { + ret = -EINVAL; + /* + * Continue attempting to unmap the remainder of the + * range, so we don't end up with some dangling + * mapped pages + */ + unmapped = PAGE_SIZE; + } iova += unmapped; size -= unmapped; @@ -110,7 +119,7 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova, iommu_flush_iotlb_all(to_msm_iommu(pagetable->parent)->domain); - return (size == 0) ?
0 : -EINVAL; + return ret; } static int msm_iommu_pagetable_map_prr(struct msm_mmu *mmu, u64 iova, size_t len, int prot) @@ -324,7 +333,7 @@ static const struct iommu_flush_ops tlb_ops = { static int msm_gpu_fault_handler(struct iommu_domain *domain, struct device *dev, unsigned long iova, int flags, void *arg); -struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent) +struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed) { struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(parent->dev); struct msm_iommu *iommu = to_msm_iommu(parent); @@ -358,6 +367,10 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent) ttbr0_cfg.quirks &= ~IO_PGTABLE_QUIRK_ARM_TTBR1; ttbr0_cfg.tlb = &tlb_ops; + if (!kernel_managed) { + ttbr0_cfg.quirks |= IO_PGTABLE_QUIRK_NO_WARN; + } + pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1, &ttbr0_cfg, pagetable); diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h index c874852b7331..c70c71fb1a4a 100644 --- a/drivers/gpu/drm/msm/msm_mmu.h +++ b/drivers/gpu/drm/msm/msm_mmu.h @@ -52,7 +52,7 @@ static inline void msm_mmu_set_fault_handler(struct msm_mmu *mmu, void *arg, mmu->handler = handler; } -struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent); +struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed); int msm_iommu_pagetable_params(struct msm_mmu *mmu, phys_addr_t *ttbr, int *asid);
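The effect of the new kernel_managed parameter is easiest to see at the call sites; a minimal sketch using the functions from this patch (not the driver's literal code):

	/* Sketch: kernel-managed VMs keep the io-pgtable WARN behavior,
	 * user-managed (VM_BIND) VMs opt into IO_PGTABLE_QUIRK_NO_WARN so
	 * that unprivileged userspace cannot spam dmesg by provoking
	 * map/unmap errors. */
	static struct msm_mmu *create_pt_for(struct msm_gpu *gpu, bool vm_bind)
	{
		/* vm_bind == true means user-managed, so quirk enabled */
		return msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu,
						  !vm_bind /* kernel_managed */);
	}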
From patchwork Thu Jun 5 18:29:16 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894497
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 31/40] drm/msm: Support pgtable preallocation
Date: Thu, 5 Jun 2025 11:29:16 -0700
Message-ID: <20250605183111.163594-32-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

Introduce a mechanism to count the worst-case number of pages required by a VM_BIND op.

Note that previously we would have had to somehow account for allocations in unmap, when splitting a block. This behavior was removed in commit 33729a5fc0ca ("iommu/io-pgtable-arm: Remove split on unmap behavior").

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h | 1 + drivers/gpu/drm/msm/msm_iommu.c | 191 +++++++++++++++++++++++++++++++- drivers/gpu/drm/msm/msm_mmu.h | 34 ++++++ 3 files changed, 225 insertions(+), 1 deletion(-)
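As a standalone illustration of the worst-case counting this patch implements: each translation level contributes one table page per 512 GiB, 1 GiB and 2 MiB region the range touches, so for example a 2 MiB bind that crosses a 2 MiB boundary needs 1 (L1) + 1 (L2) + 2 (L3) = 4 table pages. A userspace-compilable sketch of the same arithmetic (the ALIGN helpers here assume power-of-two sizes; the in-kernel version is msm_iommu_pagetable_prealloc_count() below):

	#include <stdint.h>

	#define ALIGN_UP(x, a)   (((x) + ((a) - 1)) & ~((a) - 1))
	#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

	/* Worst-case number of L1/L2/L3 table pages needed to map
	 * [iova, iova+len): one table page per 512GiB/1GiB/2MiB region
	 * the range touches. */
	static uint64_t worst_case_pt_pages(uint64_t iova, uint64_t len)
	{
		static const unsigned shifts[] = { 39, 30, 21 };
		uint64_t end = iova + len, count = 0;

		for (unsigned i = 0; i < 3; i++) {
			uint64_t sz = UINT64_C(1) << shifts[i];

			count += (ALIGN_UP(end, sz) - ALIGN_DOWN(iova, sz)) >> shifts[i];
		}
		return count;
	}
	/* e.g. worst_case_pt_pages(0x10100000, 2 << 20) == 4:
	 * the 2MiB range spans two 2MiB regions, one 1GiB region and
	 * one 512GiB region. */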
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 945a235d73cf..46c7ddbc2dce 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -7,6 +7,7 @@ #ifndef __MSM_GEM_H__ #define __MSM_GEM_H__ +#include "msm_mmu.h" #include #include #include "drm/drm_exec.h" diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index 1c068592f9e9..bfee3e0dcb23 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -6,6 +6,7 @@ #include #include +#include #include "msm_drv.h" #include "msm_mmu.h" @@ -14,6 +15,8 @@ struct msm_iommu { struct iommu_domain *domain; atomic_t pagetables; struct page *prr_page; + + struct kmem_cache *pt_cache; }; #define to_msm_iommu(x) container_of(x, struct msm_iommu, base) @@ -27,6 +30,9 @@ struct msm_iommu_pagetable { unsigned long pgsize_bitmap; /* Bitmap of page sizes in use */ phys_addr_t ttbr; u32 asid; + + /** @root_page_table: Stores the root page table pointer. */ + void *root_page_table; }; static struct msm_iommu_pagetable *to_pagetable(struct msm_mmu *mmu) { @@ -282,7 +288,145 @@ msm_iommu_pagetable_walk(struct msm_mmu *mmu, unsigned long iova, uint64_t ptes[ return 0; } +static void +msm_iommu_pagetable_prealloc_count(struct msm_mmu *mmu, struct msm_mmu_prealloc *p, + uint64_t iova, size_t len) +{ + u64 pt_count; + + /* + * L1, L2 and L3 page tables. + * + * We could optimize L3 allocation by iterating over the sgt and merging + * 2M contiguous blocks, but it's simpler to over-provision and return + * the pages if they're not used. + * + * The first level descriptor (v8 / v7-lpae page table format) encodes + * 30 bits of address. The second level encodes 29. For the 3rd it is + * 39. + * + * https://developer.arm.com/documentation/ddi0406/c/System-Level-Architecture/Virtual-Memory-System-Architecture--VMSA-/Long-descriptor-translation-table-format/Long-descriptor-translation-table-format-descriptors?lang=en#BEIHEFFB + */ + pt_count = ((ALIGN(iova + len, 1ull << 39) - ALIGN_DOWN(iova, 1ull << 39)) >> 39) + + ((ALIGN(iova + len, 1ull << 30) - ALIGN_DOWN(iova, 1ull << 30)) >> 30) + + ((ALIGN(iova + len, 1ull << 21) - ALIGN_DOWN(iova, 1ull << 21)) >> 21); + + p->count += pt_count; +} + +static struct kmem_cache * +get_pt_cache(struct msm_mmu *mmu) +{ + struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); + return to_msm_iommu(pagetable->parent)->pt_cache; +} + +static int +msm_iommu_pagetable_prealloc_allocate(struct msm_mmu *mmu, struct msm_mmu_prealloc *p) +{ + struct kmem_cache *pt_cache = get_pt_cache(mmu); + int ret; + + p->pages = kvmalloc_array(p->count, sizeof(p->pages), GFP_KERNEL); + if (!p->pages) + return -ENOMEM; + + ret = kmem_cache_alloc_bulk(pt_cache, GFP_KERNEL, p->count, p->pages); + if (ret != p->count) { + p->count = ret; + return -ENOMEM; + } + + return 0; +} + +static void +msm_iommu_pagetable_prealloc_cleanup(struct msm_mmu *mmu, struct msm_mmu_prealloc *p) +{ + struct kmem_cache *pt_cache = get_pt_cache(mmu); + uint32_t remaining_pt_count = p->count - p->ptr; + + kmem_cache_free_bulk(pt_cache, remaining_pt_count, &p->pages[p->ptr]); + kvfree(p->pages); +} + +/** + * msm_iommu_pagetable_alloc_pt() - Custom page table allocator + * @cookie: Cookie passed at page table allocation time. + * @size: Size of the page table. This size should be fixed, + * and determined at creation time based on the granule size. + * @gfp: GFP flags. + * + * We want a custom allocator so we can use a cache for page table + * allocations and amortize the cost of the over-reservation that's + * done to allow asynchronous VM operations. + * + * Return: non-NULL on success, NULL if the allocation failed for any + * reason. + */ +static void * +msm_iommu_pagetable_alloc_pt(void *cookie, size_t size, gfp_t gfp) +{ + struct msm_iommu_pagetable *pagetable = cookie; + struct msm_mmu_prealloc *p = pagetable->base.prealloc; + void *page; + + /* Allocation of the root page table happens during init. */ + if (unlikely(!pagetable->root_page_table)) { + struct page *p; + + p = alloc_pages_node(dev_to_node(pagetable->iommu_dev), + gfp | __GFP_ZERO, get_order(size)); + page = p ? page_address(p) : NULL; + pagetable->root_page_table = page; + return page; + } + + if (WARN_ON(!p) || WARN_ON(p->ptr >= p->count)) + return NULL; + + page = p->pages[p->ptr++]; + memset(page, 0, size); + + /* + * Page table entries don't use virtual addresses, which trips out + * kmemleak.
kmemleak_alloc_phys() might work, but physical addresses + * are mixed with other fields, and I fear kmemleak won't detect that + * either. + * + * Let's just ignore memory passed to the page-table driver for now. + */ + kmemleak_ignore(page); + + return page; +} + + +/** + * msm_iommu_pagetable_free_pt() - Custom page table free function + * @cookie: Cookie passed at page table allocation time. + * @data: Page table to free. + * @size: Size of the page table. This size should be fixed, + * and determined at creation time based on the granule size. + */ +static void +msm_iommu_pagetable_free_pt(void *cookie, void *data, size_t size) +{ + struct msm_iommu_pagetable *pagetable = cookie; + + if (unlikely(pagetable->root_page_table == data)) { + free_pages((unsigned long)data, get_order(size)); + pagetable->root_page_table = NULL; + return; + } + + kmem_cache_free(get_pt_cache(&pagetable->base), data); +} + static const struct msm_mmu_funcs pagetable_funcs = { + .prealloc_count = msm_iommu_pagetable_prealloc_count, + .prealloc_allocate = msm_iommu_pagetable_prealloc_allocate, + .prealloc_cleanup = msm_iommu_pagetable_prealloc_cleanup, .map = msm_iommu_pagetable_map, .unmap = msm_iommu_pagetable_unmap, .destroy = msm_iommu_pagetable_destroy, @@ -333,6 +477,17 @@ static const struct iommu_flush_ops tlb_ops = { static int msm_gpu_fault_handler(struct iommu_domain *domain, struct device *dev, unsigned long iova, int flags, void *arg); +static size_t get_tblsz(const struct io_pgtable_cfg *cfg) +{ + int pg_shift, bits_per_level; + + pg_shift = __ffs(cfg->pgsize_bitmap); + /* arm_lpae_iopte is u64: */ + bits_per_level = pg_shift - ilog2(sizeof(u64)); + + return sizeof(u64) << bits_per_level; +} + struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed) { struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(parent->dev); @@ -369,8 +524,34 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_m if (!kernel_managed) { ttbr0_cfg.quirks |= IO_PGTABLE_QUIRK_NO_WARN; + + /* + * With userspace managed VM (aka VM_BIND), we need to pre- + * allocate pages ahead of time for map/unmap operations, + * handing them to io-pgtable via custom alloc/free ops as + * needed: + */ + ttbr0_cfg.alloc = msm_iommu_pagetable_alloc_pt; + ttbr0_cfg.free = msm_iommu_pagetable_free_pt; + + /* + * Restrict to single page granules. Otherwise we may run + * into a situation where userspace wants to unmap/remap + * only a part of a larger block mapping, which is not + * possible without unmapping the entire block. Which in + * turn could cause faults if the GPU is accessing other + * parts of the block mapping. + * + * Note that prior to commit 33729a5fc0ca ("iommu/io-pgtable-arm: + * Remove split on unmap behavior") this was handled in + * io-pgtable-arm. But this apparently does not work + * correctly on SMMUv3.
+ */ + WARN_ON(!(ttbr0_cfg.pgsize_bitmap & PAGE_SIZE)); + ttbr0_cfg.pgsize_bitmap = PAGE_SIZE; } + pagetable->iommu_dev = ttbr1_cfg->iommu_dev; pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1, &ttbr0_cfg, pagetable); @@ -414,7 +595,6 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_m /* Needed later for TLB flush */ pagetable->parent = parent; pagetable->tlb = ttbr1_cfg->tlb; - pagetable->iommu_dev = ttbr1_cfg->iommu_dev; pagetable->pgsize_bitmap = ttbr0_cfg.pgsize_bitmap; pagetable->ttbr = ttbr0_cfg.arm_lpae_s1_cfg.ttbr; @@ -522,6 +702,7 @@ static void msm_iommu_destroy(struct msm_mmu *mmu) { struct msm_iommu *iommu = to_msm_iommu(mmu); iommu_domain_free(iommu->domain); + kmem_cache_destroy(iommu->pt_cache); kfree(iommu); } @@ -596,6 +777,14 @@ struct msm_mmu *msm_iommu_gpu_new(struct device *dev, struct msm_gpu *gpu, unsig return mmu; iommu = to_msm_iommu(mmu); + if (adreno_smmu && adreno_smmu->cookie) { + const struct io_pgtable_cfg *cfg = + adreno_smmu->get_ttbr1_cfg(adreno_smmu->cookie); + size_t tblsz = get_tblsz(cfg); + + iommu->pt_cache = + kmem_cache_create("msm-mmu-pt", tblsz, tblsz, 0, NULL); + } iommu_set_fault_handler(iommu->domain, msm_gpu_fault_handler, iommu); /* Enable stall on iommu fault: */ diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h index c70c71fb1a4a..76d7dcc1c977 100644 --- a/drivers/gpu/drm/msm/msm_mmu.h +++ b/drivers/gpu/drm/msm/msm_mmu.h @@ -9,8 +9,16 @@ #include +struct msm_mmu_prealloc; +struct msm_mmu; +struct msm_gpu; + struct msm_mmu_funcs { void (*detach)(struct msm_mmu *mmu); + void (*prealloc_count)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p, + uint64_t iova, size_t len); + int (*prealloc_allocate)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p); + void (*prealloc_cleanup)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p); int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt, size_t off, size_t len, int prot); int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len); @@ -25,12 +33,38 @@ enum msm_mmu_type { MSM_MMU_IOMMU_PAGETABLE, }; +/** + * struct msm_mmu_prealloc - Tracking for pre-allocated pages for MMU updates. + */ +struct msm_mmu_prealloc { + /** @count: Number of pages reserved. */ + uint32_t count; + /** @ptr: Index of first unused page in @pages */ + uint32_t ptr; + /** + * @pages: Array of pages preallocated for MMU table updates. + * + * After a VM operation, there might be free pages remaining in this + * array (since the amount allocated is a worst-case). These are + * returned to the pt_cache at mmu->prealloc_cleanup(). + */ + void **pages; +}; + struct msm_mmu { const struct msm_mmu_funcs *funcs; struct device *dev; int (*handler)(void *arg, unsigned long iova, int flags, void *data); void *arg; enum msm_mmu_type type; + + /** + * @prealloc: pre-allocated pages for pgtable + * + * Set while a VM_BIND job is running, serialized under + * msm_gem_vm::mmu_lock. 
+ */ + struct msm_mmu_prealloc *prealloc; }; static inline void msm_mmu_init(struct msm_mmu *mmu, struct device *dev,
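Tying the pieces together, the expected calling sequence for the new hooks is count, then bulk-allocate, then (after the pgtable updates have run) return whatever went unused. The VM_BIND job code that actually drives this lands later in the series; a simplified sketch under that assumption:

	/* Sketch: reserve worst-case page-table pages before a VM op. */
	static int job_prepare_pgtable_pages(struct msm_mmu *mmu,
					     struct msm_mmu_prealloc *p,
					     uint64_t iova, size_t len)
	{
		mmu->funcs->prealloc_count(mmu, p, iova, len); /* worst-case count */
		return mmu->funcs->prealloc_allocate(mmu, p);  /* bulk-alloc from pt_cache */
	}

	/* ...and once the map/unmap ops have executed, unused pages go
	 * back to the cache:
	 *	mmu->funcs->prealloc_cleanup(mmu, p);
	 */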
From patchwork Thu Jun 5 18:29:17 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894225
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 32/40] drm/msm: Split out map/unmap ops
Date: Thu, 5 Jun 2025 11:29:17 -0700
Message-ID: <20250605183111.163594-33-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

With async VM_BIND, the pgtable updates are deferred: a list of map/unmap ops is generated synchronously, but the actual pgtable changes run later. To support that, split out op handlers and change the existing non-VM_BIND paths to use them.

Note in particular that the vma itself may already be destroyed/freed by the time an UNMAP op runs (or even a MAP op, if there is a later queued UNMAP). For this reason, the op handlers cannot reference the vma pointer.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem_vma.c | 63 +++++++++++++++++++++++++++---- 1 file changed, 56 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index cf37abb98235..76b79c122182 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -8,6 +8,34 @@ #include "msm_gem.h" #include "msm_mmu.h" +#define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__) + +/** + * struct msm_vm_map_op - create new pgtable mapping + */ +struct msm_vm_map_op { + /** @iova: start address for mapping */ + uint64_t iova; + /** @range: size of the region to map */ + uint64_t range; + /** @offset: offset into @sgt to map */ + uint64_t offset; + /** @sgt: pages to map, or NULL for a PRR mapping */ + struct sg_table *sgt; + /** @prot: the mapping protection flags */ + int prot; +}; + +/** + * struct msm_vm_unmap_op - unmap a range of pages from pgtable + */ +struct msm_vm_unmap_op { + /** @iova: start address for unmap */ + uint64_t iova; + /** @range: size of region to unmap */ + uint64_t range; +}; + static void msm_gem_vm_free(struct drm_gpuvm *gpuvm) { @@ -21,18 +49,36 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) kfree(vm); } +static void +vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) +{ + vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); + + vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range); +} + +static int +vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) +{ + vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); + + return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset, + op->range, op->prot); +} + /* Actually unmap memory for the vma */ void msm_gem_vma_unmap(struct drm_gpuva *vma) { struct msm_gem_vma *msm_vma = to_msm_vma(vma); - struct msm_gem_vm *vm = to_msm_vm(vma->vm); - unsigned size = vma->va.range; /* Don't do anything if the memory isn't mapped */ if (!msm_vma->mapped) return; - vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size); + vm_unmap_op(to_msm_vm(vma->vm), &(struct msm_vm_unmap_op){ + .iova = vma->va.addr, + .range = vma->va.range, + }); msm_vma->mapped = false; } @@ -42,7 +88,6 @@ int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) { struct msm_gem_vma *msm_vma = to_msm_vma(vma); - struct msm_gem_vm *vm = to_msm_vm(vma->vm); int ret; if (GEM_WARN_ON(!vma->va.addr)) @@ -62,9 +107,13 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops.
*/ - ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, - vma->gem.offset, vma->va.range, - prot); + ret = vm_map_op(to_msm_vm(vma->vm), &(struct msm_vm_map_op){ + .iova = vma->va.addr, + .range = vma->va.range, + .offset = vma->gem.offset, + .sgt = sgt, + .prot = prot, + }); if (ret) { msm_vma->mapped = false; }
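The point of the split is that an op struct captures plain iova/range values and can therefore outlive the vma it was generated from. A minimal sketch of the record-then-replay pattern this enables, written as if it lived alongside the helpers above in msm_gem_vma.c (the real version, with GEM object references, arrives in the VM_BIND patch below; 'pending' list handling is simplified here):

	struct pending_unmap {
		struct list_head node;
		struct msm_vm_unmap_op op;
	};

	/* Record now, synchronously: */
	static int record_unmap(struct list_head *pending, u64 iova, u64 range)
	{
		struct pending_unmap *p = kmalloc(sizeof(*p), GFP_KERNEL);

		if (!p)
			return -ENOMEM;

		p->op = (struct msm_vm_unmap_op){ .iova = iova, .range = range };
		list_add_tail(&p->node, pending);

		return 0;
	}

	/* ...replay later, with no reference to any vma: */
	static void replay_unmaps(struct msm_gem_vm *vm, struct list_head *pending)
	{
		struct pending_unmap *p, *tmp;

		list_for_each_entry_safe(p, tmp, pending, node) {
			vm_unmap_op(vm, &p->op);
			list_del(&p->node);
			kfree(p);
		}
	}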
From patchwork Thu Jun 5 18:29:18 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894224
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Sumit Semwal, Christian König, linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v6 33/40] drm/msm: Add VM_BIND ioctl
Date: Thu, 5 Jun 2025 11:29:18 -0700
Message-ID: <20250605183111.163594-34-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
From: Rob Clark

Add a VM_BIND ioctl for binding/unbinding buffers into a VM. This is only supported if userspace has opted in to MSM_PARAM_EN_VM_BIND.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.c | 1 + drivers/gpu/drm/msm/msm_drv.h | 4 +- drivers/gpu/drm/msm/msm_gem.c | 40 +- drivers/gpu/drm/msm/msm_gem.h | 4 + drivers/gpu/drm/msm/msm_gem_submit.c | 22 +- drivers/gpu/drm/msm/msm_gem_vma.c | 1070 +++++++++++++++++++++++++- include/uapi/drm/msm_drm.h | 74 +- 7 files changed, 1182 insertions(+), 33 deletions(-)
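From the userspace side, the opt-in is a one-time set-param on the device fd before buffers are allocated. A sketch, assuming the new MSM_PARAM_EN_VM_BIND param is set through the existing DRM_IOCTL_MSM_SET_PARAM path (struct drm_msm_param is existing msm UAPI; the param itself is defined in this patch's msm_drm.h changes):

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <drm/msm_drm.h>

	/* Opt in to userspace-managed VA; after this, mappings are
	 * created explicitly with the VM_BIND ioctl on a
	 * MSM_SUBMITQUEUE_VM_BIND queue rather than implicitly by the
	 * kernel. */
	static int enable_vm_bind(int drm_fd)
	{
		struct drm_msm_param req = {
			.pipe = MSM_PIPE_3D0,
			.param = MSM_PARAM_EN_VM_BIND,
			.value = 1,
		};

		return ioctl(drm_fd, DRM_IOCTL_MSM_SET_PARAM, &req);
	}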
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 89cb7820064f..bdf775897de8 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -791,6 +791,7 @@ static const struct drm_ioctl_desc msm_ioctls[] = { DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_NEW, msm_ioctl_submitqueue_new, DRM_RENDER_ALLOW), DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_CLOSE, msm_ioctl_submitqueue_close, DRM_RENDER_ALLOW), DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_QUERY, msm_ioctl_submitqueue_query, DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(MSM_VM_BIND, msm_ioctl_vm_bind, DRM_RENDER_ALLOW), }; static void msm_show_fdinfo(struct drm_printer *p, struct drm_file *file) diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 2f42d075f13a..9a4d2b6d459d 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -232,7 +232,9 @@ struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev); bool msm_use_mmu(struct drm_device *dev); int msm_ioctl_gem_submit(struct drm_device *dev, void *data, - struct drm_file *file); + struct drm_file *file); +int msm_ioctl_vm_bind(struct drm_device *dev, void *data, + struct drm_file *file); #ifdef CONFIG_DEBUG_FS unsigned long msm_gem_shrinker_shrink(struct drm_device *dev, unsigned long nr_to_scan); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 106fec06c18d..fea13a993629 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -255,8 +255,7 @@ static void put_pages(struct drm_gem_object *obj) } } -static struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, - unsigned madv) +struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigned madv) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -1060,18 +1059,37 @@ static void msm_gem_free_object(struct drm_gem_object *obj) /* * We need to lock any VMs the object is still attached to, but not * the object itself (see explanation in msm_gem_assert_locked()), - * so just open-code this special case: + * so just open-code this special case. + * + * Note that we skip the dance if we aren't attached to any VM. This + * is load bearing. The driver needs to support two usage models: + * + * 1. Legacy kernel managed VM: Userspace expects the VMAs to be + * implicitly torn down when the object is freed; the VMAs do + * not hold a hard reference to the BO. + * + * 2. VM_BIND, userspace managed VM: The VMA holds a reference to the + * BO. This can be dropped when the VM is closed and its associated + * VMAs are torn down. (See msm_gem_vm_close()). + * + * In the latter case the last reference to a BO can be dropped while + * we already have the VM locked. It would have already been removed + * from the gpuva list, but lockdep doesn't know that, nor does it + * understand the difference between the two usage models. */ - drm_exec_init(&exec, 0, 0); - drm_exec_until_all_locked (&exec) { - struct drm_gpuvm_bo *vm_bo; - drm_gem_for_each_gpuvm_bo (vm_bo, obj) { - drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(vm_bo->vm)); - drm_exec_retry_on_contention(&exec); + if (!list_empty(&obj->gpuva.list)) { + drm_exec_init(&exec, 0, 0); + drm_exec_until_all_locked (&exec) { + struct drm_gpuvm_bo *vm_bo; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + drm_exec_lock_obj(&exec, + drm_gpuvm_resv_obj(vm_bo->vm)); + drm_exec_retry_on_contention(&exec); + } } + put_iova_spaces(obj, NULL, true); + drm_exec_fini(&exec); /* drop locks */ } - put_iova_spaces(obj, NULL, true); - drm_exec_fini(&exec); /* drop locks */ if (obj->import_attach) { GEM_WARN_ON(msm_obj->vaddr); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 46c7ddbc2dce..d062722942b5 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -73,6 +73,9 @@ struct msm_gem_vm { /** @mmu: The mmu object which manages the pgtables */ struct msm_mmu *mmu; + /** @mmu_lock: Protects access to the mmu */ + struct mutex mmu_lock; + /** * @pid: For address spaces associated with a specific process, this * will be non-NULL: @@ -205,6 +208,7 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, uint64_t *iova); void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm); void msm_gem_pin_obj_locked(struct drm_gem_object *obj); +struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigned madv); struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj); void msm_gem_unpin_pages_locked(struct drm_gem_object *obj); int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 504f27b0232b..8a0f5b5eda30 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -193,6 +193,7 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, static int submit_lookup_cmds(struct msm_gem_submit *submit, struct drm_msm_gem_submit *args, struct drm_file *file) { + struct msm_context *ctx = file->driver_priv; unsigned i; size_t sz; int ret = 0; @@ -224,6 +225,20 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit, goto out; } + if (msm_context_is_vmbind(ctx)) { + if (submit_cmd.nr_relocs) { + ret = SUBMIT_ERROR(EINVAL, submit, "nr_relocs must be zero"); + goto out; + } + + if (submit_cmd.submit_idx || submit_cmd.submit_offset) { + ret = SUBMIT_ERROR(EINVAL, submit, "submit_idx/offset must be zero"); + goto out; + } + + submit->cmd[i].iova = submit_cmd.iova; + } + submit->cmd[i].type = submit_cmd.type; submit->cmd[i].size = submit_cmd.size / 4;
submit->cmd[i].offset = submit_cmd.submit_offset / 4; @@ -530,6 +545,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, struct msm_syncobj_post_dep *post_deps = NULL; struct drm_syncobj **syncobjs_to_reset = NULL; struct sync_file *sync_file = NULL; + unsigned cmds_to_parse; int out_fence_fd = -1; unsigned i; int ret; @@ -653,7 +669,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (ret) goto out; - for (i = 0; i < args->nr_cmds; i++) { + cmds_to_parse = msm_context_is_vmbind(ctx) ? 0 : args->nr_cmds; + + for (i = 0; i < cmds_to_parse; i++) { struct drm_gem_object *obj; uint64_t iova; @@ -684,7 +702,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, goto out; } - submit->nr_cmds = i; + submit->nr_cmds = args->nr_cmds; idr_preload(GFP_KERNEL); diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 76b79c122182..d50438af15ef 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -4,9 +4,16 @@ * Author: Rob Clark */ +#include "drm/drm_file.h" +#include "drm/msm_drm.h" +#include "linux/file.h" +#include "linux/sync_file.h" + #include "msm_drv.h" #include "msm_gem.h" +#include "msm_gpu.h" #include "msm_mmu.h" +#include "msm_syncobj.h" #define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__) @@ -36,6 +43,97 @@ struct msm_vm_unmap_op { uint64_t range; }; +/** + * struct msm_vm_op - A MAP or UNMAP operation + */ +struct msm_vm_op { + /** @op: The operation type */ + enum { + MSM_VM_OP_MAP = 1, + MSM_VM_OP_UNMAP, + } op; + union { + /** @map: Parameters used if op == MSM_VM_OP_MAP */ + struct msm_vm_map_op map; + /** @unmap: Parameters used if op == MSM_VM_OP_UNMAP */ + struct msm_vm_unmap_op unmap; + }; + /** @node: list head in msm_vm_bind_job::vm_ops */ + struct list_head node; + + /** + * @obj: backing object for pages to be mapped/unmapped + * + * Async unmap ops, in particular, must hold a reference to the + * original GEM object backing the mapping that will be unmapped. + * But the same can be required in the map path, for example if + * there is not a corresponding unmap op, such as process exit. + * + * This ensures that the pages backing the mapping are not freed + * before the mapping is torn down. + */ + struct drm_gem_object *obj; +}; + +/** + * struct msm_vm_bind_job - Tracking for a VM_BIND ioctl + * + * A table of userspace requested VM updates (MSM_VM_BIND_OP_UNMAP/MAP/MAP_NULL) + * gets applied to the vm, generating a list of VM ops (MSM_VM_OP_MAP/UNMAP) + * which are applied to the pgtables asynchronously. For example a userspace + * requested MSM_VM_BIND_OP_MAP could end up generating both an MSM_VM_OP_UNMAP + * to unmap an existing mapping, and a MSM_VM_OP_MAP to apply the new mapping. + */ +struct msm_vm_bind_job { + /** @base: base class for drm_sched jobs */ + struct drm_sched_job base; + /** @vm: The VM being operated on */ + struct drm_gpuvm *vm; + /** @fence: The fence that is signaled when job completes */ + struct dma_fence *fence; + /** @queue: The queue that the job runs on */ + struct msm_gpu_submitqueue *queue; + /** @prealloc: Tracking for pre-allocated MMU pgtable pages */ + struct msm_mmu_prealloc prealloc; + /** @vm_ops: a list of struct msm_vm_op */ + struct list_head vm_ops; + /** @bos_pinned: are the GEM objects being bound pinned?
*/ + bool bos_pinned; + /** @nr_ops: the number of userspace requested ops */ + unsigned int nr_ops; + /** + * @ops: the userspace requested ops + * + * The userspace requested ops are copied/parsed and validated + * before we start applying the updates to try to do as much up- + * front error checking as possible, to avoid the VM being in an + * undefined state due to partially executed VM_BIND. + * + * This table also serves to hold a reference to the backing GEM + * objects. + */ + struct msm_vm_bind_op { + uint32_t op; + uint32_t flags; + union { + struct drm_gem_object *obj; + uint32_t handle; + }; + uint64_t obj_offset; + uint64_t iova; + uint64_t range; + } ops[]; +}; + +#define job_foreach_bo(obj, _job) \ + for (unsigned i = 0; i < (_job)->nr_ops; i++) \ + if ((obj = (_job)->ops[i].obj)) + +static inline struct msm_vm_bind_job *to_msm_vm_bind_job(struct drm_sched_job *job) +{ + return container_of(job, struct msm_vm_bind_job, base); +} + static void msm_gem_vm_free(struct drm_gpuvm *gpuvm) { @@ -52,6 +150,9 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) static void vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) { + if (!vm->managed) + lockdep_assert_held(&vm->mmu_lock); + vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range); @@ -60,6 +161,9 @@ vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) static int vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) { + if (!vm->managed) + lockdep_assert_held(&vm->mmu_lock); + vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset, @@ -69,17 +173,29 @@ vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) /* Actually unmap memory for the vma */ void msm_gem_vma_unmap(struct drm_gpuva *vma) { + struct msm_gem_vm *vm = to_msm_vm(vma->vm); struct msm_gem_vma *msm_vma = to_msm_vma(vma); /* Don't do anything if the memory isn't mapped */ if (!msm_vma->mapped) return; - vm_unmap_op(to_msm_vm(vma->vm), &(struct msm_vm_unmap_op){ + /* + * The mmu_lock is only needed when preallocation is used. But + * in that case we don't need to worry about recursion into + * shrinker + */ + if (!vm->managed) + mutex_lock(&vm->mmu_lock); + + vm_unmap_op(vm, &(struct msm_vm_unmap_op){ .iova = vma->va.addr, .range = vma->va.range, }); + if (!vm->managed) + mutex_unlock(&vm->mmu_lock); + msm_vma->mapped = false; } @@ -87,6 +203,7 @@ void msm_gem_vma_unmap(struct drm_gpuva *vma) int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) { + struct msm_gem_vm *vm = to_msm_vm(vma->vm); struct msm_gem_vma *msm_vma = to_msm_vma(vma); int ret; @@ -98,6 +215,14 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) msm_vma->mapped = true; + /* + * The mmu_lock is only needed when preallocation is used. But + * in that case we don't need to worry about recursion into + * shrinker + */ + if (!vm->managed) + mutex_lock(&vm->mmu_lock); + /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold * a lock across map/unmap which is also used in the job_run() @@ -107,16 +232,19 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. 
*/ - ret = vm_map_op(to_msm_vm(vma->vm), &(struct msm_vm_map_op){ + ret = vm_map_op(vm, &(struct msm_vm_map_op){ .iova = vma->va.addr, .range = vma->va.range, .offset = vma->gem.offset, .sgt = sgt, .prot = prot, }); - if (ret) { + + if (!vm->managed) + mutex_unlock(&vm->mmu_lock); + + if (ret) msm_vma->mapped = false; - } return ret; } @@ -131,6 +259,9 @@ void msm_gem_vma_close(struct drm_gpuva *vma) drm_gpuvm_resv_assert_held(&vm->base); + if (vma->gem.obj) + msm_gem_assert_locked(vma->gem.obj); + if (vma->va.addr && vm->managed) drm_mm_remove_node(&msm_vma->node); @@ -158,6 +289,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, if (vm->managed) { BUG_ON(offset != 0); + BUG_ON(!obj); /* NULL mappings not valid for kernel managed VM */ ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, obj->size, PAGE_SIZE, 0, range_start, range_end, 0); @@ -169,7 +301,8 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, range_end = range_start + obj->size; } - GEM_WARN_ON((range_end - range_start) > obj->size); + if (obj) + GEM_WARN_ON((range_end - range_start) > obj->size); drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset); vma->mapped = false; @@ -178,6 +311,9 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, if (ret) goto err_free_range; + if (!obj) + return &vma->base; + vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj); if (IS_ERR(vm_bo)) { ret = PTR_ERR(vm_bo); @@ -200,11 +336,297 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, return ERR_PTR(ret); } +static int +msm_gem_vm_bo_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec) +{ + struct drm_gem_object *obj = vm_bo->obj; + struct drm_gpuva *vma; + int ret; + + vm_dbg("validate: %p", obj); + + msm_gem_assert_locked(obj); + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + ret = msm_gem_pin_vma_locked(obj, vma); + if (ret) + return ret; + } + + return 0; +} + +struct op_arg { + unsigned flags; + struct msm_vm_bind_job *job; +}; + +static void +vm_op_enqueue(struct op_arg *arg, struct msm_vm_op _op) +{ + struct msm_vm_op *op = kmalloc(sizeof(*op), GFP_KERNEL); + *op = _op; + list_add_tail(&op->node, &arg->job->vm_ops); + + if (op->obj) + drm_gem_object_get(op->obj); +} + +static struct drm_gpuva * +vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map *op) +{ + return msm_gem_vma_new(arg->job->vm, op->gem.obj, op->gem.offset, + op->va.addr, op->va.addr + op->va.range); +} + +static int +msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg) +{ + struct drm_gem_object *obj = op->map.gem.obj; + struct drm_gpuva *vma; + struct sg_table *sgt; + unsigned prot; + + vma = vma_from_op(arg, &op->map); + if (WARN_ON(IS_ERR(vma))) + return PTR_ERR(vma); + + vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj, + vma->va.addr, vma->va.range); + + vma->flags = ((struct op_arg *)arg)->flags; + + if (obj) { + sgt = to_msm_bo(obj)->sgt; + prot = msm_gem_prot(obj); + } else { + sgt = NULL; + prot = IOMMU_READ | IOMMU_WRITE; + } + + vm_op_enqueue(arg, (struct msm_vm_op){ + .op = MSM_VM_OP_MAP, + .map = { + .sgt = sgt, + .iova = vma->va.addr, + .range = vma->va.range, + .offset = vma->gem.offset, + .prot = prot, + }, + .obj = vma->gem.obj, + }); + + to_msm_vma(vma)->mapped = true; + + return 0; +} + +static int +msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg) +{ + struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job; + struct drm_gpuvm *vm = job->vm; + struct drm_gpuva *orig_vma = op->remap.unmap->va; + struct 
drm_gpuva *prev_vma = NULL, *next_vma = NULL; + struct drm_gpuvm_bo *vm_bo = orig_vma->vm_bo; + bool mapped = to_msm_vma(orig_vma)->mapped; + unsigned flags; + + vm_dbg("orig_vma: %p:%p:%p: %016llx %016llx", vm, orig_vma, + orig_vma->gem.obj, orig_vma->va.addr, orig_vma->va.range); + + if (mapped) { + uint64_t unmap_start, unmap_range; + + drm_gpuva_op_remap_to_unmap_range(&op->remap, &unmap_start, &unmap_range); + + vm_op_enqueue(arg, (struct msm_vm_op){ + .op = MSM_VM_OP_UNMAP, + .unmap = { + .iova = unmap_start, + .range = unmap_range, + }, + .obj = orig_vma->gem.obj, + }); + + /* + * Part of this GEM obj is still mapped, but we're going to kill the + * existing VMA and replace it with one or two new ones (ie. two if + * the unmapped range is in the middle of the existing (unmap) VMA). + * So just set the state to unmapped: + */ + to_msm_vma(orig_vma)->mapped = false; + } + + /* + * Hold a ref to the vm_bo between the msm_gem_vma_close() and the + * creation of the new prev/next vma's, in case the vm_bo is tracked + * in the VM's evict list: + */ + if (vm_bo) + drm_gpuvm_bo_get(vm_bo); + + /* + * The prev_vma and/or next_vma are replacing the unmapped vma, and + * therefore should preserve its flags: + */ + flags = orig_vma->flags; + + msm_gem_vma_close(orig_vma); + + if (op->remap.prev) { + prev_vma = vma_from_op(arg, op->remap.prev); + if (WARN_ON(IS_ERR(prev_vma))) + return PTR_ERR(prev_vma); + + vm_dbg("prev_vma: %p:%p: %016llx %016llx", vm, prev_vma, prev_vma->va.addr, prev_vma->va.range); + to_msm_vma(prev_vma)->mapped = mapped; + prev_vma->flags = flags; + } + + if (op->remap.next) { + next_vma = vma_from_op(arg, op->remap.next); + if (WARN_ON(IS_ERR(next_vma))) + return PTR_ERR(next_vma); + + vm_dbg("next_vma: %p:%p: %016llx %016llx", vm, next_vma, next_vma->va.addr, next_vma->va.range); + to_msm_vma(next_vma)->mapped = mapped; + next_vma->flags = flags; + } + + if (!mapped) + drm_gpuvm_bo_evict(vm_bo, true); + + /* Drop the previous ref: */ + drm_gpuvm_bo_put(vm_bo); + + return 0; +} + +static int +msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg) +{ + struct drm_gpuva *vma = op->unmap.va; + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + + vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj, + vma->va.addr, vma->va.range); + + if (!msm_vma->mapped) + goto out_close; + + vm_op_enqueue(arg, (struct msm_vm_op){ + .op = MSM_VM_OP_UNMAP, + .unmap = { + .iova = vma->va.addr, + .range = vma->va.range, + }, + .obj = vma->gem.obj, + }); + + msm_vma->mapped = false; + +out_close: + msm_gem_vma_close(vma); + + return 0; +} + static const struct drm_gpuvm_ops msm_gpuvm_ops = { .vm_free = msm_gem_vm_free, + .vm_bo_validate = msm_gem_vm_bo_validate, + .sm_step_map = msm_gem_vm_sm_step_map, + .sm_step_remap = msm_gem_vm_sm_step_remap, + .sm_step_unmap = msm_gem_vm_sm_step_unmap, };
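The remap step is easiest to see with concrete numbers: when an unmap covers only part of an existing VMA, drm_gpuvm splits it into an optional prev and an optional next remnant, both of which inherit the mapped state and the flags snapshotted above. A self-contained sketch of the interval arithmetic involved (illustrative only, not driver code; all names here are invented):

#include <stdint.h>
#include <stdio.h>

/*
 * An existing VMA [addr, addr+range) with a hole [hole, hole+hole_range)
 * punched out of it, as in msm_gem_vm_sm_step_remap() above.
 */
static void split_vma(uint64_t addr, uint64_t range,
		      uint64_t hole, uint64_t hole_range)
{
	if (hole > addr)			/* prev remnant below the hole */
		printf("prev: [0x%llx, 0x%llx)\n",
		       (unsigned long long)addr,
		       (unsigned long long)hole);
	if (hole + hole_range < addr + range)	/* next remnant above it */
		printf("next: [0x%llx, 0x%llx)\n",
		       (unsigned long long)(hole + hole_range),
		       (unsigned long long)(addr + range));
}

int main(void)
{
	/* unmapping 2MB from the middle of an 8MB VMA leaves both remnants */
	split_vma(0x100000000ull, 0x800000, 0x100200000ull, 0x200000);
	return 0;
}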
+static struct dma_fence * +msm_vma_job_run(struct drm_sched_job *_job) +{ + struct msm_vm_bind_job *job = to_msm_vm_bind_job(_job); + struct msm_gem_vm *vm = to_msm_vm(job->vm); + struct drm_gem_object *obj; + int ret = vm->unusable ? -EINVAL : 0; + + vm_dbg(""); + + mutex_lock(&vm->mmu_lock); + vm->mmu->prealloc = &job->prealloc; + + while (!list_empty(&job->vm_ops)) { + struct msm_vm_op *op = + list_first_entry(&job->vm_ops, struct msm_vm_op, node); + + switch (op->op) { + case MSM_VM_OP_MAP: + /* + * On error, stop trying to map new things.. but we + * still want to process the unmaps (or in particular, + * the drm_gem_object_put()s) + */ + if (!ret) + ret = vm_map_op(vm, &op->map); + break; + case MSM_VM_OP_UNMAP: + vm_unmap_op(vm, &op->unmap); + break; + } + drm_gem_object_put(op->obj); + list_del(&op->node); + kfree(op); + } + + vm->mmu->prealloc = NULL; + mutex_unlock(&vm->mmu_lock); + + /* + * We failed to perform at least _some_ of the pgtable updates, so + * now the VM is in an undefined state. Game over! + */ + if (ret) + vm->unusable = true; + + job_foreach_bo (obj, job) { + msm_gem_lock(obj); + msm_gem_unpin_locked(obj); + msm_gem_unlock(obj); + } + + /* VM_BIND ops are synchronous, so no fence to wait on: */ + return NULL; +} + +static void +msm_vma_job_free(struct drm_sched_job *_job) +{ + struct msm_vm_bind_job *job = to_msm_vm_bind_job(_job); + struct msm_mmu *mmu = to_msm_vm(job->vm)->mmu; + struct drm_gem_object *obj; + + mmu->funcs->prealloc_cleanup(mmu, &job->prealloc); + + drm_sched_job_cleanup(_job); + + job_foreach_bo (obj, job) + drm_gem_object_put(obj); + + msm_submitqueue_put(job->queue); + dma_fence_put(job->fence); + + /* In error paths, we could have unexecuted ops: */ + while (!list_empty(&job->vm_ops)) { + struct msm_vm_op *op = + list_first_entry(&job->vm_ops, struct msm_vm_op, node); + list_del(&op->node); + kfree(op); + } + + kfree(job); +} + static const struct drm_sched_backend_ops msm_vm_bind_ops = { + .run_job = msm_vma_job_run, + .free_job = msm_vma_job_free }; /** @@ -268,6 +690,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, drm_gem_object_put(dummy_gem); vm->mmu = mmu; + mutex_init(&vm->mmu_lock); vm->managed = managed; drm_mm_init(&vm->mm, va_start, va_size); @@ -280,7 +703,6 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, err_free_vm: kfree(vm); return ERR_PTR(ret); - } /** @@ -296,6 +718,7 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm) { struct msm_gem_vm *vm = to_msm_vm(gpuvm); struct drm_gpuva *vma, *tmp; + struct drm_exec exec; /* * For kernel managed VMs, the VMAs are torn down when the handle is @@ -312,22 +735,633 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm) drm_sched_fini(&vm->sched); /* Tear down any remaining mappings: */ - dma_resv_lock(drm_gpuvm_resv(gpuvm), NULL); - drm_gpuvm_for_each_va_safe (vma, tmp, gpuvm) { - struct drm_gem_object *obj = vma->gem.obj; + drm_exec_init(&exec, 0, 2); + drm_exec_until_all_locked (&exec) { + drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(gpuvm)); + drm_exec_retry_on_contention(&exec); + + drm_gpuvm_for_each_va_safe (vma, tmp, gpuvm) { + struct drm_gem_object *obj = vma->gem.obj; + + /* + * MSM_BO_NO_SHARE objects share the same resv as the + * VM, in which case the obj is already locked: + */ + if (obj && (obj->resv == drm_gpuvm_resv(gpuvm))) + obj = NULL; - if (obj && obj->resv != drm_gpuvm_resv(gpuvm)) { - drm_gem_object_get(obj); - msm_gem_lock(obj); + if (obj) { + drm_exec_lock_obj(&exec, obj); + drm_exec_retry_on_contention(&exec); + } + + msm_gem_vma_unmap(vma); + msm_gem_vma_close(vma); + + if (obj) { + drm_exec_unlock_obj(&exec, obj); + } } + } + drm_exec_fini(&exec); +} + + +static struct msm_vm_bind_job * +vm_bind_job_create(struct drm_device *dev, struct msm_gpu *gpu, + struct msm_gpu_submitqueue *queue, uint32_t nr_ops) +{ + struct msm_vm_bind_job *job; + uint64_t sz; + int ret; + + sz = struct_size(job, ops, nr_ops); - if (sz > SIZE_MAX) + return ERR_PTR(-ENOMEM); + + job = kzalloc(sz, GFP_KERNEL | 
__GFP_NOWARN); + if (!job) + return ERR_PTR(-ENOMEM); + + ret = drm_sched_job_init(&job->base, queue->entity, 1, queue); + if (ret) { + kfree(job); + return ERR_PTR(ret); + } - if (obj && obj->resv != drm_gpuvm_resv(gpuvm)) { - msm_gem_unlock(obj); - drm_gem_object_put(obj); + job->vm = msm_context_vm(dev, queue->ctx); + job->queue = queue; + INIT_LIST_HEAD(&job->vm_ops); + + return job; +} + +static bool invalid_alignment(uint64_t addr) +{ + /* + * Technically this is about GPU alignment, not CPU alignment. But + * I've not seen any qcom SoC where the SMMU does not support the + * CPU's smallest page size. + */ + return !PAGE_ALIGNED(addr); +} + +static int +lookup_op(struct msm_vm_bind_job *job, const struct drm_msm_vm_bind_op *op) +{ + struct drm_device *dev = job->vm->drm; + int i = job->nr_ops++; + int ret = 0; + + job->ops[i].op = op->op; + job->ops[i].handle = op->handle; + job->ops[i].obj_offset = op->obj_offset; + job->ops[i].iova = op->iova; + job->ops[i].range = op->range; + job->ops[i].flags = op->flags; + + if (op->flags & ~MSM_VM_BIND_OP_FLAGS) + ret = UERR(EINVAL, dev, "invalid flags: %x\n", op->flags); + + if (invalid_alignment(op->iova)) + ret = UERR(EINVAL, dev, "invalid address: %016llx\n", op->iova); + + if (invalid_alignment(op->obj_offset)) + ret = UERR(EINVAL, dev, "invalid bo_offset: %016llx\n", op->obj_offset); + + if (invalid_alignment(op->range)) + ret = UERR(EINVAL, dev, "invalid range: %016llx\n", op->range); + + + /* + * MAP must specify a valid handle. But the handle MBZ for + * UNMAP or MAP_NULL. + */ + if (op->op == MSM_VM_BIND_OP_MAP) { + if (!op->handle) + ret = UERR(EINVAL, dev, "invalid handle\n"); + } else if (op->handle) { + ret = UERR(EINVAL, dev, "handle must be zero\n"); + } + + switch (op->op) { + case MSM_VM_BIND_OP_MAP: + case MSM_VM_BIND_OP_MAP_NULL: + case MSM_VM_BIND_OP_UNMAP: + break; + default: + ret = UERR(EINVAL, dev, "invalid op: %u\n", op->op); + break; + } + + return ret; +} + +/* + * ioctl parsing, parameter validation, and GEM handle lookup + */ +static int +vm_bind_job_lookup_ops(struct msm_vm_bind_job *job, struct drm_msm_vm_bind *args, + struct drm_file *file, int *nr_bos) +{ + struct drm_device *dev = job->vm->drm; + int ret = 0; + int cnt = 0; + + if (args->nr_ops == 1) { + /* Single op case, the op is inlined: */ + ret = lookup_op(job, &args->op); + } else { + for (unsigned i = 0; i < args->nr_ops; i++) { + struct drm_msm_vm_bind_op op; + void __user *userptr = + u64_to_user_ptr(args->ops + (i * sizeof(op))); + + /* make sure we don't have garbage flags, in case we hit + * error path before flags is initialized: + */ + job->ops[i].flags = 0; + + if (copy_from_user(&op, userptr, sizeof(op))) { + ret = -EFAULT; + break; + } + + ret = lookup_op(job, &op); + if (ret) + break; + } + } + + if (ret) { + job->nr_ops = 0; + goto out; + } + + spin_lock(&file->table_lock); + + for (unsigned i = 0; i < args->nr_ops; i++) { + struct drm_gem_object *obj; + + if (!job->ops[i].handle) { + job->ops[i].obj = NULL; + continue; + } + + /* + * normally use drm_gem_object_lookup(), but for bulk lookup + * all under single table_lock just hit object_idr directly: + */ + obj = idr_find(&file->object_idr, job->ops[i].handle); + if (!obj) { + ret = UERR(EINVAL, dev, "invalid handle %u at index %u\n", job->ops[i].handle, i); + goto out_unlock; + } + + drm_gem_object_get(obj); + + job->ops[i].obj = obj; + cnt++; + } + + *nr_bos = cnt; + +out_unlock: + spin_unlock(&file->table_lock); + +out: + return ret; +} + +static void +prealloc_count(struct 
msm_vm_bind_job *job, + struct msm_vm_bind_op *first, + struct msm_vm_bind_op *last) +{ + struct msm_mmu *mmu = to_msm_vm(job->vm)->mmu; + + if (!first) + return; + + uint64_t start_iova = first->iova; + uint64_t end_iova = last->iova + last->range; + + mmu->funcs->prealloc_count(mmu, &job->prealloc, start_iova, end_iova - start_iova); +} + +static bool +ops_are_same_pte(struct msm_vm_bind_op *first, struct msm_vm_bind_op *next) +{ + /* + * Last level pte covers 2MB.. so we should merge two ops, from + * the PoV of figuring out how many pgtable pages to pre-allocate + * if they land in the same 2MB range: + */ + uint64_t pte_mask = ~(SZ_2M - 1); + return ((first->iova + first->range) & pte_mask) == (next->iova & pte_mask); +} + +/* + * Determine the amount of memory to prealloc for pgtables. For sparse images, + * in particular, userspace plays some tricks with the order of page mappings + * to get the desired swizzle pattern, resulting in a large # of tiny MAP ops. + * So detect when multiple MAP operations are physically contiguous, and count + * them as a single mapping. Otherwise the prealloc_count() will not realize + * they can share pagetable pages and vastly overcount. + */ +static void +vm_bind_prealloc_count(struct msm_vm_bind_job *job) +{ + struct msm_vm_bind_op *first = NULL, *last = NULL; + + for (int i = 0; i < job->nr_ops; i++) { + struct msm_vm_bind_op *op = &job->ops[i]; + + /* We only care about MAP/MAP_NULL: */ + if (op->op == MSM_VM_BIND_OP_UNMAP) + continue; + + /* + * If op is contiguous with last in the current range, then + * it becomes the new last in the range and we continue + * looping: + */ + if (last && ops_are_same_pte(last, op)) { + last = op; + continue; } + + /* + * If op is not contiguous with the current range, flush + * the current range and start anew: + */ + prealloc_count(job, first, last); + first = last = op; } - dma_resv_unlock(drm_gpuvm_resv(gpuvm)); + + /* Flush the remaining range: */ + prealloc_count(job, first, last); +}
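To see why the merge matters, here is a standalone model of the counting (illustrative only, not driver code; it assumes, like the heuristic above, that the ops arrive in increasing iova order): 512 contiguous 4KB MAP ops all fall in a single 2MB window, so they need one last-level pagetable page rather than 512.

#include <stdint.h>
#include <stdio.h>

#define SZ_2M	(2ull << 20)

struct op { uint64_t iova, range; };

/* A last-level pagetable page covers one 2MB-aligned window of VA space. */
static uint64_t window(uint64_t addr) { return addr & ~(SZ_2M - 1); }

/* Mirror of the adjacent-merge heuristic: extend the current range while
 * the next op starts in the same 2MB window that the range ends in. */
static uint64_t count_pt_pages(const struct op *ops, int n)
{
	uint64_t pages = 0, start = 0, end = 0;

	for (int i = 0; i < n; i++) {
		if (i && window(end) == window(ops[i].iova)) {
			end = ops[i].iova + ops[i].range;
			continue;
		}
		if (i)	/* flush the previous merged range */
			pages += (window(end - 1) - window(start)) / SZ_2M + 1;
		start = ops[i].iova;
		end = ops[i].iova + ops[i].range;
	}
	if (n)	/* flush the final range */
		pages += (window(end - 1) - window(start)) / SZ_2M + 1;
	return pages;
}

int main(void)
{
	static struct op ops[512];

	for (int i = 0; i < 512; i++)
		ops[i] = (struct op){ 0x40000000ull + 4096ull * i, 4096 };
	/* prints 1: all 512 ops share one 2MB window */
	printf("%llu\n", (unsigned long long)count_pt_pages(ops, 512));
	return 0;
}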
+/* + * Lock VM and GEM objects + */ +static int +vm_bind_job_lock_objects(struct msm_vm_bind_job *job, struct drm_exec *exec) +{ + struct drm_gem_object *obj; + int ret; + + /* Lock VM and objects: */ + drm_exec_until_all_locked(exec) { + ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(job->vm)); + drm_exec_retry_on_contention(exec); + if (ret) + return ret; + + job_foreach_bo (obj, job) { + ret = drm_exec_prepare_obj(exec, obj, 1); + drm_exec_retry_on_contention(exec); + if (ret) + return ret; + } + } + + return 0; +} + +/* + * Pin GEM objects, ensuring that we have backing pages. Pinning will move + * the object to the pinned LRU so that the shrinker knows to first consider + * other objects for evicting. + */ +static int +vm_bind_job_pin_objects(struct msm_vm_bind_job *job) +{ + struct drm_gem_object *obj; + + /* + * First loop, before holding the LRU lock, avoids holding the + * LRU lock while calling msm_gem_pin_vma_locked (which could + * trigger get_pages()) + */ + job_foreach_bo (obj, job) { + struct page **pages; + + pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED); + if (IS_ERR(pages)) + return PTR_ERR(pages); + } + + struct msm_drm_private *priv = job->vm->drm->dev_private; + + /* + * A second loop while holding the LRU lock (a) avoids acquiring/dropping + * the LRU lock for each individual bo, while (b) avoids holding the + * LRU lock while calling msm_gem_pin_vma_locked() (which could trigger + * get_pages() which could trigger reclaim.. and if we held the LRU lock + could trigger deadlock with the shrinker). + */ + mutex_lock(&priv->lru.lock); + job_foreach_bo (obj, job) + msm_gem_pin_obj_locked(obj); + mutex_unlock(&priv->lru.lock); + + job->bos_pinned = true; + + return 0; +} + +/* + * Unpin GEM objects. Normally this is done after the bind job is run. + */ +static void +vm_bind_job_unpin_objects(struct msm_vm_bind_job *job) +{ + struct drm_gem_object *obj; + + if (!job->bos_pinned) + return; + + job_foreach_bo (obj, job) + msm_gem_unpin_locked(obj); + + job->bos_pinned = false; +} + +/* + * Pre-allocate pgtable memory, and translate the VM bind requests into a + * sequence of pgtable updates to be applied asynchronously. + */ +static int +vm_bind_job_prepare(struct msm_vm_bind_job *job) +{ + struct msm_gem_vm *vm = to_msm_vm(job->vm); + struct msm_mmu *mmu = vm->mmu; + int ret; + + ret = mmu->funcs->prealloc_allocate(mmu, &job->prealloc); + if (ret) + return ret; + + for (unsigned i = 0; i < job->nr_ops; i++) { + const struct msm_vm_bind_op *op = &job->ops[i]; + struct op_arg arg = { + .job = job, + }; + + switch (op->op) { + case MSM_VM_BIND_OP_UNMAP: + ret = drm_gpuvm_sm_unmap(job->vm, &arg, op->iova, + op->obj_offset); + break; + case MSM_VM_BIND_OP_MAP: + if (op->flags & MSM_VM_BIND_OP_DUMP) + arg.flags |= MSM_VMA_DUMP; + fallthrough; + case MSM_VM_BIND_OP_MAP_NULL: + ret = drm_gpuvm_sm_map(job->vm, &arg, op->iova, + op->range, op->obj, op->obj_offset); + break; + default: + /* + * lookup_op() should have already thrown an error for + * invalid ops + */ + BUG_ON("unreachable"); + } + + if (ret) { + /* + * If we've already started modifying the vm, we can't + * adequately describe to userspace the intermediate + * state the vm is in. So throw up our hands! + */ + if (i > 0) + vm->unusable = true; + return ret; + } + } + + return 0; +} + +/* + * Attach fences to the GEM objects being bound. This will signify to + * the shrinker that they are busy even after dropping the locks (ie. + * drm_exec_fini()) + */ +static void +vm_bind_job_attach_fences(struct msm_vm_bind_job *job) +{ + for (unsigned i = 0; i < job->nr_ops; i++) { + struct drm_gem_object *obj = job->ops[i].obj; + + if (!obj) + continue; + + dma_resv_add_fence(obj->resv, job->fence, + DMA_RESV_USAGE_KERNEL); + } +} + +int +msm_ioctl_vm_bind(struct drm_device *dev, void *data, struct drm_file *file) +{ + struct msm_drm_private *priv = dev->dev_private; + struct drm_msm_vm_bind *args = data; + struct msm_context *ctx = file->driver_priv; + struct msm_vm_bind_job *job = NULL; + struct msm_gpu *gpu = priv->gpu; + struct msm_gpu_submitqueue *queue; + struct msm_syncobj_post_dep *post_deps = NULL; + struct drm_syncobj **syncobjs_to_reset = NULL; + struct sync_file *sync_file = NULL; + struct dma_fence *fence; + int out_fence_fd = -1; + int ret, nr_bos = 0; + unsigned i; + + if (!gpu) + return -ENXIO; + + /* + * Maybe we could allow just UNMAP ops? OTOH userspace should just + * immediately close the device file and all will be torn down. + */ + if (to_msm_vm(ctx->vm)->unusable) + return UERR(EPIPE, dev, "context is unusable"); + + /* + * Technically, you cannot create a VM_BIND submitqueue in the first + * place, if you haven't opted in to VM_BIND context. But it is + * cleaner / less confusing, to check this case directly. 
+ */ + if (!msm_context_is_vmbind(ctx)) + return UERR(EINVAL, dev, "context does not support vmbind"); + + if (args->flags & ~MSM_VM_BIND_FLAGS) + return UERR(EINVAL, dev, "invalid flags"); + + queue = msm_submitqueue_get(ctx, args->queue_id); + if (!queue) + return -ENOENT; + + if (!(queue->flags & MSM_SUBMITQUEUE_VM_BIND)) { + ret = UERR(EINVAL, dev, "Invalid queue type"); + goto out_post_unlock; + } + + if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) { + out_fence_fd = get_unused_fd_flags(O_CLOEXEC); + if (out_fence_fd < 0) { + ret = out_fence_fd; + goto out_post_unlock; + } + } + + job = vm_bind_job_create(dev, gpu, queue, args->nr_ops); + if (IS_ERR(job)) { + ret = PTR_ERR(job); + goto out_post_unlock; + } + + ret = mutex_lock_interruptible(&queue->lock); + if (ret) + goto out_post_unlock; + + if (args->flags & MSM_VM_BIND_FENCE_FD_IN) { + struct dma_fence *in_fence; + + in_fence = sync_file_get_fence(args->fence_fd); + + if (!in_fence) { + ret = UERR(EINVAL, dev, "invalid in-fence"); + goto out_unlock; + } + + ret = drm_sched_job_add_dependency(&job->base, in_fence); + if (ret) + goto out_unlock; + } + + if (args->in_syncobjs > 0) { + syncobjs_to_reset = msm_syncobj_parse_deps(dev, &job->base, + file, args->in_syncobjs, + args->nr_in_syncobjs, + args->syncobj_stride); + if (IS_ERR(syncobjs_to_reset)) { + ret = PTR_ERR(syncobjs_to_reset); + goto out_unlock; + } + } + + if (args->out_syncobjs > 0) { + post_deps = msm_syncobj_parse_post_deps(dev, file, + args->out_syncobjs, + args->nr_out_syncobjs, + args->syncobj_stride); + if (IS_ERR(post_deps)) { + ret = PTR_ERR(post_deps); + goto out_unlock; + } + } + + ret = vm_bind_job_lookup_ops(job, args, file, &nr_bos); + if (ret) + goto out_unlock; + + vm_bind_prealloc_count(job); + + struct drm_exec exec; + unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT; + drm_exec_init(&exec, flags, nr_bos + 1); + + ret = vm_bind_job_lock_objects(job, &exec); + if (ret) + goto out; + + ret = vm_bind_job_pin_objects(job); + if (ret) + goto out; + + ret = vm_bind_job_prepare(job); + if (ret) + goto out; + + drm_sched_job_arm(&job->base); + + job->fence = dma_fence_get(&job->base.s_fence->finished); + + if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) { + sync_file = sync_file_create(job->fence); + if (!sync_file) { + ret = -ENOMEM; + } else { + fd_install(out_fence_fd, sync_file->file); + args->fence_fd = out_fence_fd; + } + } + + if (ret) + goto out; + + vm_bind_job_attach_fences(job); + + /* + * The job can be free'd (and fence unref'd) at any point after + * drm_sched_entity_push_job(), so we need to hold our own ref + */ + fence = dma_fence_get(job->fence); + + drm_sched_entity_push_job(&job->base); + + msm_syncobj_reset(syncobjs_to_reset, args->nr_in_syncobjs); + msm_syncobj_process_post_deps(post_deps, args->nr_out_syncobjs, fence); + + dma_fence_put(fence); + +out: + if (ret) + vm_bind_job_unpin_objects(job); + + drm_exec_fini(&exec); +out_unlock: + mutex_unlock(&queue->lock); +out_post_unlock: + if (ret && (out_fence_fd >= 0)) { + put_unused_fd(out_fence_fd); + if (sync_file) + fput(sync_file->file); + } + + if (!IS_ERR_OR_NULL(job)) { + if (ret) + msm_vma_job_free(&job->base); + } else { + /* + * If the submit hasn't yet taken ownership of the queue + * then we need to drop the reference ourself: + */ + msm_submitqueue_put(queue); + } + + if (!IS_ERR_OR_NULL(post_deps)) { + for (i = 0; i < args->nr_out_syncobjs; ++i) { + kfree(post_deps[i].chain); + drm_syncobj_put(post_deps[i].syncobj); + } + kfree(post_deps); + } + + if 
(!IS_ERR_OR_NULL(syncobjs_to_reset)) { + for (i = 0; i < args->nr_in_syncobjs; ++i) { + if (syncobjs_to_reset[i]) + drm_syncobj_put(syncobjs_to_reset[i]); + } + kfree(syncobjs_to_reset); + } + + return ret; } diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 6d6cd1219926..5c67294edc95 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -272,7 +272,10 @@ struct drm_msm_gem_submit_cmd { __u32 size; /* in, cmdstream size */ __u32 pad; __u32 nr_relocs; /* in, number of submit_reloc's */ - __u64 relocs; /* in, ptr to array of submit_reloc's */ + union { + __u64 relocs; /* in, ptr to array of submit_reloc's */ + __u64 iova; /* cmdstream address (for VM_BIND contexts) */ + }; }; /* Each buffer referenced elsewhere in the cmdstream submit (ie. the @@ -339,7 +342,74 @@ struct drm_msm_gem_submit { __u32 nr_out_syncobjs; /* in, number of entries in out_syncobj. */ __u32 syncobj_stride; /* in, stride of syncobj arrays. */ __u32 pad; /* in, reserved for future use, always 0. */ +}; + +#define MSM_VM_BIND_OP_UNMAP 0 +#define MSM_VM_BIND_OP_MAP 1 +#define MSM_VM_BIND_OP_MAP_NULL 2 + +#define MSM_VM_BIND_OP_DUMP 1 +#define MSM_VM_BIND_OP_FLAGS ( \ + MSM_VM_BIND_OP_DUMP | \ + 0) +/** + * struct drm_msm_vm_bind_op - bind/unbind op to run + */ +struct drm_msm_vm_bind_op { + /** @op: one of MSM_VM_BIND_OP_x */ + __u32 op; + /** @handle: GEM object handle, MBZ for UNMAP or MAP_NULL */ + __u32 handle; + /** @obj_offset: Offset into GEM object, MBZ for UNMAP or MAP_NULL */ + __u64 obj_offset; + /** @iova: Address to operate on */ + __u64 iova; + /** @range: Number of bytes to map/unmap */ + __u64 range; + /** @flags: Bitmask of MSM_VM_BIND_OP_FLAG_x */ + __u32 flags; + /** @pad: MBZ */ + __u32 pad; +}; + +#define MSM_VM_BIND_FENCE_FD_IN 0x00000001 +#define MSM_VM_BIND_FENCE_FD_OUT 0x00000002 +#define MSM_VM_BIND_FLAGS ( \ + MSM_VM_BIND_FENCE_FD_IN | \ + MSM_VM_BIND_FENCE_FD_OUT | \ + 0) + +/** + * struct drm_msm_vm_bind - Input of &DRM_IOCTL_MSM_VM_BIND + */ +struct drm_msm_vm_bind { + /** @flags: in, bitmask of MSM_VM_BIND_x */ + __u32 flags; + /** @nr_ops: the number of bind ops in this ioctl */ + __u32 nr_ops; + /** @fence_fd: in/out fence fd (see MSM_VM_BIND_FENCE_FD_IN/OUT) */ + __s32 fence_fd; + /** @queue_id: in, submitqueue id */ + __u32 queue_id; + /** @in_syncobjs: in, ptr to array of drm_msm_gem_syncobj */ + __u64 in_syncobjs; + /** @out_syncobjs: in, ptr to array of drm_msm_gem_syncobj */ + __u64 out_syncobjs; + /** @nr_in_syncobjs: in, number of entries in in_syncobj */ + __u32 nr_in_syncobjs; + /** @nr_out_syncobjs: in, number of entries in out_syncobj */ + __u32 nr_out_syncobjs; + /** @syncobj_stride: in, stride of syncobj arrays */ + __u32 syncobj_stride; + /** @op_stride: sizeof each struct drm_msm_vm_bind_op in @ops */ + __u32 op_stride; + union { + /** @op: used if nr_ops == 1 */ + struct drm_msm_vm_bind_op op; + /** @ops: userptr to array of drm_msm_vm_bind_op if nr_ops > 1 */ + __u64 ops; + }; };
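For illustration, a minimal hypothetical userspace caller of the single-op form, where the op is inlined in the ioctl struct rather than passed via the @ops userptr (error handling, BO creation, and submitqueue setup — which must use a MSM_SUBMITQUEUE_VM_BIND queue — are elided; the helper name and parameters are invented):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include "drm/msm_drm.h"	/* the uapi header shown above */

/* Map a GEM BO at a userspace-chosen GPU address (sketch only). */
static int vm_bind_map(int fd, uint32_t queue_id, uint32_t handle,
		       uint64_t iova, uint64_t size)
{
	struct drm_msm_vm_bind req;

	memset(&req, 0, sizeof(req));
	req.nr_ops = 1;			/* single op: use the inline @op */
	req.queue_id = queue_id;
	req.op.op = MSM_VM_BIND_OP_MAP;
	req.op.handle = handle;
	req.op.iova = iova;		/* must be page-aligned */
	req.op.range = size;		/* must be page-aligned */

	return ioctl(fd, DRM_IOCTL_MSM_VM_BIND, &req);
}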
#define MSM_WAIT_FENCE_BOOST 0x00000001 @@ -435,6 +505,7 @@ struct drm_msm_submitqueue_query { #define DRM_MSM_SUBMITQUEUE_NEW 0x0A #define DRM_MSM_SUBMITQUEUE_CLOSE 0x0B #define DRM_MSM_SUBMITQUEUE_QUERY 0x0C +#define DRM_MSM_VM_BIND 0x0D #define DRM_IOCTL_MSM_GET_PARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_GET_PARAM, struct drm_msm_param) #define DRM_IOCTL_MSM_SET_PARAM DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_SET_PARAM, struct drm_msm_param) @@ -448,6 +519,7 @@ struct drm_msm_submitqueue_query { #define DRM_IOCTL_MSM_SUBMITQUEUE_NEW DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_SUBMITQUEUE_NEW, struct drm_msm_submitqueue) #define DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_SUBMITQUEUE_CLOSE, __u32) #define DRM_IOCTL_MSM_SUBMITQUEUE_QUERY DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_SUBMITQUEUE_QUERY, struct drm_msm_submitqueue_query) +#define DRM_IOCTL_MSM_VM_BIND DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_VM_BIND, struct drm_msm_vm_bind) #if defined(__cplusplus) }

From patchwork Thu Jun 5 18:29:19 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 894495 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Rob Clark , Sean Paul , Konrad Dybcio , Abhinav Kumar , Dmitry Baryshkov , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v6 34/40] drm/msm: Add VM logging for VM_BIND updates Date: Thu, 5 Jun 2025 11:29:19 -0700 Message-ID: <20250605183111.163594-35-robin.clark@oss.qualcomm.com> In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com> References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
From: Rob Clark When userspace opts in to VM_BIND, the submit no longer holds references keeping the VMA alive. This makes it difficult to distinguish between UMD/KMD/app bugs. So add a debug option for logging the most recent VM updates and capturing these in GPU devcoredumps. The submitqueue id is also captured; a value of zero means the operation did not go via a submitqueue (ie. it comes from msm_gem_vm_close() tearing down the remaining mappings when the device file is closed). Signed-off-by: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 11 +++ drivers/gpu/drm/msm/msm_gem.h | 24 +++++ drivers/gpu/drm/msm/msm_gem_vma.c | 124 ++++++++++++++++++++++-- drivers/gpu/drm/msm/msm_gpu.c | 52 +++++++++- drivers/gpu/drm/msm/msm_gpu.h | 4 + 5 files changed, 202 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index efe03f3f42ba..12b42ae2688c 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -837,6 +837,7 @@ void adreno_gpu_state_destroy(struct msm_gpu_state *state) for (i = 0; state->bos && i < state->nr_bos; i++) kvfree(state->bos[i].data); + kfree(state->vm_logs); kfree(state->bos); kfree(state->comm); kfree(state->cmd); @@ -977,6 +978,16 @@ void adreno_show(struct msm_gpu *gpu, struct msm_gpu_state *state, info->ptes[0], info->ptes[1], info->ptes[2], info->ptes[3]); } + if (state->vm_logs) { + drm_puts(p, "vm-log:\n"); + for (i = 0; i < state->nr_vm_logs; i++) { + struct msm_gem_vm_log_entry *e = &state->vm_logs[i]; + drm_printf(p, " - %s:%d: 0x%016llx-0x%016llx\n", + e->op, e->queue_id, e->iova, + e->iova + e->range); + } + } + drm_printf(p, "rbbm-status: 0x%08x\n", state->rbbm_status); drm_puts(p, "ringbuffer:\n"); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index d062722942b5..efbf58594c08 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -24,6 +24,20 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */ +/** + * struct msm_gem_vm_log_entry - An entry in the VM log + * + * For userspace managed VMs, a log of recent VM updates is tracked and + * captured in GPU devcore dumps, to aid debugging issues caused by (for + * example) incorrectly synchronized VM updates + */ +struct msm_gem_vm_log_entry { + const char *op; + uint64_t iova; + uint64_t range; + int queue_id; +}; + /** * struct msm_gem_vm - VM object * @@ -85,6 +99,15 @@ struct msm_gem_vm { /** @last_fence: Fence for last pending work scheduled on the VM */ struct dma_fence *last_fence; + /** @log: A log of recent VM updates */ + struct msm_gem_vm_log_entry *log; + + /** @log_shift: length of @log is (1 << @log_shift) */ + uint32_t log_shift; + + /** 
@log_idx: index of next @log entry to write */ + uint32_t log_idx; + /** @faults: the number of GPU hangs associated with this address space */ int faults; @@ -115,6 +138,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed); void msm_gem_vm_close(struct drm_gpuvm *gpuvm); +void msm_gem_vm_unusable(struct drm_gpuvm *gpuvm); struct msm_fence_context; diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index d50438af15ef..b6760fa9dd82 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -17,6 +17,10 @@ #define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__) +static uint vm_log_shift = 0; +MODULE_PARM_DESC(vm_log_shift, "Length of VM op log"); +module_param_named(vm_log_shift, vm_log_shift, uint, 0600); + /** * struct msm_vm_map_op - create new pgtable mapping */ @@ -31,6 +35,13 @@ struct msm_vm_map_op { struct sg_table *sgt; /** @prot: the mapping protection flags */ int prot; + + /** + * @queue_id: The id of the submitqueue the operation is performed + * on, or zero for (in particular) UNMAP ops triggered outside of + * a submitqueue (ie. process cleanup) + */ + int queue_id; }; /** @@ -41,6 +52,13 @@ struct msm_vm_unmap_op { uint64_t iova; /** @range: size of region to unmap */ uint64_t range; + + /** + * @queue_id: The id of the submitqueue the operation is performed + * on, or zero for (in particular) UNMAP ops triggered outside of + * a submitqueue (ie. process cleanup) + */ + int queue_id; }; /** @@ -144,16 +162,87 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) vm->mmu->funcs->destroy(vm->mmu); dma_fence_put(vm->last_fence); put_pid(vm->pid); + kfree(vm->log); kfree(vm); }
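The log is a power-of-two ring indexed with mask arithmetic, and both the next function and the crashstate capture later in this patch replay it oldest-first. A self-contained model of that replay (illustrative only, not driver code):

#include <stdint.h>
#include <stdio.h>

#define LOG_SHIFT	2
#define LOG_LEN		(1u << LOG_SHIFT)
#define LOG_MASK	(LOG_LEN - 1)

struct entry { const char *op; uint64_t iova; };

int main(void)
{
	struct entry log[LOG_LEN] = { 0 };
	uint32_t idx = 0;

	/* six writes into a four-entry ring: the two oldest are overwritten */
	const char *ops[] = { "map", "map", "unmap", "map", "unmap", "map" };
	for (int i = 0; i < 6; i++) {
		log[idx] = (struct entry){ ops[i], 0x1000ull * i };
		idx = (idx + 1) & LOG_MASK;
	}

	/* once the ring has wrapped, idx points at the oldest entry;
	 * before wrapping, entries 0..idx-1 are the valid ones */
	uint32_t first = log[idx].op ? idx : 0;
	uint32_t nr = log[idx].op ? LOG_LEN : idx;

	for (uint32_t i = 0; i < nr; i++) {
		const struct entry *e = &log[(first + i) & LOG_MASK];
		printf(" - %s: 0x%04llx\n", e->op, (unsigned long long)e->iova);
	}
	return 0;
}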
+/** + * msm_gem_vm_unusable() - Mark a VM as unusable + * @gpuvm: the VM to mark unusable + */ +void +msm_gem_vm_unusable(struct drm_gpuvm *gpuvm) +{ + struct msm_gem_vm *vm = to_msm_vm(gpuvm); + uint32_t vm_log_len = (1 << vm->log_shift); + uint32_t vm_log_mask = vm_log_len - 1; + uint32_t nr_vm_logs; + int first; + + vm->unusable = true; + + /* Bail if no log, or empty log: */ + if (!vm->log || !vm->log[0].op) + return; + + mutex_lock(&vm->mmu_lock); + + /* + * log_idx is the next entry to overwrite, meaning it is the oldest, or + * first, entry (other than the special case handled below where the + * log hasn't wrapped around yet) + */ + first = vm->log_idx; + + if (!vm->log[first].op) { + /* + * If the next log entry has not been written yet, then only + * entries 0 to idx-1 are valid (ie. we haven't wrapped around + * yet) + */ + nr_vm_logs = first; + first = 0; + } else { + nr_vm_logs = vm_log_len; + } + + pr_err("vm-log:\n"); + for (int i = 0; i < nr_vm_logs; i++) { + int idx = (i + first) & vm_log_mask; + struct msm_gem_vm_log_entry *e = &vm->log[idx]; + pr_err(" - %s:%d: 0x%016llx-0x%016llx\n", + e->op, e->queue_id, e->iova, + e->iova + e->range); + } + + mutex_unlock(&vm->mmu_lock); +} + static void -vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) +vm_log(struct msm_gem_vm *vm, const char *op, uint64_t iova, uint64_t range, int queue_id) { + int idx; + if (!vm->managed) lockdep_assert_held(&vm->mmu_lock); - vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); + vm_dbg("%s:%p:%d: %016llx %016llx", op, vm, queue_id, iova, iova + range); + + if (!vm->log) + return; + + idx = vm->log_idx; + vm->log[idx].op = op; + vm->log[idx].iova = iova; + vm->log[idx].range = range; + vm->log[idx].queue_id = queue_id; + vm->log_idx = (vm->log_idx + 1) & ((1 << vm->log_shift) - 1); +} + +static void +vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) +{ + vm_log(vm, "unmap", op->iova, op->range, op->queue_id); vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range); } @@ -161,10 +250,7 @@ vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) static int vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) { - if (!vm->managed) - lockdep_assert_held(&vm->mmu_lock); - - vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); + vm_log(vm, "map", op->iova, op->range, op->queue_id); return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset, op->range, op->prot); @@ -382,6 +468,7 @@ vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map *op) static int msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg) { + struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job; struct drm_gem_object *obj = op->map.gem.obj; struct drm_gpuva *vma; struct sg_table *sgt; @@ -412,6 +499,7 @@ msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg) .range = vma->va.range, .offset = vma->gem.offset, .prot = prot, + .queue_id = job->queue->id, }, .obj = vma->gem.obj, }); @@ -445,6 +533,7 @@ msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg) .unmap = { .iova = unmap_start, .range = unmap_range, + .queue_id = job->queue->id, }, .obj = orig_vma->gem.obj, }); @@ -506,6 +595,7 @@ msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg) static int msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg) { + struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job; struct drm_gpuva *vma = op->unmap.va; struct msm_gem_vma *msm_vma = to_msm_vma(vma); @@ -520,6 +610,7 @@ msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg) .iova = vma->va.addr, .range = vma->va.range, + .queue_id = job->queue->id, }, .obj = vma->gem.obj, }); @@ -584,7 +675,7 @@ msm_vma_job_run(struct drm_sched_job *_job) * now the VM is in an undefined state. Game over! */ if (ret) - vm->unusable = true; + msm_gem_vm_unusable(job->vm); job_foreach_bo (obj, job) { msm_gem_lock(obj); @@ -695,6 +786,23 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, drm_mm_init(&vm->mm, va_start, va_size); + /* + * We don't really need vm log for kernel managed VMs, as the kernel + * is responsible for ensuring that GEM objs are mapped if they are + * used by a submit. Furthermore we piggyback on mmu_lock to serialize + * access to the log. 
+ * + * Limit the max log_shift to 8 to prevent userspace from asking us + * for an unreasonable log size. + */ + if (!managed) + vm->log_shift = MIN(vm_log_shift, 8); + + if (vm->log_shift) { + vm->log = kmalloc_array(1 << vm->log_shift, sizeof(vm->log[0]), + GFP_KERNEL | __GFP_ZERO); + } + return &vm->base; err_free_dummy: @@ -1139,7 +1247,7 @@ vm_bind_job_prepare(struct msm_vm_bind_job *job) * state the vm is in. So throw up our hands! */ if (i > 0) - vm->unusable = true; + msm_gem_vm_unusable(job->vm); return ret; } } diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 8178b6499478..e5896c084c8a 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -259,9 +259,6 @@ static void crashstate_get_bos(struct msm_gpu_state *state, struct msm_gem_submi { extern bool rd_full; - if (!submit) - return; - if (msm_context_is_vmbind(submit->queue->ctx)) { struct drm_exec exec; struct drm_gpuva *vma; @@ -318,6 +315,48 @@ static void crashstate_get_bos(struct msm_gpu_state *state, struct msm_gem_submi } } +static void crashstate_get_vm_logs(struct msm_gpu_state *state, struct msm_gem_vm *vm) +{ + uint32_t vm_log_len = (1 << vm->log_shift); + uint32_t vm_log_mask = vm_log_len - 1; + int first; + + /* Bail if no log, or empty log: */ + if (!vm->log || !vm->log[0].op) + return; + + mutex_lock(&vm->mmu_lock); + + /* + * log_idx is the next entry to overwrite, meaning it is the oldest, or + * first, entry (other than the special case handled below where the + * log hasn't wrapped around yet) + */ + first = vm->log_idx; + + if (!vm->log[first].op) { + /* + * If the next log entry has not been written yet, then only + * entries 0 to idx-1 are valid (ie. we haven't wrapped around + * yet) + */ + state->nr_vm_logs = first; + first = 0; + } else { + state->nr_vm_logs = vm_log_len; + } + + state->vm_logs = kmalloc_array( + state->nr_vm_logs, sizeof(vm->log[0]), GFP_KERNEL); + for (int i = 0; i < state->nr_vm_logs; i++) { + int idx = (i + first) & vm_log_mask; + + state->vm_logs[i] = vm->log[idx]; + } + + mutex_unlock(&vm->mmu_lock); +} + static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, struct msm_gem_submit *submit, char *comm, char *cmd) { @@ -349,7 +388,10 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, msm_iommu_pagetable_walk(mmu, info->iova, info->ptes); } - crashstate_get_bos(state, submit); + if (submit) { + crashstate_get_vm_logs(state, to_msm_vm(submit->vm)); + crashstate_get_bos(state, submit); + } /* Set the active crash state to be dumped on failure */ gpu->crashstate = state; @@ -449,7 +491,7 @@ static void recover_worker(struct kthread_work *work) * VM_BIND) */ if (!vm->managed) - vm->unusable = true; + msm_gem_vm_unusable(submit->vm); } get_comm_cmdline(submit, &comm, &cmd); diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 9cbf155ff222..31b83e9e3673 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -20,6 +20,7 @@ #include "msm_gem.h" struct msm_gem_submit; +struct msm_gem_vm_log_entry; struct msm_gpu_perfcntr; struct msm_gpu_state; struct msm_context; @@ -609,6 +610,9 @@ struct msm_gpu_state { struct msm_gpu_fault_info fault_info; + int nr_vm_logs; + struct msm_gem_vm_log_entry *vm_logs; + int nr_bos; struct msm_gpu_state_bo *bos; };

From patchwork Thu Jun 5 18:29:20 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 894496
From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v6 35/40] drm/msm: Add VMA unmap reason Date: Thu, 5 Jun 2025 11:29:20 -0700 Message-ID: <20250605183111.163594-36-robin.clark@oss.qualcomm.com> In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com> References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com> From: Rob Clark Make the VM 
log a bit more useful by providing a reason for the unmap (ie. closing VM vs evict/purge, etc) Signed-off-by: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 20 +++++++++++--------- drivers/gpu/drm/msm/msm_gem.h | 2 +- drivers/gpu/drm/msm/msm_gem_vma.c | 15 ++++++++++++--- 3 files changed, 24 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index fea13a993629..e415e6e32a59 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -47,7 +47,8 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file) return 0; } -static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close); +static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, + bool close, const char *reason); static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm) { @@ -61,7 +62,7 @@ static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm) drm_gpuvm_bo_for_each_va (vma, vm_bo) { if (vma->vm != vm) continue; - msm_gem_vma_unmap(vma); + msm_gem_vma_unmap(vma, "detach"); msm_gem_vma_close(vma); break; } @@ -101,7 +102,7 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) MAX_SCHEDULE_TIMEOUT); msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm); - put_iova_spaces(obj, ctx->vm, true); + put_iova_spaces(obj, ctx->vm, true, "close"); detach_vm(obj, ctx->vm); drm_exec_fini(&exec); /* drop locks */ } @@ -429,7 +430,8 @@ static struct drm_gpuva *lookup_vma(struct drm_gem_object *obj, * mapping. */ static void -put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close) +put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, + bool close, const char *reason) { struct drm_gpuvm_bo *vm_bo, *tmp; @@ -444,7 +446,7 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close) drm_gpuvm_bo_get(vm_bo); drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { - msm_gem_vma_unmap(vma); + msm_gem_vma_unmap(vma, reason); if (close) msm_gem_vma_close(vma); } @@ -625,7 +627,7 @@ static int clear_iova(struct drm_gem_object *obj, if (!vma) return 0; - msm_gem_vma_unmap(vma); + msm_gem_vma_unmap(vma, NULL); msm_gem_vma_close(vma); return 0; @@ -837,7 +839,7 @@ void msm_gem_purge(struct drm_gem_object *obj) GEM_WARN_ON(!is_purgeable(msm_obj)); /* Get rid of any iommu mapping(s): */ - put_iova_spaces(obj, NULL, false); + put_iova_spaces(obj, NULL, false, "purge"); msm_gem_vunmap(obj); @@ -875,7 +877,7 @@ void msm_gem_evict(struct drm_gem_object *obj) GEM_WARN_ON(is_unevictable(msm_obj)); /* Get rid of any iommu mapping(s): */ - put_iova_spaces(obj, NULL, false); + put_iova_spaces(obj, NULL, false, "evict"); drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); @@ -1087,7 +1089,7 @@ static void msm_gem_free_object(struct drm_gem_object *obj) drm_exec_retry_on_contention(&exec); } } - put_iova_spaces(obj, NULL, true); + put_iova_spaces(obj, NULL, true, "free"); drm_exec_fini(&exec); /* drop locks */ } diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index efbf58594c08..57252b5e08d0 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -168,7 +168,7 @@ struct msm_gem_vma { struct drm_gpuva * msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj, u64 offset, u64 range_start, u64 range_end); -void msm_gem_vma_unmap(struct drm_gpuva *vma); +void msm_gem_vma_unmap(struct drm_gpuva *vma, const char *reason); int 
msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt); void msm_gem_vma_close(struct drm_gpuva *vma); diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index b6760fa9dd82..b6de87e5c3f7 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -53,6 +53,9 @@ struct msm_vm_unmap_op { /** @range: size of region to unmap */ uint64_t range; + /** @reason: The reason for the unmap */ + const char *reason; + /** * @queue_id: The id of the submitqueue the operation is performed * on, or zero for (in particular) UNMAP ops triggered outside of @@ -242,7 +245,12 @@ vm_log(struct msm_gem_vm *vm, const char *op, uint64_t iova, uint64_t range, int static void vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) { - vm_log(vm, "unmap", op->iova, op->range, op->queue_id); + const char *reason = op->reason; + + if (!reason) + reason = "unmap"; + + vm_log(vm, reason, op->iova, op->range, op->queue_id); vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range); } @@ -257,7 +265,7 @@ vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) } /* Actually unmap memory for the vma */ -void msm_gem_vma_unmap(struct drm_gpuva *vma) +void msm_gem_vma_unmap(struct drm_gpuva *vma, const char *reason) { struct msm_gem_vm *vm = to_msm_vm(vma->vm); struct msm_gem_vma *msm_vma = to_msm_vma(vma); @@ -277,6 +285,7 @@ void msm_gem_vma_unmap(struct drm_gpuva *vma) vm_unmap_op(vm, &(struct msm_vm_unmap_op){ .iova = vma->va.addr, .range = vma->va.range, + .reason = reason, }); if (!vm->managed) @@ -863,7 +872,7 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm) drm_exec_retry_on_contention(&exec); } - msm_gem_vma_unmap(vma); + msm_gem_vma_unmap(vma, "close"); msm_gem_vma_close(vma); if (obj) { 
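With reasons threaded through, a hypothetical devcoredump vm-log excerpt (all addresses and queue ids invented) would read something like:

vm-log:
 - map:3: 0x0000000100400000-0x0000000100500000
 - map:3: 0x0000000100500000-0x0000000100600000
 - unmap:3: 0x0000000100400000-0x0000000100500000
 - evict:0: 0x0000000100a00000-0x0000000100c00000
 - close:0: 0x0000000100000000-0x0000000800000000

where the string is the unmap reason (or "map"), the number after the colon is the submitqueue id (zero meaning the op did not come via a submitqueue), and the range is the affected iova span.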
smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=qualcomm.com header.i=@qualcomm.com header.b="LhVw0kSf" Received: from pps.filterd (m0279870.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 555GaJA2032471 for ; Thu, 5 Jun 2025 18:33:52 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qualcomm.com; h= cc:content-transfer-encoding:date:from:in-reply-to:message-id :mime-version:references:subject:to; s=qcppdkim1; bh=E0CQf8ECEko EZPuzosk2AQuQ/2hYymsahlQVM4KlO5E=; b=LhVw0kSfCagt1hE3EbqQFhcSwTz jNIZhPETxbdWpiKNWh6SlWzzz8pI2sQz1VU63FbymNH1wz6vxjP1dwplO0yJROus y8EVElidhgJ0V9BMT5abry+KJwCJ4ngUa2xfOUn/EYiEJ45U0tCXivKmC9GT0gzq rWnTgT4ANyrHfr8X7aPcZhSc10Lek6wcad3/S2kSDCaWYUmtX9wYwfOnZZpUexEf qXy/w/xsVRRcyRuyHw8Aq60gJve0uaHzmlIja7UJFPdzEkOE+tpuBLyYtBm6CifT 8AnkqBdcBuP3OlmoOhHvGkhk+71q7Cvvq7yMI3xC0WhD3JxgpQTuvj3c0JA== Received: from mail-pj1-f70.google.com (mail-pj1-f70.google.com [209.85.216.70]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 471sfv17y3-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Thu, 05 Jun 2025 18:33:51 +0000 (GMT) Received: by mail-pj1-f70.google.com with SMTP id 98e67ed59e1d1-310efe825ccso1408276a91.3 for ; Thu, 05 Jun 2025 11:33:50 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1749148400; x=1749753200; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=E0CQf8ECEkoEZPuzosk2AQuQ/2hYymsahlQVM4KlO5E=; b=U6r5uMMM2np8Y51sZs7qzBbBSXKltbBWL4TpGibGJeTDwZAnzBNO5YcDjAcErzeyzG /Z2cM3Qkqx/J9jo4wuAFrPPM5dJDTgAF51ClwLFu6k6KKPXWEk2LlZM6R3CbZqqE4arm NeZANSivP2FIe4v3/7tquibQ77g4DIItc91B5OoaWLebGO81FonQM9XNV1DXWAmixNgA CgXFejolQzweSNVaA2MOX2DVsIDibzpwbLImGYxbcoA7FCMz7dr+D9xSLVvtWrwQg/s7 NrlZqrdyaUYXaMnOSd6uO+TwtXCwk9HD2QMuWWa75rJ8hVdatR/eP3bQHLr2q/EK4Zam 5zIA== X-Forwarded-Encrypted: i=1; AJvYcCVdmBg0T5LyJ7BYu02vDZd7GpeZQQqmqFJZIVfAIA9jRH/KZi/+AmTmnlUwusqaLrM/1k3OInilIBBBR+cS@vger.kernel.org X-Gm-Message-State: AOJu0Yz8JWiPoUQilnC/qgNxxxP/EhKRL3BxdTSB3jaxnYGUl9Bz8YN/ N1k7iBDPSD2wAVXRT0B3GbaYQbhjm9L0y94s+7JTPPhyw6sEQ3GBh2/68t0ltt3F61Suvo/51Yu aNqEqZvnxkzZO1Hl2tPY/o+yWnQdfiavSL9Yk+6tP6XdJggh9u3gblyRGKd+Lu+4eH6Z/ X-Gm-Gg: ASbGncufR7xfbIckoncHYFvhDhetRn3VScFGGhL393K0ZGCTk8MNiy04Yr7w0JbaMMd 6Iz4OQRdv1QpejfajuI2WMSGTEYEozTcNnFwBVbUkbKyyoPA/yavGCyR90cmcFYDMAV0PhAw17p 8088Axsx6+GkoGNzR3t0iEgGemIxrOnXpoHyYX1Zk7FxBlNKYNCidYg31EYktn+x8WC5xTIMtHb nW4jOtlDH+yxmfr4gMrxfCEM71i88wmIvJFOlyvE0ZJD/c0mYZC7g1plXcOnKOnSqoDhcTOqL/H 4qG3Ww0Cj4+IDdrdJDE4tw== X-Received: by 2002:a17:90b:4e86:b0:311:b0ec:135f with SMTP id 98e67ed59e1d1-3134706f8efmr1045860a91.30.1749148399698; Thu, 05 Jun 2025 11:33:19 -0700 (PDT) X-Google-Smtp-Source: AGHT+IFtRHbayHnOG/QPhFRjld2rJq5hRHHtm7rPI8PxA2Bd1CySo0EJuFhDV1n0cRcHKai1dDS1Qw== X-Received: by 2002:a17:90b:4e86:b0:311:b0ec:135f with SMTP id 98e67ed59e1d1-3134706f8efmr1045803a91.30.1749148399107; Thu, 05 Jun 2025 11:33:19 -0700 (PDT) Received: from localhost ([2601:1c0:5000:d5c:89fa:e299:1a34:c1f5]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-31349fc373esm61635a91.32.2025.06.05.11.33.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 05 Jun 2025 11:33:18 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , 
David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v6 36/40] drm/msm: Add mmu prealloc tracepoint Date: Thu, 5 Jun 2025 11:29:21 -0700 Message-ID: <20250605183111.163594-37-robin.clark@oss.qualcomm.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com> References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com> Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Authority-Analysis: v=2.4 cv=CY8I5Krl c=1 sm=1 tr=0 ts=6841e30f cx=c_pps a=0uOsjrqzRL749jD1oC5vDA==:117 a=xqWC_Br6kY4A:10 a=6IFa9wvqVegA:10 a=cm27Pg_UAAAA:8 a=EUspDBNiAAAA:8 a=W9C9WuCMp67TlgULjysA:9 a=mQ_c8vxmzFEMiUWkPHU9:22 X-Proofpoint-ORIG-GUID: A37RJQ8bSwfIw-SQUYlQTOfu8UARF0-s X-Proofpoint-Spam-Details-Enc: AW1haW4tMjUwNjA1MDE2NiBTYWx0ZWRfX6oGviahNUhM7 UddTdVhnGptSKExdqnrciGayHmJV0JzFos5yj0n4xkVU1WxN/Jz7AV7LJF7N3fOA6etMsj9p8r+ sSg1xMmXve+tjygMeNGnk9SDDm20j0CZPDsD99EjvePBG0hPwe+JYB2Tj4SE6FbXEkyqszfMzek /uT15QIBzRGOJro7pC5ITEAJ0oVNZ70+tJdOfWe0WkjcGcHUFYx+P4BXFCDnKDtT+cany2phCR/ T3G/HIUHjcnZu82XC2c/kR0vJaxuLR7odVCO5VG2y6jAQ2F4ErIT4AbRt7A7UAh+ERuCrGxeegz iCcBNLm9TXyVXnM5SEnPiJGeN7rG2Gne22wMLIWO75WbuBB3UD2NsllP7cSh5ZzwHSx7m4kZ3Oo Ap7mZ6/S1Br19yZVUp+NICLTPEwauO6klwdwNPhbgHywtfxpeYUNwrFhMV1+7hWowiBeE4xD X-Proofpoint-GUID: A37RJQ8bSwfIw-SQUYlQTOfu8UARF0-s X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1099,Hydra:6.0.736,FMLib:17.12.80.40 definitions=2025-06-05_05,2025-06-05_01,2025-03-28_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 mlxscore=0 adultscore=0 bulkscore=0 suspectscore=0 malwarescore=0 clxscore=1015 lowpriorityscore=0 spamscore=0 impostorscore=0 phishscore=0 mlxlogscore=999 classifier=spam authscore=0 authtc=n/a authcc= route=outbound adjust=0 reason=mlx scancount=1 engine=8.19.0-2505280000 definitions=main-2506050166 From: Rob Clark So we can monitor how many pages are getting preallocated vs how many get used. 
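The tracepoint pairs the number of pages preallocated for a job with
the number left over at cleanup. A minimal sketch of what a consumer
of this event might compute from those two fields (the helper below is
an illustrative assumption, not part of the patch):

	#include <linux/types.h>

	/*
	 * Sketch: derive pagetable-prealloc utilization (in percent)
	 * from the count/remaining pair that
	 * trace_msm_mmu_prealloc_cleanup() reports.  'count' is how
	 * many pages were preallocated, 'remaining' how many were
	 * never used and got bulk-freed.
	 */
	static unsigned int prealloc_utilization(u32 count, u32 remaining)
	{
		if (count == 0)
			return 0;	/* nothing was preallocated */

		return ((count - remaining) * 100) / count;
	}

A consistently low ratio across many jobs would suggest the prealloc
heuristic is overcounting, which is exactly what this instrumentation
is meant to make visible.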
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gpu_trace.h | 14 ++++++++++++++
 drivers/gpu/drm/msm/msm_iommu.c     |  4 ++++
 2 files changed, 18 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_gpu_trace.h b/drivers/gpu/drm/msm/msm_gpu_trace.h
index 7f863282db0d..781bbe5540bd 100644
--- a/drivers/gpu/drm/msm/msm_gpu_trace.h
+++ b/drivers/gpu/drm/msm/msm_gpu_trace.h
@@ -205,6 +205,20 @@ TRACE_EVENT(msm_gpu_preemption_irq,
 		TP_printk("preempted to %u", __entry->ring_id)
 );

+TRACE_EVENT(msm_mmu_prealloc_cleanup,
+		TP_PROTO(u32 count, u32 remaining),
+		TP_ARGS(count, remaining),
+		TP_STRUCT__entry(
+			__field(u32, count)
+			__field(u32, remaining)
+			),
+		TP_fast_assign(
+			__entry->count = count;
+			__entry->remaining = remaining;
+			),
+		TP_printk("count=%u, remaining=%u", __entry->count, __entry->remaining)
+);
+
 #endif

 #undef TRACE_INCLUDE_PATH
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index bfee3e0dcb23..09fd99ac06f6 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include "msm_drv.h"
+#include "msm_gpu_trace.h"
 #include "msm_mmu.h"

 struct msm_iommu {
@@ -346,6 +347,9 @@ msm_iommu_pagetable_prealloc_cleanup(struct msm_mmu *mmu, struct msm_mmu_preallo
 	struct kmem_cache *pt_cache = get_pt_cache(mmu);
 	uint32_t remaining_pt_count = p->count - p->ptr;

+	if (p->count > 0)
+		trace_msm_mmu_prealloc_cleanup(p->count, remaining_pt_count);
+
 	kmem_cache_free_bulk(pt_cache, remaining_pt_count, &p->pages[p->ptr]);
 	kvfree(p->pages);
 }

From patchwork Thu Jun 5 18:29:22 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894226
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 37/40] drm/msm: use trylock for debugfs
Date: Thu, 5 Jun 2025 11:29:22 -0700
Message-ID: <20250605183111.163594-38-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

This resolves a potential deadlock vs msm_gem_vm_close(): for _NO_SHARE
buffers, msm_gem_describe() could be trying to acquire the shared vm
resv while already holding priv->obj_lock, but _vm_close() might drop
the last reference to a GEM obj while already holding the vm resv, and
msm_gem_free_object() needs to grab priv->obj_lock, a locking
inversion.

OTOH this is only for debugfs, and it isn't critical if we undercount
by skipping a locked obj. So just use trylock() and move along if we
can't get the lock.
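The trylock-and-skip idiom is the standard way to break this kind of
inversion when the reader is best-effort. A minimal sketch, assuming a
debugfs walker that iterates objects under some list lock while each
object is protected by its dma_resv lock (names below are illustrative,
not the driver's exact code):

	#include <linux/dma-resv.h>
	#include <linux/seq_file.h>
	#include <drm/drm_gem.h>

	/*
	 * Sketch: the stats walker already holds a list lock, and the
	 * free path can take that list lock while holding an object's
	 * resv.  Blocking on the resv here would invert the two lock
	 * orders, so a contended object is simply skipped; the only
	 * cost is undercounting in debugfs output.
	 */
	static void describe_obj_sketch(struct drm_gem_object *obj,
					struct seq_file *m)
	{
		if (!dma_resv_trylock(obj->resv))
			return;		/* contended: skip, don't deadlock */

		seq_printf(m, "size: %zu\n", obj->size);

		dma_resv_unlock(obj->resv);
	}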
Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c | 3 ++-
 drivers/gpu/drm/msm/msm_gem.h | 6 ++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index e415e6e32a59..b882647144bb 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -946,7 +946,8 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
 	uint64_t off = drm_vma_node_start(&obj->vma_node);
 	const char *madv;

-	msm_gem_lock(obj);
+	if (!msm_gem_trylock(obj))
+		return;

 	stats->all.count++;
 	stats->all.size += obj->size;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 57252b5e08d0..9671c4299cf8 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -280,6 +280,12 @@ msm_gem_lock(struct drm_gem_object *obj)
 	dma_resv_lock(obj->resv, NULL);
 }

+static inline bool __must_check
+msm_gem_trylock(struct drm_gem_object *obj)
+{
+	return dma_resv_trylock(obj->resv);
+}
+
 static inline int
 msm_gem_lock_interruptible(struct drm_gem_object *obj)
 {

From patchwork Thu Jun 5 18:29:23 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894493
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 38/40] drm/msm: Bump UAPI version
Date: Thu, 5 Jun 2025 11:29:23 -0700
Message-ID: <20250605183111.163594-39-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

From: Rob Clark

Bump version to signal to userspace that VM_BIND is supported.

Signed-off-by: Rob Clark
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index bdf775897de8..710046906229 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -41,9 +41,10 @@
  * - 1.10.0 - Add MSM_SUBMIT_BO_NO_IMPLICIT
  * - 1.11.0 - Add wait boost (MSM_WAIT_FENCE_BOOST, MSM_PREP_BOOST)
  * - 1.12.0 - Add MSM_INFO_SET_METADATA and MSM_INFO_GET_METADATA
+ * - 1.13.0 - Add VM_BIND
  */
 #define MSM_VERSION_MAJOR	1
-#define MSM_VERSION_MINOR	12
+#define MSM_VERSION_MINOR	13
 #define MSM_VERSION_PATCHLEVEL	0

 bool dumpstate;

From patchwork Thu Jun 5 18:29:24 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894494
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter, Sumit Semwal,
 Christian König, linux-kernel@vger.kernel.org (open list),
 linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK),
 linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v6 39/40] drm/msm: Defer VMA unmap for fb unpins
Date: Thu, 5 Jun 2025 11:29:24 -0700
Message-ID: <20250605183111.163594-40-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>

With the conversion to drm_gpuvm, we lost the lazy VMA cleanup, which
means that fb cleanup/unpin when pageflipping to new scanout buffers
immediately unmaps the scanout buffer. This is costly (with tlbinv, it
can be 4-6ms for a 1080p scanout buffer, and more for higher
resolutions)!

To avoid this, introduce a vma_ref, which is incremented for scanout
and whenever userspace has a GEM handle or dma-buf fd. When unpinning,
if the vm is the kms->vm, we defer tearing down the VMA until the
vma_ref drops to zero. If the buffer is still part of a flip-chain,
then userspace will be holding some sort of reference to the BO, either
via a GEM handle and/or dma-buf fd, so this avoids unmapping the VMA
when there is a strong possibility that it will be needed again.
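The rule of thumb is that vma_ref counts "reasons the mapping may be
needed again soon", and the kms->vm VMA is only torn down on the final
put. A hedged sketch of the get/put pairing described above (the
teardown helper below is hypothetical; the real sites are in the diff
that follows):

	/*
	 * Sketch of the vma_ref lifecycle:
	 *
	 *   GEM handle opened  -> get      handle closed    -> put
	 *   dma-buf exported   -> get      dma-buf released -> put
	 *   pinned for scanout -> get      unpinned         -> put
	 *
	 * Only the last put actually unmaps the kms->vm VMA:
	 */
	static void teardown_kms_vma_sketch(struct msm_gem_object *msm_obj);

	static void vma_put_sketch(struct msm_gem_object *msm_obj)
	{
		if (atomic_dec_return(&msm_obj->vma_ref))
			return;	/* still in use: keep the VMA mapped (lazy) */

		/* last user gone: now tear down the kms->vm mapping */
		teardown_kms_vma_sketch(msm_obj);	/* hypothetical helper */
	}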
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c        | 77 +++++++++++++++++++---------
 drivers/gpu/drm/msm/msm_gem.h        | 29 +++++++++++
 drivers/gpu/drm/msm/msm_gem_prime.c  | 35 ++++++++++++-
 drivers/gpu/drm/msm/msm_gem_submit.c |  8 +++
 4 files changed, 124 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index b882647144bb..55a409ac72f5 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -19,11 +19,11 @@
 #include "msm_drv.h"
 #include "msm_gem.h"
 #include "msm_gpu.h"
+#include "msm_kms.h"

 static int pgprot = 0;
 module_param(pgprot, int, 0600);
-
 static void update_device_mem(struct msm_drm_private *priv, ssize_t size)
 {
 	uint64_t total_mem = atomic64_add_return(size, &priv->total_mem);
@@ -43,6 +43,7 @@ static void update_ctx_mem(struct drm_file *file, ssize_t size)

 static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file)
 {
+	msm_gem_vma_get(obj);
 	update_ctx_mem(file, obj->size);
 	return 0;
 }
@@ -50,33 +51,13 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file)
 static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm,
 			    bool close, const char *reason);

-static void detach_vm(struct drm_gem_object *obj, struct drm_gpuvm *vm)
-{
-	msm_gem_assert_locked(obj);
-	drm_gpuvm_resv_assert_held(vm);
-
-	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_find(vm, obj);
-	if (vm_bo) {
-		struct drm_gpuva *vma;
-
-		drm_gpuvm_bo_for_each_va (vma, vm_bo) {
-			if (vma->vm != vm)
-				continue;
-			msm_gem_vma_unmap(vma, "detach");
-			msm_gem_vma_close(vma);
-			break;
-		}
-
-		drm_gpuvm_bo_put(vm_bo);
-	}
-}
-
 static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 {
 	struct msm_context *ctx = file->driver_priv;
 	struct drm_exec exec;

 	update_ctx_mem(file, -obj->size);
+	msm_gem_vma_put(obj);

 	/*
 	 * If VM isn't created yet, nothing to cleanup.  And in fact calling
@@ -103,10 +84,47 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)

 	msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm);
 	put_iova_spaces(obj, ctx->vm, true, "close");
-	detach_vm(obj, ctx->vm);
 	drm_exec_fini(&exec); /* drop locks */
 }

+/*
+ * Get/put for kms->vm VMA
+ */
+
+void msm_gem_vma_get(struct drm_gem_object *obj)
+{
+	atomic_inc(&to_msm_bo(obj)->vma_ref);
+}
+
+void msm_gem_vma_put(struct drm_gem_object *obj)
+{
+	struct msm_drm_private *priv = obj->dev->dev_private;
+	struct drm_exec exec;
+
+	if (atomic_dec_return(&to_msm_bo(obj)->vma_ref))
+		return;
+
+	if (!priv->kms)
+		return;
+
+	msm_gem_lock_vm_and_obj(&exec, obj, priv->kms->vm);
+	put_iova_spaces(obj, priv->kms->vm, true, "vma_put");
+	drm_exec_fini(&exec); /* drop locks */
+}
+
+static void msm_gem_vma_put_locked(struct drm_gem_object *obj)
+{
+	struct msm_drm_private *priv = obj->dev->dev_private;
+
+	if (atomic_dec_return(&to_msm_bo(obj)->vma_ref))
+		return;
+
+	if (!priv->kms)
+		return;
+
+	put_iova_spaces(obj, priv->kms->vm, true, "vma_put");
+}
+
 /*
  * Cache sync.. this is a bit over-complicated, to fit dma-mapping
  * API. Really GPU cache is out of scope here (handled on cmdstream)
@@ -281,6 +299,7 @@ void msm_gem_pin_obj_locked(struct drm_gem_object *obj)
 	msm_gem_assert_locked(obj);

 	to_msm_bo(obj)->pin_count++;
+	msm_gem_vma_get(obj);
 	drm_gem_lru_move_tail_locked(&priv->lru.pinned, obj);
 }
@@ -518,6 +537,8 @@ void msm_gem_unpin_locked(struct drm_gem_object *obj)

 	msm_gem_assert_locked(obj);

+	msm_gem_vma_put_locked(obj);
+
 	mutex_lock(&priv->lru.lock);
 	msm_obj->pin_count--;
 	GEM_WARN_ON(msm_obj->pin_count < 0);
@@ -664,6 +685,13 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
 	return ret;
 }

+static bool is_kms_vm(struct drm_gpuvm *vm)
+{
+	struct msm_drm_private *priv = vm->drm->dev_private;
+
+	return priv->kms && (priv->kms->vm == vm);
+}
+
 /*
  * Unpin a iova by updating the reference counts. The memory isn't actually
  * purged until something else (shrinker, mm_notifier, destroy, etc) decides
@@ -679,7 +707,8 @@ void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm)
 	if (vma) {
 		msm_gem_unpin_locked(obj);
 	}
-	detach_vm(obj, vm);
+	if (!is_kms_vm(vm))
+		put_iova_spaces(obj, vm, true, "close");
 	drm_exec_fini(&exec); /* drop locks */
 }
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 9671c4299cf8..fafb221e173b 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -211,9 +211,38 @@ struct msm_gem_object {
 	 * Protected by LRU lock.
 	 */
 	int pin_count;
+
+	/**
+	 * @vma_ref: Reference count of VMA users.
+	 *
+	 * With the vm_bo/vma holding a reference to the GEM object, we'd
+	 * otherwise have to actively tear down a VMA when, for example,
+	 * a buffer is unpinned for scanout, vs. the pre-drm_gpuvm approach
+	 * where a VMA did not hold a reference to the BO, but instead was
+	 * implicitly torn down when the BO was freed.
+	 *
+	 * To regain the lazy VMA teardown, we use the @vma_ref.  It is
+	 * incremented for any of the following:
+	 *
+	 * 1) the BO is pinned for scanout/etc
+	 * 2) the BO is exported as a dma_buf
+	 * 3) the BO has an open userspace handle
+	 *
+	 * All of those conditions will hold a reference to the BO,
+	 * preventing it from being freed.  So lazily keeping around the
+	 * VMA will not prevent the BO from being freed.  (Or rather, the
+	 * reference loop is harmless in this case.)
+	 *
+	 * When the @vma_ref drops to zero, then the kms->vm VMA will be
+	 * torn down.
+	 */
+	atomic_t vma_ref;
 };

 #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)

+void msm_gem_vma_get(struct drm_gem_object *obj);
+void msm_gem_vma_put(struct drm_gem_object *obj);
+
 uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
 int msm_gem_prot(struct drm_gem_object *obj);
 int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma);
diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
index 1a6d8099196a..43f264d3cfa9 100644
--- a/drivers/gpu/drm/msm/msm_gem_prime.c
+++ b/drivers/gpu/drm/msm/msm_gem_prime.c
@@ -6,6 +6,7 @@

 #include
+#include
 #include "msm_drv.h"
@@ -48,13 +49,45 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 	return msm_gem_import(dev, attach->dmabuf, sg);
 }

+static void msm_gem_dmabuf_release(struct dma_buf *dma_buf)
+{
+	struct drm_gem_object *obj = dma_buf->priv;
+
+	msm_gem_vma_put(obj);
+	drm_gem_dmabuf_release(dma_buf);
+}
+
+static const struct dma_buf_ops msm_gem_prime_dmabuf_ops = {
+	.cache_sgt_mapping = true,
+	.attach = drm_gem_map_attach,
+	.detach = drm_gem_map_detach,
+	.map_dma_buf = drm_gem_map_dma_buf,
+	.unmap_dma_buf = drm_gem_unmap_dma_buf,
+	.release = msm_gem_dmabuf_release,
+	.mmap = drm_gem_dmabuf_mmap,
+	.vmap = drm_gem_dmabuf_vmap,
+	.vunmap = drm_gem_dmabuf_vunmap,
+};
+
 struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags)
 {
 	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
 		return ERR_PTR(-EPERM);

-	return drm_gem_prime_export(obj, flags);
+	msm_gem_vma_get(obj);
+
+	struct drm_device *dev = obj->dev;
+	struct dma_buf_export_info exp_info = {
+		.exp_name = KBUILD_MODNAME, /* white lie for debug */
+		.owner = dev->driver->fops->owner,
+		.ops = &msm_gem_prime_dmabuf_ops,
+		.size = obj->size,
+		.flags = flags,
+		.priv = obj,
+		.resv = obj->resv,
+	};
+
+	return drm_gem_dmabuf_export(dev, &exp_info);
 }

 int msm_gem_prime_pin(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 8a0f5b5eda30..bf9010da7e58 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -527,6 +527,14 @@ void msm_submit_retire(struct msm_gem_submit *submit)
 		struct drm_gem_object *obj = submit->bos[i].obj;
 		struct drm_gpuvm_bo *vm_bo = submit->bos[i].vm_bo;

+		/*
+		 * msm_gem_unpin_active() doesn't drop the vma ref, because
+		 * that requires grabbing locks which we cannot grab in the
+		 * fence signaling path.  So we have to do that here.
+		 */
+		if (submit->bos_pinned)
+			msm_gem_vma_put(obj);
+
 		drm_gem_object_put(obj);
 		drm_gpuvm_bo_put(vm_bo);
 	}

From patchwork Thu Jun 5 18:29:25 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 894222
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v6 40/40] drm/msm: Add VM_BIND throttling
Date: Thu, 5 Jun 2025 11:29:25 -0700
Message-ID: <20250605183111.163594-41-robin.clark@oss.qualcomm.com>
In-Reply-To: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
References: <20250605183111.163594-1-robin.clark@oss.qualcomm.com>
A large number of (unsorted or separate) small (<2MB) mappings can
cause a lot of, probably unnecessary, prealloc pages. For example, a
single 4k mapping will pre-allocate 3 pages (for levels 2-4) of the
pagetable, which can chew up a large amount of unneeded memory. So add
a mechanism to put an upper bound on the number of pre-allocated pages.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem_vma.c | 23 +++++++++++++++++++++--
 drivers/gpu/drm/msm/msm_gpu.h     |  3 +++
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index b6de87e5c3f7..83f6f95b4865 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -705,6 +705,8 @@ msm_vma_job_free(struct drm_sched_job *_job)

 	mmu->funcs->prealloc_cleanup(mmu, &job->prealloc);

+	atomic_sub(job->prealloc.count, &job->queue->in_flight_prealloc);
+
 	drm_sched_job_cleanup(_job);

 	job_foreach_bo (obj, job)
@@ -1087,10 +1089,11 @@ ops_are_same_pte(struct msm_vm_bind_op *first, struct msm_vm_bind_op *next)
 * them as a single mapping.  Otherwise the prealloc_count() will not realize
 * they can share pagetable pages and vastly overcount.
 */
-static void
+static int
 vm_bind_prealloc_count(struct msm_vm_bind_job *job)
 {
 	struct msm_vm_bind_op *first = NULL, *last = NULL;
+	int ret;

 	for (int i = 0; i < job->nr_ops; i++) {
 		struct msm_vm_bind_op *op = &job->ops[i];
@@ -1119,6 +1122,20 @@ vm_bind_prealloc_count(struct msm_vm_bind_job *job)

 	/* Flush the remaining range: */
 	prealloc_count(job, first, last);
+
+	/*
+	 * Now that we know the needed amount to pre-alloc, throttle on pending
+	 * VM_BIND jobs if we already have too much pre-alloc memory in flight
+	 */
+	ret = wait_event_interruptible(
+			to_msm_vm(job->vm)->sched.job_scheduled,
+			atomic_read(&job->queue->in_flight_prealloc) <= 1024);
+	if (ret)
+		return ret;
+
+	atomic_add(job->prealloc.count, &job->queue->in_flight_prealloc);
+
+	return 0;
 }

 /*
@@ -1389,7 +1406,9 @@ msm_ioctl_vm_bind(struct drm_device *dev, void *data, struct drm_file *file)
 	if (ret)
 		goto out_unlock;

-	vm_bind_prealloc_count(job);
+	ret = vm_bind_prealloc_count(job);
+	if (ret)
+		goto out_unlock;

 	struct drm_exec exec;
 	unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT;
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 31b83e9e3673..5508885d865f 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -555,6 +555,8 @@ static inline int msm_gpu_convert_priority(struct msm_gpu *gpu, int prio,
 *             seqno, protected by submitqueue lock
 * @idr_lock:  for serializing access to fence_idr
 * @lock:      submitqueue lock for serializing submits on a queue
+ * @in_flight_prealloc: for VM_BIND queue, # of preallocated pgtable pages for
+ *             queued VM_BIND jobs
 * @ref:       reference count
 * @entity:    the submit job-queue
 */
@@ -569,6 +571,7 @@ struct msm_gpu_submitqueue {
 	struct idr fence_idr;
 	struct spinlock idr_lock;
 	struct mutex lock;
+	atomic_t in_flight_prealloc;
 	struct kref ref;
 	struct drm_sched_entity *entity;
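The arithmetic behind the throttle is worth spelling out. With a
multi-level pagetable, a small mapping that shares no intermediate
levels with its neighbors needs one fresh page per non-root level, so
scattered 4k mappings preallocate pages far faster than they consume
address space. A sketch of that worst case (the 4-level constant
mirrors the "levels 2-4" note in the commit message; the helper is
illustrative, not driver code):

	/*
	 * Sketch: worst-case pagetable prealloc for N mappings that
	 * share no intermediate pagetable pages.  The root (level 1)
	 * always exists, so each mapping may need a fresh page for
	 * each of levels 2..4, that is, 3 pages per 4k mapping.
	 */
	#define PT_LEVELS 4

	static unsigned int worst_case_prealloc(unsigned int nr_mappings)
	{
		return nr_mappings * (PT_LEVELS - 1);
	}

So roughly 350 scattered 4k mappings can already account for over a
thousand pagetable pages, which is why vm_bind_prealloc_count() above
waits until in_flight_prealloc is back at or below 1024 before adding
a new job's preallocation to the total.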