From patchwork Sat Dec 7 16:15:01 2024
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Akhil P Oommen, Rob Clark, Abhinav Kumar,
 Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
 Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 01/24] HACK: drm/msm: Disable shrinker
Date: Sat, 7 Dec 2024 08:15:01 -0800
Message-ID: <20241207161651.410556-2-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>
References: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

The WIP VM_BIND patches don't yet support the shrinker, so disable it
for now.
Not-signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 2aefb8becda0..6bc6f67825ce 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -267,7 +267,7 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
 	if (ret)
 		goto err_deinit_vram;

-	ret = msm_gem_shrinker_init(ddev);
+//	ret = msm_gem_shrinker_init(ddev);
 	if (ret)
 		goto err_msm_uninit;
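For context on what this hack disables: msm's GEM shrinker is what lets
the kernel reclaim driver memory under pressure, following the common
kernel shrinker pattern. A minimal sketch of the kind of registration
msm_gem_shrinker_init() performs -- assuming the shrinker_alloc() /
shrinker_register() API, with placeholder count/scan callbacks rather
than msm's actual ones:

    #include <linux/shrinker.h>

    static unsigned long my_gem_count(struct shrinker *s, struct shrink_control *sc)
    {
            /* report how many purgeable pages could be reclaimed (hypothetical helper) */
            return my_count_purgeable_pages(s->private_data);
    }

    static unsigned long my_gem_scan(struct shrinker *s, struct shrink_control *sc)
    {
            /* purge up to sc->nr_to_scan pages, return how many were freed (hypothetical helper) */
            return my_purge_pages(s->private_data, sc->nr_to_scan);
    }

    static int my_gem_shrinker_init(struct drm_device *ddev)
    {
            struct shrinker *shrinker = shrinker_alloc(0, "drm-my_gem");

            if (!shrinker)
                    return -ENOMEM;

            shrinker->count_objects = my_gem_count;
            shrinker->scan_objects = my_gem_scan;
            shrinker->private_data = ddev;

            shrinker_register(shrinker);
            return 0;
    }

With VM_BIND-style mappings, reclaim would presumably also have to tear
down the new VM mappings safely, which is why the shrinker is parked
until the series grows that support.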
From patchwork Sat Dec 7 16:15:02 2024
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Akhil P Oommen, Rob Clark, Maarten Lankhorst,
 Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [RFC 02/24] drm/gpuvm: Don't require obj lock in destructor path
Date: Sat, 7 Dec 2024 08:15:02 -0800
Message-ID: <20241207161651.410556-3-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>
References: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

See commit a414fe3a2129 ("drm/msm/gem: Drop obj lock in
msm_gem_free_object()") for justification.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gpuvm.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index f9eb56f24bef..1e89a98caad4 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1511,7 +1511,9 @@ drm_gpuvm_bo_destroy(struct kref *kref)
 	drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
 	drm_gpuvm_bo_list_del(vm_bo, evict, lock);

-	drm_gem_gpuva_assert_lock_held(obj);
+	if (kref_read(&obj->refcount) > 0)
+		drm_gem_gpuva_assert_lock_held(obj);
+
 	list_del(&vm_bo->list.entry.gem);

 	if (ops && ops->vm_bo_free)
@@ -1871,7 +1873,8 @@ drm_gpuva_unlink(struct drm_gpuva *va)
 	if (unlikely(!obj))
 		return;

-	drm_gem_gpuva_assert_lock_held(obj);
+	if (kref_read(&obj->refcount) > 0)
+		drm_gem_gpuva_assert_lock_held(obj);

 	list_del_init(&va->gem.entry);
 	va->vm_bo = NULL;
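A sketch of the case the new kref_read() check tolerates (the driver
function and lookup helper here are hypothetical; the gpuvm calls are
real): when the final unlink is driven from the GEM object's own free
path, obj->refcount has already dropped to zero, so the usual "caller
holds the GEM gpuva lock" rule cannot apply:

    static void my_gem_free_object(struct drm_gem_object *obj)
    {
            /* hypothetical lookup of the vm_bo this object belongs to */
            struct drm_gpuvm_bo *vm_bo = my_lookup_vm_bo(obj);

            /*
             * Reached via drm_gem_object_put() with obj->refcount == 0:
             * nothing else can hold a reference or race with us, and
             * taking obj->resv from the free path is problematic (see
             * commit a414fe3a2129).  drm_gpuvm_bo_destroy() and
             * drm_gpuva_unlink() now skip the lock assert in this case.
             */
            drm_gpuvm_bo_put(vm_bo);

            drm_gem_object_release(obj);
            kfree(obj);
    }

Since the refcount is zero, no other path can observe the lists being
modified, so skipping the assert is safe rather than merely expedient.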
From patchwork Sat Dec 7 16:15:03 2024
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Akhil P Oommen, Rob Clark, Maarten Lankhorst,
 Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [RFC 03/24] drm/gpuvm: Remove bogus lock assert
Date: Sat, 7 Dec 2024 08:15:03 -0800
Message-ID: <20241207161651.410556-4-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>
References: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

If the driver uses an external mutex to synchronize VM access, it
doesn't need to hold vm->r_obj->resv. And if the driver is already
holding obj->resv, then being forced to also grab vm->r_obj->resv is
pointless and shows up to lockdep as nested locking.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gpuvm.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 1e89a98caad4..c9bf18119a86 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1505,9 +1505,6 @@ drm_gpuvm_bo_destroy(struct kref *kref)
 	struct drm_gem_object *obj = vm_bo->obj;
 	bool lock = !drm_gpuvm_resv_protected(gpuvm);

-	if (!lock)
-		drm_gpuvm_resv_assert_held(gpuvm);
-
 	drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
 	drm_gpuvm_bo_list_del(vm_bo, evict, lock);
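A sketch of the two situations the commit message describes (the mutex
and its containing struct are a hypothetical driver's own;
DRM_GPUVM_RESV_PROTECTED and the dma_resv calls are real):

    /* Case 1: VM access serialized by a driver-private lock, not the resv */
    mutex_lock(&my_vm->lock);
    drm_gpuvm_bo_put(vm_bo);        /* used to assert drm_gpuvm_resv_assert_held() */
    mutex_unlock(&my_vm->lock);

    /*
     * Case 2: the caller already holds obj->resv; also taking
     * vm->r_obj->resv would mean acquiring a second lock of the same
     * dma_resv class, which lockdep reports as nested locking.
     */
    dma_resv_lock(obj->resv, NULL);
    drm_gpuvm_bo_put(vm_bo);
    dma_resv_unlock(obj->resv);

In both cases the gpuvm internal lists are still protected; the removed
assert just hard-coded one particular way of providing that protection.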
From patchwork Sat Dec 7 16:15:04 2024
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Akhil P Oommen, Rob Clark, Sean Paul, Konrad Dybcio,
 Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
 Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 04/24] drm/msm: Rename msm_file_private -> msm_context
Date: Sat, 7 Dec 2024 08:15:04 -0800
Message-ID: <20241207161651.410556-5-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>
References: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

This is a more descriptive name.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  2 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c |  6 ++--
 drivers/gpu/drm/msm/adreno/adreno_gpu.h |  4 +--
 drivers/gpu/drm/msm/msm_drv.c           | 14 ++++-----
 drivers/gpu/drm/msm/msm_gem.c           |  2 +-
 drivers/gpu/drm/msm/msm_gem_submit.c    |  2 +-
 drivers/gpu/drm/msm/msm_gpu.c           |  4 +--
 drivers/gpu/drm/msm/msm_gpu.h           | 39 ++++++++++++-------------
 drivers/gpu/drm/msm/msm_submitqueue.c   | 27 +++++++++--------
 9 files changed, 49 insertions(+), 51 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 0ae29a7c8a4d..867c6161ef1f 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -111,7 +111,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 		struct msm_ringbuffer *ring, struct msm_gem_submit *submit)
 {
 	bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
-	struct msm_file_private *ctx = submit->queue->ctx;
+	struct msm_context *ctx = submit->queue->ctx;
 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
 	phys_addr_t ttbr;
 	u32 asid;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index a18c69a9f3fa..719abefecb6f 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -306,7 +306,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	return 0;
 }

-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -394,7 +394,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 	}
 }

-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t value, uint32_t len)
 {
 	struct drm_device *drm = gpu->dev;
@@ -440,7 +440,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 	case MSM_PARAM_SYSPROF:
 		if (!capable(CAP_SYS_ADMIN))
 			return UERR(EPERM, drm, "invalid permissions");
-		return msm_file_private_set_sysprof(ctx, gpu, value);
+		return msm_context_set_sysprof(ctx, gpu, value);
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 9bd38dda4308..caf8816e6252 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -569,9 +569,9 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu)
 }

 u64 adreno_private_address_space_size(struct msm_gpu *gpu);
-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len);
-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t value, uint32_t len);

 const struct firmware *adreno_request_fw(struct adreno_gpu *adreno_gpu,
 		const char *fwname);
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 6bc6f67825ce..e7c76d243ee7 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -333,7 +333,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
 {
 	static atomic_t ident = ATOMIC_INIT(0);
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx;
+	struct msm_context *ctx;

 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
 	if (!ctx)
@@ -363,23 +363,23 @@ static int msm_open(struct drm_device *dev, struct drm_file *file)
 	return context_init(dev, file);
 }

-static void context_close(struct msm_file_private *ctx)
+static void context_close(struct msm_context *ctx)
 {
 	msm_submitqueue_close(ctx);
-	msm_file_private_put(ctx);
+	msm_context_put(ctx);
 }

 static void msm_postclose(struct drm_device *dev, struct drm_file *file)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;

 	/*
 	 * It is not possible to set sysprof param to non-zero if gpu
 	 * is not initialized:
 	 */
 	if (priv->gpu)
-		msm_file_private_set_sysprof(ctx, priv->gpu, 0);
+		msm_context_set_sysprof(ctx, priv->gpu, 0);

 	context_close(ctx);
 }
@@ -511,7 +511,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 		uint64_t *iova)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;

 	if (!priv->gpu)
 		return -EINVAL;
@@ -531,7 +531,7 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 		uint64_t iova)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;

 	if (!priv->gpu)
 		return -EINVAL;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index ebc9ba66efb8..747e2ab8373a 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -44,7 +44,7 @@ static void update_device_mem(struct msm_drm_private *priv, ssize_t size)

 static void update_ctx_mem(struct drm_file *file, ssize_t size)
 {
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 	uint64_t ctx_mem = atomic64_add_return(size, &ctx->ctx_mem);

 	rcu_read_lock(); /* Locks file->pid! */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index be6e793f34bd..99d3f2c4bae5 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -642,7 +642,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 {
 	struct msm_drm_private *priv = dev->dev_private;
 	struct drm_msm_gem_submit *args = data;
-	struct msm_file_private *ctx = file->driver_priv;
+	struct msm_context *ctx = file->driver_priv;
 	struct msm_gem_submit *submit = NULL;
 	struct msm_gpu *gpu = priv->gpu;
 	struct msm_gpu_submitqueue *queue;
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 0d4a3744cfcb..6ff9541990dc 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -148,7 +148,7 @@ int msm_gpu_pm_suspend(struct msm_gpu *gpu)
 	return 0;
 }

-void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx,
+void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx,
 			 struct drm_printer *p)
 {
 	drm_printf(p, "drm-engine-gpu:\t%llu ns\n", ctx->elapsed_ns);
@@ -330,7 +330,7 @@ static void retire_submits(struct msm_gpu *gpu);

 static void get_comm_cmdline(struct msm_gem_submit *submit, char **comm, char **cmd)
 {
-	struct msm_file_private *ctx = submit->queue->ctx;
+	struct msm_context *ctx = submit->queue->ctx;
 	struct task_struct *task;

 	WARN_ON(!mutex_is_locked(&submit->gpu->lock));
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 7cabc8480d7c..76ad75f06706 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -22,7 +22,7 @@ struct msm_gem_submit;
 struct msm_gpu_perfcntr;
 struct msm_gpu_state;
-struct msm_file_private;
+struct msm_context;

 struct msm_gpu_config {
 	const char *ioname;
@@ -44,9 +44,9 @@ struct msm_gpu_config {
  *    + z180_gpu
  */
 struct msm_gpu_funcs {
-	int (*get_param)(struct msm_gpu *gpu, struct msm_file_private *ctx,
+	int (*get_param)(struct msm_gpu *gpu, struct msm_context *ctx,
 			 uint32_t param, uint64_t *value, uint32_t *len);
-	int (*set_param)(struct msm_gpu *gpu, struct msm_file_private *ctx,
+	int (*set_param)(struct msm_gpu *gpu, struct msm_context *ctx,
 			 uint32_t param, uint64_t value, uint32_t len);
 	int (*hw_init)(struct msm_gpu *gpu);
@@ -339,7 +339,7 @@ struct msm_gpu_perfcntr {
 #define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH)

 /**
- * struct msm_file_private - per-drm_file context
+ * struct msm_context - per-drm_file context
  *
  * @queuelock: synchronizes access to submitqueues list
  * @submitqueues: list of &msm_gpu_submitqueue created by userspace
@@ -349,7 +349,7 @@ struct msm_gpu_perfcntr {
  * @ref: reference count
  * @seqno: unique per process seqno
  */
-struct msm_file_private {
+struct msm_context {
 	rwlock_t queuelock;
 	struct list_head submitqueues;
 	int queueid;
@@ -504,7 +504,7 @@ struct msm_gpu_submitqueue {
 	u32 ring_nr;
 	int faults;
 	uint32_t last_fence;
-	struct msm_file_private *ctx;
+	struct msm_context *ctx;
 	struct list_head node;
 	struct idr fence_idr;
 	struct spinlock idr_lock;
@@ -600,33 +600,32 @@ static inline void gpu_write64(struct msm_gpu *gpu, u32 reg, u64 val)

 int msm_gpu_pm_suspend(struct msm_gpu *gpu);
 int msm_gpu_pm_resume(struct msm_gpu *gpu);

-void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx,
+void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx,
 			 struct drm_printer *p);

-int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx);
-struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
+int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx);
+struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,
 		u32 id);
 int msm_submitqueue_create(struct drm_device *drm,
-		struct msm_file_private *ctx,
+		struct msm_context *ctx,
 		u32 prio, u32 flags, u32 *id);
-int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx,
 		struct drm_msm_submitqueue_query *args);
-int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id);
-void msm_submitqueue_close(struct msm_file_private *ctx);
+int msm_submitqueue_remove(struct msm_context *ctx, u32 id);
+void msm_submitqueue_close(struct msm_context *ctx);

 void msm_submitqueue_destroy(struct kref *kref);

-int msm_file_private_set_sysprof(struct msm_file_private *ctx,
-				 struct msm_gpu *gpu, int sysprof);
-void __msm_file_private_destroy(struct kref *kref);
+int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof);
+void __msm_context_destroy(struct kref *kref);

-static inline void msm_file_private_put(struct msm_file_private *ctx)
+static inline void msm_context_put(struct msm_context *ctx)
 {
-	kref_put(&ctx->ref, __msm_file_private_destroy);
+	kref_put(&ctx->ref, __msm_context_destroy);
 }

-static inline struct msm_file_private *msm_file_private_get(
-		struct msm_file_private *ctx)
+static inline struct msm_context *msm_context_get(
+		struct msm_context *ctx)
 {
 	kref_get(&ctx->ref);
 	return ctx;
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 7fed1de63b5d..1acc0fe36353 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -7,8 +7,7 @@

 #include "msm_gpu.h"

-int msm_file_private_set_sysprof(struct msm_file_private *ctx,
-				 struct msm_gpu *gpu, int sysprof)
+int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof)
 {
 	/*
 	 * Since pm_runtime and sysprof_active are both refcounts, we
@@ -46,10 +45,10 @@ int msm_file_private_set_sysprof(struct msm_file_private *ctx,
 	return 0;
 }

-void __msm_file_private_destroy(struct kref *kref)
+void __msm_context_destroy(struct kref *kref)
 {
-	struct msm_file_private *ctx = container_of(kref,
-		struct msm_file_private, ref);
+	struct msm_context *ctx = container_of(kref,
+		struct msm_context, ref);
 	int i;

 	for (i = 0; i < ARRAY_SIZE(ctx->entities); i++) {
@@ -73,12 +72,12 @@ void msm_submitqueue_destroy(struct kref *kref)

 	idr_destroy(&queue->fence_idr);

-	msm_file_private_put(queue->ctx);
+	msm_context_put(queue->ctx);

 	kfree(queue);
 }

-struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
+struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,
 		u32 id)
 {
 	struct msm_gpu_submitqueue *entry;
@@ -101,7 +100,7 @@ struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
 	return NULL;
 }

-void msm_submitqueue_close(struct msm_file_private *ctx)
+void msm_submitqueue_close(struct msm_context *ctx)
 {
 	struct msm_gpu_submitqueue *entry, *tmp;

@@ -119,7 +118,7 @@ void msm_submitqueue_close(struct msm_file_private *ctx)
 }

 static struct drm_sched_entity *
-get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
+get_sched_entity(struct msm_context *ctx, struct msm_ringbuffer *ring,
 		 unsigned ring_nr, enum drm_sched_priority sched_prio)
 {
 	static DEFINE_MUTEX(entity_lock);
@@ -155,7 +154,7 @@ get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
 	return ctx->entities[idx];
 }

-int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx,
 		u32 prio, u32 flags, u32 *id)
 {
 	struct msm_drm_private *priv = drm->dev_private;
@@ -200,7 +199,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,

 	write_lock(&ctx->queuelock);

-	queue->ctx = msm_file_private_get(ctx);
+	queue->ctx = msm_context_get(ctx);
 	queue->id = ctx->queueid++;

 	if (id)
@@ -221,7 +220,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
 * Create the default submit-queue (id==0), used for backwards compatibility
 * for userspace that pre-dates the introduction of submitqueues.
 */
-int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx)
+int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx)
 {
 	struct msm_drm_private *priv = drm->dev_private;
 	int default_prio, max_priority;
@@ -261,7 +260,7 @@ static int msm_submitqueue_query_faults(struct msm_gpu_submitqueue *queue,
 	return ret ? -EFAULT : 0;
 }

-int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx,
 		struct drm_msm_submitqueue_query *args)
 {
 	struct msm_gpu_submitqueue *queue;
@@ -282,7 +281,7 @@ int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
 	return ret;
 }

-int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id)
+int msm_submitqueue_remove(struct msm_context *ctx, u32 id)
 {
 	struct msm_gpu_submitqueue *entry;
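Rename churn of this shape is usually generated mechanically rather
than by hand; one plausible way to reproduce it (a sketch, not
necessarily how this patch was produced -- the derived identifiers like
msm_file_private_put all contain the old name as a prefix, so a single
substitution covers them):

    $ git grep -l msm_file_private drivers/gpu/drm/msm/ | \
          xargs sed -i 's/msm_file_private/msm_context/g'

The few lines that were also re-wrapped (e.g. msm_context_set_sysprof()
now fitting on one line) would still be touched up by hand.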
From patchwork Sat Dec 7 16:15:05 2024
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Akhil P Oommen, Rob Clark, Sean Paul, Konrad Dybcio,
 Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
 Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 05/24] drm/msm: Improve msm_context comments
Date: Sat, 7 Dec 2024 08:15:05 -0800
Message-ID: <20241207161651.410556-6-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>
References: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

Just some tidying up.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gpu.h | 44 +++++++++++++++++++++++------------
 1 file changed, 29 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 76ad75f06706..01a3b2770d71 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -340,25 +340,39 @@ struct msm_gpu_perfcntr {

 /**
  * struct msm_context - per-drm_file context
- *
- * @queuelock: synchronizes access to submitqueues list
- * @submitqueues: list of &msm_gpu_submitqueue created by userspace
- * @queueid: counter incremented each time a submitqueue is created,
- *           used to assign &msm_gpu_submitqueue.id
- * @aspace: the per-process GPU address-space
- * @ref: reference count
- * @seqno: unique per process seqno
  */
 struct msm_context {
+	/** @queuelock: synchronizes access to submitqueues list */
 	rwlock_t queuelock;
+
+	/** @submitqueues: list of &msm_gpu_submitqueue created by userspace */
 	struct list_head submitqueues;
+
+	/**
+	 * @queueid:
+	 *
+	 * Counter incremented each time a submitqueue is created, used to
+	 * assign &msm_gpu_submitqueue.id
+	 */
 	int queueid;
+
+	/** @aspace: the per-process GPU address-space */
 	struct msm_gem_address_space *aspace;
+
+	/** @kref: the reference count */
 	struct kref ref;
+
+	/**
+	 * @seqno:
+	 *
+	 * A unique per-process sequence number.  Used to detect context
+	 * switches, without relying on keeping a, potentially dangling,
+	 * pointer to the previous context.
+	 */
 	int seqno;

 	/**
-	 * sysprof:
+	 * @sysprof:
 	 *
 	 * The value of MSM_PARAM_SYSPROF set by userspace.  This is
 	 * intended to be used by system profiling tools like Mesa's
@@ -376,21 +390,21 @@ struct msm_context {
 	int sysprof;

 	/**
-	 * comm: Overridden task comm, see MSM_PARAM_COMM
+	 * @comm: Overridden task comm, see MSM_PARAM_COMM
 	 *
 	 * Accessed under msm_gpu::lock
 	 */
 	char *comm;

 	/**
-	 * cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE
+	 * @cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE
 	 *
 	 * Accessed under msm_gpu::lock
 	 */
 	char *cmdline;

 	/**
-	 * elapsed:
+	 * @elapsed:
 	 *
 	 * The total (cumulative) elapsed time GPU was busy with rendering
 	 * from this context in ns.
@@ -398,7 +412,7 @@ struct msm_context {
 	uint64_t elapsed_ns;

 	/**
-	 * cycles:
+	 * @cycles:
 	 *
 	 * The total (cumulative) GPU cycles elapsed attributed to this
 	 * context.
@@ -406,7 +420,7 @@ struct msm_context {
 	uint64_t cycles;

 	/**
-	 * entities:
+	 * @entities:
 	 *
 	 * Table of per-priority-level sched entities used by submitqueues
 	 * associated with this &drm_file.  Because some userspace apps
@@ -419,7 +433,7 @@ struct msm_context {
 	struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS];

 	/**
-	 * ctx_mem:
+	 * @ctx_mem:
 	 *
 	 * Total amount of memory of GEM buffers with handles attached for
 	 * this context.
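The new @seqno comment describes a pattern like the following (field
names assumed for illustration, loosely modeled on the a6xx
pagetable-switch path): compare plain integers rather than caching a
pointer to the previous context, which could dangle once that context
is freed:

    static void my_set_pagetable(struct my_gpu *gpu, struct msm_context *ctx)
    {
            /*
             * If the previous submit came from the same context, the
             * pagetables are already programmed.  An integer compare
             * never dereferences the (possibly freed) previous context.
             */
            if (ctx->seqno == gpu->cur_ctx_seqno)
                    return;

            /* ... switch pagetables ... */

            gpu->cur_ctx_seqno = ctx->seqno;
    }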
From patchwork Sat Dec 7 16:15:06 2024
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Akhil P Oommen, Rob Clark, Sean Paul, Konrad Dybcio,
 Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
 Simona Vetter, Paloma Arellano, Jani Nikula, Barnabás Czémán,
 Stephen Boyd, Carl Vanderlip, Jonathan Marek, Jun Nie,
 linux-kernel@vger.kernel.org (open list)
Subject: [RFC 06/24] drm/msm: Rename msm_gem_address_space -> msm_gem_vm
Date: Sat, 7 Dec 2024 08:15:06 -0800
Message-ID: <20241207161651.410556-7-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>
References: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

Re-aligning naming to better match drm_gpuvm terminology will make
things less confusing at the end of the drm_gpuvm conversion.

This is just rename churn, no functional change.
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 18 ++-- drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 4 +- drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 4 +- drivers/gpu/drm/msm/adreno/a5xx_debugfs.c | 4 +- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 22 ++--- drivers/gpu/drm/msm/adreno/a5xx_power.c | 2 +- drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 10 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 26 +++--- drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 2 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 45 +++++---- drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 6 +- drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 10 +- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 43 +++++---- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 18 ++-- .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 14 +-- drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 18 ++-- drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h | 2 +- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 18 ++-- drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 14 +-- drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h | 4 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c | 6 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 24 ++--- drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c | 12 +-- drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c | 4 +- drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 18 ++-- drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c | 12 +-- drivers/gpu/drm/msm/dsi/dsi_host.c | 14 +-- drivers/gpu/drm/msm/msm_drv.c | 8 +- drivers/gpu/drm/msm/msm_drv.h | 10 +- drivers/gpu/drm/msm/msm_fb.c | 10 +- drivers/gpu/drm/msm/msm_fbdev.c | 2 +- drivers/gpu/drm/msm/msm_gem.c | 74 +++++++-------- drivers/gpu/drm/msm/msm_gem.h | 34 +++---- drivers/gpu/drm/msm/msm_gem_submit.c | 6 +- drivers/gpu/drm/msm/msm_gem_vma.c | 93 +++++++++---------- drivers/gpu/drm/msm/msm_gpu.c | 46 ++++----- drivers/gpu/drm/msm/msm_gpu.h | 16 ++-- drivers/gpu/drm/msm/msm_kms.c | 12 +-- drivers/gpu/drm/msm/msm_kms.h | 2 +- drivers/gpu/drm/msm/msm_ringbuffer.c | 4 +- drivers/gpu/drm/msm/msm_submitqueue.c | 2 +- 41 files changed, 344 insertions(+), 349 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index 379a3d346c30..5eb063ed0b46 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu) uint32_t *ptr, len; int i, ret; - a2xx_gpummu_params(gpu->aspace->mmu, &pt_base, &tran_error); + a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error); DBG("%s", gpu->name); @@ -466,19 +466,19 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu) return state; } -static struct msm_gem_address_space * -a2xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev) +static struct msm_gem_vm * +a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu); - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; - aspace = msm_gem_address_space_create(mmu, "gpu", SZ_16M, + vm = msm_gem_vm_create(mmu, "gpu", SZ_16M, 0xfff * SZ_64K); - if (IS_ERR(aspace) && !IS_ERR(mmu)) + if (IS_ERR(vm) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); - return aspace; + return vm; } static u32 a2xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring) @@ -504,7 +504,7 @@ static const struct adreno_gpu_funcs funcs = { #endif .gpu_state_get = a2xx_gpu_state_get, .gpu_state_put = adreno_gpu_state_put, - .create_address_space = a2xx_create_address_space, + .create_vm = a2xx_create_vm, .get_rptr = a2xx_get_rptr, }, }; @@ -551,7 +551,7 @@ struct 
msm_gpu *a2xx_gpu_init(struct drm_device *dev) else adreno_gpu->registers = a220_registers; - if (!gpu->aspace) { + if (!gpu->vm) { dev_err(dev->dev, "No memory protection without MMU\n"); if (!allow_vram_carveout) { ret = -ENXIO; diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c index b6df115bb567..434e6ededf83 100644 --- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c @@ -526,7 +526,7 @@ static const struct adreno_gpu_funcs funcs = { .gpu_busy = a3xx_gpu_busy, .gpu_state_get = a3xx_gpu_state_get, .gpu_state_put = adreno_gpu_state_put, - .create_address_space = adreno_create_address_space, + .create_vm = adreno_create_vm, .get_rptr = a3xx_get_rptr, }, }; @@ -581,7 +581,7 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev) goto fail; } - if (!gpu->aspace) { + if (!gpu->vm) { /* TODO we think it is possible to configure the GPU to * restrict access to VRAM carveout. But the required * registers are unknown. For now just bail out and diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c index f1b18a6663f7..2c75debcfd84 100644 --- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c @@ -645,7 +645,7 @@ static const struct adreno_gpu_funcs funcs = { .gpu_busy = a4xx_gpu_busy, .gpu_state_get = a4xx_gpu_state_get, .gpu_state_put = adreno_gpu_state_put, - .create_address_space = adreno_create_address_space, + .create_vm = adreno_create_vm, .get_rptr = a4xx_get_rptr, }, .get_timestamp = a4xx_get_timestamp, @@ -695,7 +695,7 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull; - if (!gpu->aspace) { + if (!gpu->vm) { /* TODO we think it is possible to configure the GPU to * restrict access to VRAM carveout. But the required * registers are unknown. 
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
index 169b8fe688f8..625a4e787d8f 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
@@ -116,13 +116,13 @@ reset_set(void *data, u64 val)
 	adreno_gpu->fw[ADRENO_FW_PFP] = NULL;
 
 	if (a5xx_gpu->pm4_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pm4_bo);
 		a5xx_gpu->pm4_bo = NULL;
 	}
 
 	if (a5xx_gpu->pfp_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pfp_bo);
 		a5xx_gpu->pfp_bo = NULL;
 	}
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index caf2c0a7a29f..4814c470e3a1 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -620,7 +620,7 @@ static int a5xx_ucode_load(struct msm_gpu *gpu)
 		a5xx_gpu->shadow = msm_gem_kernel_new(gpu->dev,
 			sizeof(u32) * gpu->nr_rings,
 			MSM_BO_WC | MSM_BO_MAP_PRIV,
-			gpu->aspace, &a5xx_gpu->shadow_bo,
+			gpu->vm, &a5xx_gpu->shadow_bo,
 			&a5xx_gpu->shadow_iova);
 
 		if (IS_ERR(a5xx_gpu->shadow))
@@ -1040,22 +1040,22 @@ static void a5xx_destroy(struct msm_gpu *gpu)
 
 	a5xx_preempt_fini(gpu);
 
 	if (a5xx_gpu->pm4_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pm4_bo);
 	}
 
 	if (a5xx_gpu->pfp_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->pfp_bo);
 	}
 
 	if (a5xx_gpu->gpmu_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->gpmu_bo);
 	}
 
 	if (a5xx_gpu->shadow_bo) {
-		msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->aspace);
+		msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->vm);
 		drm_gem_object_put(a5xx_gpu->shadow_bo);
 	}
 
@@ -1455,7 +1455,7 @@ static int a5xx_crashdumper_init(struct msm_gpu *gpu,
 		struct a5xx_crashdumper *dumper)
 {
 	dumper->ptr = msm_gem_kernel_new(gpu->dev,
-		SZ_1M, MSM_BO_WC, gpu->aspace,
+		SZ_1M, MSM_BO_WC, gpu->vm,
 		&dumper->bo, &dumper->iova);
 
 	if (!IS_ERR(dumper->ptr))
@@ -1555,7 +1555,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu,
 
 	if (a5xx_crashdumper_run(gpu, &dumper)) {
 		kfree(a5xx_state->hlsqregs);
-		msm_gem_kernel_put(dumper.bo, gpu->aspace);
+		msm_gem_kernel_put(dumper.bo, gpu->vm);
 		return;
 	}
 
@@ -1563,7 +1563,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu,
 	memcpy(a5xx_state->hlsqregs, dumper.ptr + (256 * SZ_1K),
 		count * sizeof(u32));
 
-	msm_gem_kernel_put(dumper.bo, gpu->aspace);
+	msm_gem_kernel_put(dumper.bo, gpu->vm);
 }
 
 static struct msm_gpu_state *a5xx_gpu_state_get(struct msm_gpu *gpu)
@@ -1711,7 +1711,7 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_busy = a5xx_gpu_busy,
 		.gpu_state_get = a5xx_gpu_state_get,
 		.gpu_state_put = a5xx_gpu_state_put,
-		.create_address_space = adreno_create_address_space,
+		.create_vm = adreno_create_vm,
 		.get_rptr = a5xx_get_rptr,
 	},
 	.get_timestamp = a5xx_get_timestamp,
@@ -1789,8 +1789,8 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
 		return ERR_PTR(ret);
 	}
 
-	if (gpu->aspace)
-		msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu, a5xx_fault_handler);
+	if (gpu->vm)
+		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
 
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
 	a5xx_preempt_init(gpu);
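The GPU-internal buffers touched throughout a5xx (shadow, crashdumper,
GPMU, preempt records) all follow one alloc/release pairing; here it is
consolidated as a sketch (buffer names hypothetical), using only the
helpers that appear in this series:

    /* Sketch, not part of the diff: allocate a kernel BO, pinned and
     * mapped into the GPU vm, then release it again. */
    struct drm_gem_object *bo;
    uint64_t iova;
    void *ptr;

    ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE,
            MSM_BO_WC | MSM_BO_MAP_PRIV,
            gpu->vm, &bo, &iova);
    if (IS_ERR(ptr))
            return PTR_ERR(ptr);

    /* ... use ptr for CPU access and iova for GPU access ... */

    msm_gem_kernel_put(bo, gpu->vm);  /* unmaps vaddr, unpins iova, drops BO */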
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c b/drivers/gpu/drm/msm/adreno/a5xx_power.c
index 6b91e0bd1514..d6da7351cfbb 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_power.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c
@@ -363,7 +363,7 @@ void a5xx_gpmu_ucode_init(struct msm_gpu *gpu)
 	bosize = (cmds_size + (cmds_size / TYPE4_MAX_PAYLOAD) + 1) << 2;
 
 	ptr = msm_gem_kernel_new(drm, bosize,
-		MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace,
+		MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm,
 		&a5xx_gpu->gpmu_bo, &a5xx_gpu->gpmu_iova);
 	if (IS_ERR(ptr))
 		return;
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
index 0469fea55010..5f9e2eb80a2c 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
@@ -254,7 +254,7 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
 
 	ptr = msm_gem_kernel_new(gpu->dev,
 		A5XX_PREEMPT_RECORD_SIZE + A5XX_PREEMPT_COUNTER_SIZE,
-		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
+		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova);
 
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
@@ -262,9 +262,9 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
 
 	/* The buffer to store counters needs to be unprivileged */
 	counters = msm_gem_kernel_new(gpu->dev,
 		A5XX_PREEMPT_COUNTER_SIZE,
-		MSM_BO_WC, gpu->aspace, &counters_bo, &counters_iova);
+		MSM_BO_WC, gpu->vm, &counters_bo, &counters_iova);
 	if (IS_ERR(counters)) {
-		msm_gem_kernel_put(bo, gpu->aspace);
+		msm_gem_kernel_put(bo, gpu->vm);
 		return PTR_ERR(counters);
 	}
@@ -295,8 +295,8 @@ void a5xx_preempt_fini(struct msm_gpu *gpu)
 	int i;
 
 	for (i = 0; i < gpu->nr_rings; i++) {
-		msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->aspace);
-		msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->aspace);
+		msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->vm);
+		msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->vm);
 	}
 }
 
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 14db7376c712..31cceb9eb51a 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1218,15 +1218,15 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
 
 static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu)
 {
-	msm_gem_kernel_put(gmu->hfi.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->debug.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->icache.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->dcache.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->dummy.obj, gmu->aspace);
-	msm_gem_kernel_put(gmu->log.obj, gmu->aspace);
-
-	gmu->aspace->mmu->funcs->detach(gmu->aspace->mmu);
-	msm_gem_address_space_put(gmu->aspace);
+	msm_gem_kernel_put(gmu->hfi.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->debug.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->icache.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->dcache.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->dummy.obj, gmu->vm);
+	msm_gem_kernel_put(gmu->log.obj, gmu->vm);
+
+	gmu->vm->mmu->funcs->detach(gmu->vm->mmu);
+	msm_gem_vm_put(gmu->vm);
 }
 
 static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
@@ -1255,7 +1255,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
 	if (IS_ERR(bo->obj))
 		return PTR_ERR(bo->obj);
 
-	ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->aspace, &bo->iova,
+	ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->vm, &bo->iova,
 		range_start, range_end);
 	if (ret) {
 		drm_gem_object_put(bo->obj);
@@ -1280,9 +1280,9 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
 	if (IS_ERR(mmu))
 		return PTR_ERR(mmu);
 
-	gmu->aspace = msm_gem_address_space_create(mmu, "gmu", 0x0, 0x80000000);
-	if (IS_ERR(gmu->aspace))
-		return PTR_ERR(gmu->aspace);
+	gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000);
+	if (IS_ERR(gmu->vm))
+		return PTR_ERR(gmu->vm);
 
 	return 0;
 }
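Worth noting while reviewing the GMU hunks: the GMU keeps a vm of its own,
and the free path above encodes an ordering requirement. Sketched in
isolation (one buffer shown for brevity, not part of the diff):

    /* Sketch: BOs must be released against the vm before the mmu is
     * detached, and the vm reference is dropped last, since dropping
     * the final ref destroys the underlying mmu. */
    msm_gem_kernel_put(gmu->hfi.obj, gmu->vm);   /* 1: release BOs    */
    gmu->vm->mmu->funcs->detach(gmu->vm->mmu);   /* 2: detach the mmu */
    msm_gem_vm_put(gmu->vm);                     /* 3: drop vm ref    */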
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
index b4a79f88ccf4..5ffabc16e35a 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
@@ -50,7 +50,7 @@ struct a6xx_gmu {
 	/* For serializing communication with the GMU: */
 	struct mutex lock;
 
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
 	void __iomem *mmio;
 	void __iomem *rscc;
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 867c6161ef1f..6b961267614f 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 	if (ctx->seqno == ring->cur_ctx_seqno)
 		return;
 
-	if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
+	if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid))
 		return;
 
 	if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
@@ -945,7 +945,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
 		msm_gem_object_set_name(a6xx_gpu->sqe_bo, "sqefw");
 
 		if (!a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo)) {
-			msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace);
+			msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm);
 			drm_gem_object_put(a6xx_gpu->sqe_bo);
 
 			a6xx_gpu->sqe_bo = NULL;
@@ -962,7 +962,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
 		a6xx_gpu->shadow = msm_gem_kernel_new(gpu->dev,
 			sizeof(u32) * gpu->nr_rings,
 			MSM_BO_WC | MSM_BO_MAP_PRIV,
-			gpu->aspace, &a6xx_gpu->shadow_bo,
+			gpu->vm, &a6xx_gpu->shadow_bo,
 			&a6xx_gpu->shadow_iova);
 
 		if (IS_ERR(a6xx_gpu->shadow))
@@ -973,7 +973,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
 
 	a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE,
 			MSM_BO_WC  | MSM_BO_MAP_PRIV,
-			gpu->aspace, &a6xx_gpu->pwrup_reglist_bo,
+			gpu->vm, &a6xx_gpu->pwrup_reglist_bo,
 			&a6xx_gpu->pwrup_reglist_iova);
 
 	if (IS_ERR(a6xx_gpu->pwrup_reglist_ptr))
@@ -2186,12 +2186,12 @@ static void a6xx_destroy(struct msm_gpu *gpu)
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
 
 	if (a6xx_gpu->sqe_bo) {
-		msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace);
+		msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm);
 		drm_gem_object_put(a6xx_gpu->sqe_bo);
 	}
 
 	if (a6xx_gpu->shadow_bo) {
-		msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->aspace);
+		msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->vm);
 		drm_gem_object_put(a6xx_gpu->shadow_bo);
 	}
 
@@ -2231,8 +2231,8 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
 	mutex_unlock(&a6xx_gpu->gmu.lock);
 }
 
-static struct msm_gem_address_space *
-a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
+static struct msm_gem_vm *
+a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
@@ -2246,22 +2246,22 @@ a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
 	    !device_iommu_capable(&pdev->dev, IOMMU_CAP_CACHE_COHERENCY))
 		quirks |= IO_PGTABLE_QUIRK_ARM_OUTER_WBWA;
 
-	return adreno_iommu_create_address_space(gpu, pdev, quirks);
+	return adreno_iommu_create_vm(gpu, pdev, quirks);
 }
 
-static struct msm_gem_address_space *
-a6xx_create_private_address_space(struct msm_gpu *gpu)
+static struct msm_gem_vm *
+a6xx_create_private_vm(struct msm_gpu *gpu)
 {
 	struct msm_mmu *mmu;
 
-	mmu = msm_iommu_pagetable_create(gpu->aspace->mmu);
+	mmu = msm_iommu_pagetable_create(gpu->vm->mmu);
 
 	if (IS_ERR(mmu))
 		return ERR_CAST(mmu);
 
-	return msm_gem_address_space_create(mmu,
+	return msm_gem_vm_create(mmu,
 		"gpu", 0x100000000ULL,
-		adreno_private_address_space_size(gpu));
+		adreno_private_vm_size(gpu));
 }
 
 static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
@@ -2378,8 +2378,8 @@ static const struct adreno_gpu_funcs funcs = {
 		.gpu_state_get = a6xx_gpu_state_get,
 		.gpu_state_put = a6xx_gpu_state_put,
 #endif
-		.create_address_space = a6xx_create_address_space,
-		.create_private_address_space = a6xx_create_private_address_space,
+		.create_vm = a6xx_create_vm,
+		.create_private_vm = a6xx_create_private_vm,
 		.get_rptr = a6xx_get_rptr,
 		.progress = a6xx_progress,
 	},
@@ -2407,8 +2407,8 @@ static const struct adreno_gpu_funcs funcs_gmuwrapper = {
 		.gpu_state_get = a6xx_gpu_state_get,
 		.gpu_state_put = a6xx_gpu_state_put,
 #endif
-		.create_address_space = a6xx_create_address_space,
-		.create_private_address_space = a6xx_create_private_address_space,
+		.create_vm = a6xx_create_vm,
+		.create_private_vm = a6xx_create_private_vm,
 		.get_rptr = a6xx_get_rptr,
 		.progress = a6xx_progress,
 	},
@@ -2438,8 +2438,8 @@ static const struct adreno_gpu_funcs funcs_a7xx = {
 		.gpu_state_get = a6xx_gpu_state_get,
 		.gpu_state_put = a6xx_gpu_state_put,
 #endif
-		.create_address_space = a6xx_create_address_space,
-		.create_private_address_space = a6xx_create_private_address_space,
+		.create_vm = a6xx_create_vm,
+		.create_private_vm = a6xx_create_private_vm,
 		.get_rptr = a6xx_get_rptr,
 		.progress = a6xx_progress,
 	},
@@ -2535,9 +2535,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
 
 	adreno_gpu->uche_trap_base = 0x1fffffffff000ull;
 
-	if (gpu->aspace)
-		msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu,
-				a6xx_fault_handler);
+	if (gpu->vm)
+		msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
 
 	a6xx_calc_ubwc_config(adreno_gpu);
 	/* Set up the preemption specific bits and pieces for each ringbuffer */
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
index 0fcae53c0b14..a73613551493 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
@@ -132,7 +132,7 @@ static int a6xx_crashdumper_init(struct msm_gpu *gpu,
 		struct a6xx_crashdumper *dumper)
 {
 	dumper->ptr = msm_gem_kernel_new(gpu->dev,
-		SZ_1M, MSM_BO_WC, gpu->aspace,
+		SZ_1M, MSM_BO_WC, gpu->vm,
 		&dumper->bo, &dumper->iova);
 
 	if (!IS_ERR(dumper->ptr))
@@ -1610,7 +1610,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
 		a7xx_get_clusters(gpu, a6xx_state, dumper);
 		a7xx_get_dbgahb_clusters(gpu, a6xx_state, dumper);
 
-		msm_gem_kernel_put(dumper->bo, gpu->aspace);
+		msm_gem_kernel_put(dumper->bo, gpu->vm);
 	}
 
 	a7xx_get_post_crashdumper_registers(gpu, a6xx_state);
@@ -1622,7 +1622,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
 		a6xx_get_clusters(gpu, a6xx_state, dumper);
 		a6xx_get_dbgahb_clusters(gpu, a6xx_state, dumper);
 
-		msm_gem_kernel_put(dumper->bo, gpu->aspace);
+		msm_gem_kernel_put(dumper->bo, gpu->vm);
 	}
 }
 
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
index 2fd4e39f618f..41229c60aa06 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
@@ -343,7 +343,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
 
 	ptr = msm_gem_kernel_new(gpu->dev,
 		PREEMPT_RECORD_SIZE(adreno_gpu),
-		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
+		MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova);
 
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
@@ -361,7 +361,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
 	ptr = msm_gem_kernel_new(gpu->dev,
 		PREEMPT_SMMU_INFO_SIZE,
 		MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY,
-		gpu->aspace, &bo, &iova);
+		gpu->vm, &bo, &iova);
 
 	if (IS_ERR(ptr))
 		return PTR_ERR(ptr);
@@ -376,7 +376,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
 
 		struct a7xx_cp_smmu_info *smmu_info_ptr = ptr;
 
-		msm_iommu_pagetable_params(gpu->aspace->mmu, &ttbr, &asid);
+		msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid);
 
 		smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC;
 		smmu_info_ptr->ttbr0 = ttbr;
@@ -404,7 +404,7 @@ void a6xx_preempt_fini(struct msm_gpu *gpu)
 	int i;
 
 	for (i = 0; i < gpu->nr_rings; i++)
-		msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->aspace);
+		msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->vm);
 }
 
 void a6xx_preempt_init(struct msm_gpu *gpu)
@@ -430,7 +430,7 @@ void a6xx_preempt_init(struct msm_gpu *gpu)
 	a6xx_gpu->preempt_postamble_ptr  = msm_gem_kernel_new(gpu->dev,
 			PAGE_SIZE,
 			MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY,
-			gpu->aspace, &a6xx_gpu->preempt_postamble_bo,
+			gpu->vm, &a6xx_gpu->preempt_postamble_bo,
 			&a6xx_gpu->preempt_postamble_iova);
 
 	preempt_prepare_postamble(a6xx_gpu);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 719abefecb6f..14ac1900f031 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid)
 	return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid);
 }
 
-struct msm_gem_address_space *
-adreno_create_address_space(struct msm_gpu *gpu,
-			    struct platform_device *pdev)
+struct msm_gem_vm *
+adreno_create_vm(struct msm_gpu *gpu,
+		 struct platform_device *pdev)
 {
-	return adreno_iommu_create_address_space(gpu, pdev, 0);
+	return adreno_iommu_create_vm(gpu, pdev, 0);
 }
 
-struct msm_gem_address_space *
-adreno_iommu_create_address_space(struct msm_gpu *gpu,
-				  struct platform_device *pdev,
-				  unsigned long quirks)
+struct msm_gem_vm *
+adreno_iommu_create_vm(struct msm_gpu *gpu,
+		       struct platform_device *pdev,
+		       unsigned long quirks)
 {
 	struct iommu_domain_geometry *geometry;
 	struct msm_mmu *mmu;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	u64 start, size;
 
 	mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks);
@@ -224,16 +224,15 @@ adreno_iommu_create_address_space(struct msm_gpu *gpu,
 	start = max_t(u64, SZ_16M, geometry->aperture_start);
 	size = geometry->aperture_end - start + 1;
 
-	aspace = msm_gem_address_space_create(mmu, "gpu",
-		start & GENMASK_ULL(48, 0), size);
+	vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size);
 
-	if (IS_ERR(aspace) && !IS_ERR(mmu))
+	if (IS_ERR(vm) && !IS_ERR(mmu))
 		mmu->funcs->destroy(mmu);
 
-	return aspace;
+	return vm;
 }
 
-u64 adreno_private_address_space_size(struct msm_gpu *gpu)
+u64 adreno_private_vm_size(struct msm_gpu *gpu)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 
@@ -262,7 +261,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	 * it now.
 	 */
 	if (!do_devcoredump) {
-		gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu);
+		gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu);
 	}
 
 	/*
@@ -356,8 +355,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		*value = 0;
 		return 0;
 	case MSM_PARAM_FAULTS:
-		if (ctx->aspace)
-			*value = gpu->global_faults + ctx->aspace->faults;
+		if (ctx->vm)
+			*value = gpu->global_faults + ctx->vm->faults;
 		else
 			*value = gpu->global_faults;
 		return 0;
@@ -365,14 +364,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		*value = gpu->suspend_count;
 		return 0;
 	case MSM_PARAM_VA_START:
-		if (ctx->aspace == gpu->aspace)
+		if (ctx->vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->aspace->va_start;
+		*value = ctx->vm->va_start;
 		return 0;
 	case MSM_PARAM_VA_SIZE:
-		if (ctx->aspace == gpu->aspace)
+		if (ctx->vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->aspace->va_size;
+		*value = ctx->vm->va_size;
 		return 0;
 	case MSM_PARAM_HIGHEST_BANK_BIT:
 		*value = adreno_gpu->ubwc_config.highest_bank_bit;
@@ -562,7 +561,7 @@ struct drm_gem_object *adreno_fw_create_bo(struct msm_gpu *gpu,
 	void *ptr;
 
 	ptr = msm_gem_kernel_new(gpu->dev, fw->size - 4,
-		MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace, &bo, iova);
+		MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm, &bo, iova);
 
 	if (IS_ERR(ptr))
 		return ERR_CAST(ptr);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index caf8816e6252..728e4b0def3d 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -568,7 +568,7 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu)
 	       adreno_is_a740_family(gpu);
 }
 
-u64 adreno_private_address_space_size(struct msm_gpu *gpu);
+u64 adreno_private_vm_size(struct msm_gpu *gpu);
 int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len);
 int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
@@ -611,14 +611,14 @@ void adreno_show_object(struct drm_printer *p, void **ptr, int len,
 * Common helper function to initialize the default address space for arm-smmu
 * attached targets
 */
-struct msm_gem_address_space *
-adreno_create_address_space(struct msm_gpu *gpu,
-			    struct platform_device *pdev);
-
-struct msm_gem_address_space *
-adreno_iommu_create_address_space(struct msm_gpu *gpu,
-				  struct platform_device *pdev,
-				  unsigned long quirks);
+struct msm_gem_vm *
+adreno_create_vm(struct msm_gpu *gpu,
+		 struct platform_device *pdev);
+
+struct msm_gem_vm *
+adreno_iommu_create_vm(struct msm_gpu *gpu,
+		       struct platform_device *pdev,
+		       unsigned long quirks);
 
 int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 			 struct adreno_smmu_fault_info *info, const char *block,
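From here the series crosses from the GPU side into the display side. The
dpu/mdp hunks that follow are all instances of one flow; condensed as a
sketch (error handling trimmed, fb/kms/layout assumed to come from the
caller), using the kms-side helpers as renamed by this patch:

    /* Sketch, not part of the diff: pin an fb's BOs into the kms vm,
     * resolve per-plane addresses, and unpin again on cleanup. */
    ret = msm_framebuffer_prepare(fb, kms->vm, false);
    if (ret)
            return ret;

    dpu_format_populate_addrs(kms->vm, fb, &layout);

    /* ... scanout using layout.plane_addr[] ... */

    msm_framebuffer_cleanup(fb, kms->vm, false);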
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
index 4c006ec74575..2c53c937485a 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
@@ -558,7 +558,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
 		struct drm_writeback_job *job)
 {
 	const struct msm_format *format;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	struct dpu_hw_wb_cfg *wb_cfg;
 	int ret;
 	struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc);
@@ -568,13 +568,13 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
 	wb_enc->wb_job = job;
 	wb_enc->wb_conn = job->connector;
 
-	aspace = phys_enc->dpu_kms->base.aspace;
+	vm = phys_enc->dpu_kms->base.vm;
 
 	wb_cfg = &wb_enc->wb_cfg;
 
 	memset(wb_cfg, 0, sizeof(struct dpu_hw_wb_cfg));
 
-	ret = msm_framebuffer_prepare(job->fb, aspace, false);
+	ret = msm_framebuffer_prepare(job->fb, vm, false);
 	if (ret) {
 		DPU_ERROR("prep fb failed, %d\n", ret);
 		return;
@@ -588,7 +588,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
 		return;
 	}
 
-	dpu_format_populate_addrs(aspace, job->fb, &wb_cfg->dest);
+	dpu_format_populate_addrs(vm, job->fb, &wb_cfg->dest);
 
 	wb_cfg->dest.width = job->fb->width;
 	wb_cfg->dest.height = job->fb->height;
@@ -611,14 +611,14 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc
 		struct drm_writeback_job *job)
 {
 	struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc);
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
 	if (!job->fb)
 		return;
 
-	aspace = phys_enc->dpu_kms->base.aspace;
+	vm = phys_enc->dpu_kms->base.vm;
 
-	msm_framebuffer_cleanup(job->fb, aspace, false);
+	msm_framebuffer_cleanup(job->fb, vm, false);
 	wb_enc->wb_job = NULL;
 	wb_enc->wb_conn = NULL;
 }
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
index 59c9427da7dd..d115b79af771 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
@@ -274,7 +274,7 @@ int dpu_format_populate_plane_sizes(
 	return _dpu_format_populate_plane_sizes_linear(fmt, fb, layout);
 }
 
-static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace,
+static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm,
 					    struct drm_framebuffer *fb,
 					    struct dpu_hw_fmt_layout *layout)
 {
@@ -282,7 +282,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace
 	uint32_t base_addr = 0;
 	bool meta;
 
-	base_addr = msm_framebuffer_iova(fb, aspace, 0);
+	base_addr = msm_framebuffer_iova(fb, vm, 0);
 	fmt = msm_framebuffer_format(fb);
 	meta = MSM_FORMAT_IS_UBWC(fmt);
 
@@ -355,7 +355,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace
 	}
 }
 
-static void _dpu_format_populate_addrs_linear(struct msm_gem_address_space *aspace,
+static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm,
 					      struct drm_framebuffer *fb,
 					      struct dpu_hw_fmt_layout *layout)
 {
@@ -363,17 +363,17 @@ static void _dpu_format_populate_addrs_linear(struct msm_gem_address_space *aspace
 
 	/* Populate addresses for simple formats here */
 	for (i = 0; i < layout->num_planes; ++i)
-		layout->plane_addr[i] = msm_framebuffer_iova(fb, aspace, i);
-}
+		layout->plane_addr[i] = msm_framebuffer_iova(fb, vm, i);
+ }
 
 /**
 * dpu_format_populate_addrs - populate buffer addresses based on
 * mmu, fb, and format found in the fb
- * @aspace: address space pointer
+ * @vm: address space pointer
 * @fb: framebuffer pointer
 * @layout: format layout structure to populate
 */
-void dpu_format_populate_addrs(struct msm_gem_address_space *aspace,
+void dpu_format_populate_addrs(struct msm_gem_vm *vm,
 			       struct drm_framebuffer *fb,
 			       struct dpu_hw_fmt_layout *layout)
 {
@@ -384,7 +384,7 @@ void dpu_format_populate_addrs(struct msm_gem_address_space *aspace,
 
 	/* Populate the addresses given the fb */
 	if (MSM_FORMAT_IS_UBWC(fmt) || MSM_FORMAT_IS_TILE(fmt))
-		_dpu_format_populate_addrs_ubwc(aspace, fb, layout);
+		_dpu_format_populate_addrs_ubwc(vm, fb, layout);
 	else
-		_dpu_format_populate_addrs_linear(aspace, fb, layout);
+		_dpu_format_populate_addrs_linear(vm, fb, layout);
 }
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
index c6145d43aa3f..989f3e13c497 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
@@ -31,7 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 *supported_formats,
 	return false;
 }
 
-void dpu_format_populate_addrs(struct msm_gem_address_space *aspace,
+void dpu_format_populate_addrs(struct msm_gem_vm *vm,
 			       struct drm_framebuffer *fb,
 			       struct dpu_hw_fmt_layout *layout);
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index ca4847b2b738..37475f2a20ac 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -1051,26 +1051,26 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms)
 {
 	struct msm_mmu *mmu;
 
-	if (!dpu_kms->base.aspace)
+	if (!dpu_kms->base.vm)
 		return;
 
-	mmu = dpu_kms->base.aspace->mmu;
+	mmu = dpu_kms->base.vm->mmu;
 	mmu->funcs->detach(mmu);
-	msm_gem_address_space_put(dpu_kms->base.aspace);
+	msm_gem_vm_put(dpu_kms->base.vm);
 
-	dpu_kms->base.aspace = NULL;
+	dpu_kms->base.vm = NULL;
 }
 
 static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms)
 {
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
-	aspace = msm_kms_init_aspace(dpu_kms->dev);
-	if (IS_ERR(aspace))
-		return PTR_ERR(aspace);
+	vm = msm_kms_init_vm(dpu_kms->dev);
+	if (IS_ERR(vm))
+		return PTR_ERR(vm);
 
-	dpu_kms->base.aspace = aspace;
+	dpu_kms->base.vm = vm;
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index 3ffac24333a2..f80b252603a2 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -72,7 +72,7 @@ static const uint32_t qcom_compressed_supported_formats[] = {
 
 /*
 * struct dpu_plane - local dpu plane structure
- * @aspace: address space pointer
+ * @vm: address space pointer
 * @csc_ptr: Points to dpu_csc_cfg structure to use for current
 * @catalog: Points to dpu catalog structure
 * @revalidate: force revalidation of all the plane properties
@@ -655,8 +655,8 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
 
 	DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", fb->base.id);
 
-	/* cache aspace */
-	pstate->aspace = kms->base.aspace;
+	/* cache vm */
+	pstate->vm = kms->base.vm;
 
 	/*
 	 * TODO: Need to sort out the msm_framebuffer_prepare() call below so
@@ -665,9 +665,9 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
 	 */
 	drm_gem_plane_helper_prepare_fb(plane, new_state);
 
-	if (pstate->aspace) {
+	if (pstate->vm) {
 		ret = msm_framebuffer_prepare(new_state->fb,
-				pstate->aspace, pstate->needs_dirtyfb);
+				pstate->vm, pstate->needs_dirtyfb);
 		if (ret) {
 			DPU_ERROR("failed to prepare framebuffer\n");
 			return ret;
@@ -690,7 +690,7 @@ static void dpu_plane_cleanup_fb(struct drm_plane *plane,
 
 	DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", old_state->fb->base.id);
 
-	msm_framebuffer_cleanup(old_state->fb, old_pstate->aspace,
+	msm_framebuffer_cleanup(old_state->fb, old_pstate->vm,
 				old_pstate->needs_dirtyfb);
 }
 
@@ -1187,7 +1187,7 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane,
 	pstate->needs_qos_remap |= (is_rt_pipe != pdpu->is_rt_pipe);
 	pdpu->is_rt_pipe = is_rt_pipe;
 
-	dpu_format_populate_addrs(pstate->aspace, new_state->fb, &pstate->layout);
+	dpu_format_populate_addrs(pstate->vm, new_state->fb, &pstate->layout);
 
 	DPU_DEBUG_PLANE(pdpu, "FB[%u] " DRM_RECT_FP_FMT "->crtc%u " DRM_RECT_FMT
 			", %p4cc ubwc %d\n", fb->base.id, DRM_RECT_FP_ARG(&state->src),
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
index 97090ca7842b..3a76b57c137c 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
@@ -17,7 +17,7 @@
 /**
 * struct dpu_plane_state: Define dpu extension of drm plane state object
 * @base:	base drm plane state object
- * @aspace:	pointer to address space for input/output buffers
+ * @vm:	pointer to address space for input/output buffers
 * @pipe:	software pipe description
 * @r_pipe:	software pipe description of the second pipe
 * @pipe_cfg:	software pipe configuration
@@ -34,7 +34,7 @@
 */
 struct dpu_plane_state {
 	struct drm_plane_state base;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	struct dpu_sw_pipe pipe;
 	struct dpu_sw_pipe r_pipe;
 	struct dpu_sw_pipe_cfg pipe_cfg;
diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c
index b8610aa806ea..0133c0c01a0b 100644
--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c
+++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c
@@ -120,7 +120,7 @@ static void unref_cursor_worker(struct drm_flip_work *work, void *val)
 	struct mdp4_kms *mdp4_kms = get_kms(&mdp4_crtc->base);
 	struct msm_kms *kms = &mdp4_kms->base.base;
 
-	msm_gem_unpin_iova(val, kms->aspace);
+	msm_gem_unpin_iova(val, kms->vm);
 	drm_gem_object_put(val);
 }
 
@@ -369,7 +369,7 @@ static void update_cursor(struct drm_crtc *crtc)
 		if (next_bo) {
 			/* take a obj ref + iova ref when we start scanning out: */
 			drm_gem_object_get(next_bo);
-			msm_gem_get_and_pin_iova(next_bo, kms->aspace, &iova);
+			msm_gem_get_and_pin_iova(next_bo, kms->vm, &iova);
 
 			/* enable cursor: */
 			mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_SIZE(dma),
@@ -427,7 +427,7 @@ static int mdp4_crtc_cursor_set(struct drm_crtc *crtc,
 	}
 
 	if (cursor_bo) {
-		ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, &iova);
+		ret = msm_gem_get_and_pin_iova(cursor_bo, kms->vm, &iova);
 		if (ret)
 			goto fail;
 	} else {
diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
index 6e4e74f9d63d..3c5f8c3a5059 100644
--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
@@ -120,15 +120,15 @@ static void mdp4_destroy(struct msm_kms *kms)
 {
 	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
 	struct device *dev = mdp4_kms->dev->dev;
-	struct msm_gem_address_space *aspace = kms->aspace;
+	struct msm_gem_vm *vm = kms->vm;
 
 	if (mdp4_kms->blank_cursor_iova)
-		msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->aspace);
+		msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm);
 	drm_gem_object_put(mdp4_kms->blank_cursor_bo);
 
-	if (aspace) {
-		aspace->mmu->funcs->detach(aspace->mmu);
-		msm_gem_address_space_put(aspace);
+	if (vm) {
+		vm->mmu->funcs->detach(vm->mmu);
+		msm_gem_vm_put(vm);
 	}
 
 	if (mdp4_kms->rpm_enabled)
@@ -380,7 +380,7 @@ static int mdp4_kms_init(struct drm_device *dev)
 	struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(priv->kms));
 	struct msm_kms *kms = NULL;
 	struct msm_mmu *mmu;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	int ret;
 	u32 major, minor;
 	unsigned long max_clk;
@@ -449,19 +449,19 @@ static int mdp4_kms_init(struct drm_device *dev)
 	} else if (!mmu) {
 		DRM_DEV_INFO(dev->dev, "no iommu, fallback to phys "
 				"contig buffers for scanout\n");
-		aspace = NULL;
+		vm = NULL;
 	} else {
-		aspace  = msm_gem_address_space_create(mmu,
+		vm  = msm_gem_vm_create(mmu,
 			"mdp4", 0x1000, 0x100000000 - 0x1000);
 
-		if (IS_ERR(aspace)) {
+		if (IS_ERR(vm)) {
 			if (!IS_ERR(mmu))
 				mmu->funcs->destroy(mmu);
-			ret = PTR_ERR(aspace);
+			ret = PTR_ERR(vm);
 			goto fail;
 		}
 
-		kms->aspace = aspace;
+		kms->vm = vm;
 	}
 
 	ret = modeset_init(mdp4_kms);
@@ -478,7 +478,7 @@ static int mdp4_kms_init(struct drm_device *dev)
 		goto fail;
 	}
 
-	ret = msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->aspace,
+	ret = msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->vm,
 			&mdp4_kms->blank_cursor_iova);
 	if (ret) {
 		DRM_DEV_ERROR(dev->dev, "could not pin blank-cursor bo: %d\n", ret);
diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
index 3fefb2088008..7743be6167f8 100644
--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
+++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
@@ -87,7 +87,7 @@ static int mdp4_plane_prepare_fb(struct drm_plane *plane,
 
 	drm_gem_plane_helper_prepare_fb(plane, new_state);
 
-	return msm_framebuffer_prepare(new_state->fb, kms->aspace, false);
+	return msm_framebuffer_prepare(new_state->fb, kms->vm, false);
 }
 
 static void mdp4_plane_cleanup_fb(struct drm_plane *plane,
@@ -102,7 +102,7 @@ static void mdp4_plane_cleanup_fb(struct drm_plane *plane,
 		return;
 
 	DBG("%s: cleanup: FB[%u]", mdp4_plane->name, fb->base.id);
-	msm_framebuffer_cleanup(fb, kms->aspace, false);
+	msm_framebuffer_cleanup(fb, kms->vm, false);
 }
 
 
@@ -153,13 +153,13 @@ static void mdp4_plane_set_scanout(struct drm_plane *plane,
 			MDP4_PIPE_SRC_STRIDE_B_P3(fb->pitches[3]));
 
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP0_BASE(pipe),
-			msm_framebuffer_iova(fb, kms->aspace, 0));
+			msm_framebuffer_iova(fb, kms->vm, 0));
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP1_BASE(pipe),
-			msm_framebuffer_iova(fb, kms->aspace, 1));
+			msm_framebuffer_iova(fb, kms->vm, 1));
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP2_BASE(pipe),
-			msm_framebuffer_iova(fb, kms->aspace, 2));
+			msm_framebuffer_iova(fb, kms->vm, 2));
 	mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP3_BASE(pipe),
-			msm_framebuffer_iova(fb, kms->aspace, 3));
+			msm_framebuffer_iova(fb, kms->vm, 3));
 }
 
 static void mdp4_write_csc_config(struct mdp4_kms *mdp4_kms,
diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
index 0f653e62b4a0..298861f373b0 100644
--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
@@ -169,7 +169,7 @@ static void unref_cursor_worker(struct drm_flip_work *work, void *val)
 	struct mdp5_kms *mdp5_kms = get_kms(&mdp5_crtc->base);
 	struct msm_kms *kms = &mdp5_kms->base.base;
 
-	msm_gem_unpin_iova(val, kms->aspace);
+	msm_gem_unpin_iova(val, kms->vm);
 	drm_gem_object_put(val);
 }
 
@@ -993,7 +993,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc,
 	if (!cursor_bo)
 		return -ENOENT;
 
-	ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace,
+	ret = msm_gem_get_and_pin_iova(cursor_bo, kms->vm,
 			&mdp5_crtc->cursor.iova);
 	if (ret) {
 		drm_gem_object_put(cursor_bo);
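The same teardown sequence repeats in dpu, mdp4 and mdp5; for reference,
the common shape after the rename (sketch only, not part of the diff):

    /* Sketch: kms-side vm teardown.  Detach the mmu first, then drop
     * the (possibly last) reference on the vm. */
    if (kms->vm) {
            kms->vm->mmu->funcs->detach(kms->vm->mmu);
            msm_gem_vm_put(kms->vm);
            kms->vm = NULL;
    }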
diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
index 374704cce656..bfbec278d19a 100644
--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
@@ -198,11 +198,11 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms);
 static void mdp5_kms_destroy(struct msm_kms *kms)
 {
 	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
-	struct msm_gem_address_space *aspace = kms->aspace;
+	struct msm_gem_vm *vm = kms->vm;
 
-	if (aspace) {
-		aspace->mmu->funcs->detach(aspace->mmu);
-		msm_gem_address_space_put(aspace);
+	if (vm) {
+		vm->mmu->funcs->detach(vm->mmu);
+		msm_gem_vm_put(vm);
 	}
 
 	mdp_kms_destroy(&mdp5_kms->base);
@@ -500,7 +500,7 @@ static int mdp5_kms_init(struct drm_device *dev)
 	struct mdp5_kms *mdp5_kms;
 	struct mdp5_cfg *config;
 	struct msm_kms *kms = priv->kms;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	int i, ret;
 
 	ret = mdp5_init(to_platform_device(dev->dev), dev);
@@ -534,13 +534,13 @@ static int mdp5_kms_init(struct drm_device *dev)
 	}
 
 	mdelay(16);
 
-	aspace = msm_kms_init_aspace(mdp5_kms->dev);
-	if (IS_ERR(aspace)) {
-		ret = PTR_ERR(aspace);
+	vm = msm_kms_init_vm(mdp5_kms->dev);
+	if (IS_ERR(vm)) {
+		ret = PTR_ERR(vm);
 		goto fail;
 	}
 
-	kms->aspace = aspace;
+	kms->vm = vm;
 
 	pm_runtime_put_sync(&pdev->dev);
diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
index 62de248ed1b0..34e38b999120 100644
--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
@@ -144,7 +144,7 @@ static int mdp5_plane_prepare_fb(struct drm_plane *plane,
 
 	drm_gem_plane_helper_prepare_fb(plane, new_state);
 
-	return msm_framebuffer_prepare(new_state->fb, kms->aspace, needs_dirtyfb);
+	return msm_framebuffer_prepare(new_state->fb, kms->vm, needs_dirtyfb);
 }
 
 static void mdp5_plane_cleanup_fb(struct drm_plane *plane,
@@ -159,7 +159,7 @@ static void mdp5_plane_cleanup_fb(struct drm_plane *plane,
 		return;
 
 	DBG("%s: cleanup: FB[%u]", plane->name, fb->base.id);
-	msm_framebuffer_cleanup(fb, kms->aspace, needed_dirtyfb);
+	msm_framebuffer_cleanup(fb, kms->vm, needed_dirtyfb);
 }
 
 static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state,
@@ -478,13 +478,13 @@ static void set_scanout_locked(struct mdp5_kms *mdp5_kms,
 			MDP5_PIPE_SRC_STRIDE_B_P3(fb->pitches[3]));
 
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC0_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->aspace, 0));
+			msm_framebuffer_iova(fb, kms->vm, 0));
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC1_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->aspace, 1));
+			msm_framebuffer_iova(fb, kms->vm, 1));
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC2_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->aspace, 2));
+			msm_framebuffer_iova(fb, kms->vm, 2));
 	mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC3_ADDR(pipe),
-			msm_framebuffer_iova(fb, kms->aspace, 3));
+			msm_framebuffer_iova(fb, kms->vm, 3));
 }
 
 /* Note: mdp5_plane->pipe_lock must be locked */
diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
index a98d24b7cb00..6ef3aaac1450 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -143,7 +143,7 @@ struct msm_dsi_host {
 
 	/* DSI 6G TX buffer*/
 	struct drm_gem_object *tx_gem_obj;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 
 	/* DSI v2 TX buffer */
 	void *tx_buf;
@@ -1158,10 +1158,10 @@ int dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_host, int size)
 	uint64_t iova;
 	u8 *data;
 
-	msm_host->aspace = msm_gem_address_space_get(priv->kms->aspace);
+	msm_host->vm = msm_gem_vm_get(priv->kms->vm);
 
 	data = msm_gem_kernel_new(dev, size, MSM_BO_WC,
-					msm_host->aspace,
+					msm_host->vm,
 					&msm_host->tx_gem_obj, &iova);
 
 	if (IS_ERR(data)) {
@@ -1205,10 +1205,10 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host)
 		return;
 
 	if (msm_host->tx_gem_obj) {
-		msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->aspace);
-		msm_gem_address_space_put(msm_host->aspace);
+		msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm);
+		msm_gem_vm_put(msm_host->vm);
 		msm_host->tx_gem_obj = NULL;
-		msm_host->aspace = NULL;
+		msm_host->vm = NULL;
 	}
 
 	if (msm_host->tx_buf)
@@ -1337,7 +1337,7 @@ int dsi_dma_base_get_6g(struct msm_dsi_host *msm_host, uint64_t *dma_base)
 		return -EINVAL;
 
 	return msm_gem_get_and_pin_iova(msm_host->tx_gem_obj,
-				priv->kms->aspace, dma_base);
+				priv->kms->vm, dma_base);
 }
 
 int dsi_dma_base_get_v2(struct msm_dsi_host *msm_host, uint64_t *dma_base)
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index e7c76d243ee7..88cd1ed59d48 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -345,7 +345,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
 	kref_init(&ctx->ref);
 	msm_submitqueue_init(dev, ctx);
 
-	ctx->aspace = msm_gpu_create_private_address_space(priv->gpu, current);
+	ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
 	file->driver_priv = ctx;
 
 	ctx->seqno = atomic_inc_return(&ident);
@@ -523,7 +523,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 	 * Don't pin the memory here - just get an address so that userspace can
 	 * be productive
 	 */
-	return msm_gem_get_iova(obj, ctx->aspace, iova);
+	return msm_gem_get_iova(obj, ctx->vm, iova);
 }
 
 static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
@@ -537,13 +537,13 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 		return -EINVAL;
 
 	/* Only supported if per-process address space is supported: */
-	if (priv->gpu->aspace == ctx->aspace)
+	if (priv->gpu->vm == ctx->vm)
 		return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");
 
 	if (should_fail(&fail_gem_iova, obj->size))
 		return -ENOMEM;
 
-	return msm_gem_set_iova(obj, ctx->aspace, iova);
+	return msm_gem_set_iova(obj, ctx->vm, iova);
 }
 
 static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index fee31680a6d5..ce1ef981a309 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -48,7 +48,7 @@ struct msm_rd_state;
 struct msm_perf_state;
 struct msm_gem_submit;
 struct msm_fence_context;
-struct msm_gem_address_space;
+struct msm_gem_vm;
 struct msm_gem_vma;
 struct msm_disp_state;
 
@@ -241,7 +241,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc);
 int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu);
 void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu);
 
-struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev);
+struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev);
 bool msm_use_mmu(struct drm_device *dev);
 
 int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
@@ -263,11 +263,11 @@ int msm_gem_prime_pin(struct drm_gem_object *obj);
 void msm_gem_prime_unpin(struct drm_gem_object *obj);
 
 int msm_framebuffer_prepare(struct drm_framebuffer *fb,
-		struct msm_gem_address_space *aspace, bool needs_dirtyfb);
+		struct msm_gem_vm *vm, bool needs_dirtyfb);
 void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
-		struct msm_gem_address_space *aspace, bool needed_dirtyfb);
+		struct msm_gem_vm *vm, bool needed_dirtyfb);
 uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb,
-		struct msm_gem_address_space *aspace, int plane);
+		struct msm_gem_vm *vm, int plane);
 struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane);
 const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb);
 struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev,
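A small userspace-visible consequence shows up in the msm_drv.c hunks
above: each drm_file now carries ctx->vm, and ioctls that only make sense
with per-process pagetables compare it against the global vm. Sketched in
isolation (not part of the diff):

    /* Sketch: on targets without per-process pagetables,
     * msm_gpu_create_private_vm() falls back to a reference on the
     * global gpu->vm, so a pointer comparison identifies the fallback. */
    ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);

    if (priv->gpu->vm == ctx->vm)
            return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");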
diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index 09268e416843..6df318b73534 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -76,7 +76,7 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
 
 /* prepare/pin all the fb's bo's for scanout. */
 int msm_framebuffer_prepare(struct drm_framebuffer *fb,
-		struct msm_gem_address_space *aspace,
+		struct msm_gem_vm *vm,
 		bool needs_dirtyfb)
 {
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
@@ -88,7 +88,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb,
 	atomic_inc(&msm_fb->prepare_count);
 
 	for (i = 0; i < n; i++) {
-		ret = msm_gem_get_and_pin_iova(fb->obj[i], aspace, &msm_fb->iova[i]);
+		ret = msm_gem_get_and_pin_iova(fb->obj[i], vm, &msm_fb->iova[i]);
 		drm_dbg_state(fb->dev, "FB[%u]: iova[%d]: %08llx (%d)\n",
 			      fb->base.id, i, msm_fb->iova[i], ret);
 		if (ret)
@@ -99,7 +99,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb,
 }
 
 void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
-		struct msm_gem_address_space *aspace,
+		struct msm_gem_vm *vm,
 		bool needed_dirtyfb)
 {
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
@@ -109,14 +109,14 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
 		refcount_dec(&msm_fb->dirtyfb);
 
 	for (i = 0; i < n; i++)
-		msm_gem_unpin_iova(fb->obj[i], aspace);
+		msm_gem_unpin_iova(fb->obj[i], vm);
 
 	if (!atomic_dec_return(&msm_fb->prepare_count))
 		memset(msm_fb->iova, 0, sizeof(msm_fb->iova));
 }
 
 uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb,
-		struct msm_gem_address_space *aspace, int plane)
+		struct msm_gem_vm *vm, int plane)
 {
 	struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
 	return msm_fb->iova[plane] + fb->offsets[plane];
diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c
index c62249b1ab3d..b5969374d53f 100644
--- a/drivers/gpu/drm/msm/msm_fbdev.c
+++ b/drivers/gpu/drm/msm/msm_fbdev.c
@@ -122,7 +122,7 @@ int msm_fbdev_driver_fbdev_probe(struct drm_fb_helper *helper,
 	 * in panic (ie. lock-safe, etc) we could avoid pinning the
 	 * buffer now:
 	 */
-	ret = msm_gem_get_and_pin_iova(bo, priv->kms->aspace, &paddr);
+	ret = msm_gem_get_and_pin_iova(bo, priv->kms->vm, &paddr);
 	if (ret) {
 		DRM_DEV_ERROR(dev->dev, "failed to get buffer obj iova: %d\n", ret);
 		goto fail;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 747e2ab8373a..c29367239283 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -398,14 +398,14 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
 }
 
 static struct msm_gem_vma *add_vma(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace)
+		struct msm_gem_vm *vm)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_gem_vma *vma;
 
 	msm_gem_assert_locked(obj);
 
-	vma = msm_gem_vma_new(aspace);
+	vma = msm_gem_vma_new(vm);
 	if (!vma)
 		return ERR_PTR(-ENOMEM);
 
@@ -415,7 +415,7 @@ static struct msm_gem_vma *add_vma(struct drm_gem_object *obj,
 }
 
 static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace)
+		struct msm_gem_vm *vm)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_gem_vma *vma;
@@ -423,7 +423,7 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
 	msm_gem_assert_locked(obj);
 
 	list_for_each_entry(vma, &msm_obj->vmas, list) {
-		if (vma->aspace == aspace)
+		if (vma->vm == vm)
 			return vma;
 	}
 
@@ -454,7 +454,7 @@ put_iova_spaces(struct drm_gem_object *obj, bool close)
 	msm_gem_assert_locked(obj);
 
 	list_for_each_entry(vma, &msm_obj->vmas, list) {
-		if (vma->aspace) {
+		if (vma->vm) {
 			msm_gem_vma_purge(vma);
 			if (close)
 				msm_gem_vma_close(vma);
@@ -477,19 +477,19 @@ put_iova_vmas(struct drm_gem_object *obj)
 }
 
 static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace,
+		struct msm_gem_vm *vm,
 		u64 range_start, u64 range_end)
 {
 	struct msm_gem_vma *vma;
 
 	msm_gem_assert_locked(obj);
 
-	vma = lookup_vma(obj, aspace);
+	vma = lookup_vma(obj, vm);
 
 	if (!vma) {
 		int ret;
 
-		vma = add_vma(obj, aspace);
+		vma = add_vma(obj, vm);
 		if (IS_ERR(vma))
 			return vma;
 
@@ -561,13 +561,13 @@ void msm_gem_unpin_active(struct drm_gem_object *obj)
 }
 
 struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
-					   struct msm_gem_address_space *aspace)
+					   struct msm_gem_vm *vm)
 {
-	return get_vma_locked(obj, aspace, 0, U64_MAX);
+	return get_vma_locked(obj, vm, 0, U64_MAX);
 }
 
 static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova,
+		struct msm_gem_vm *vm, uint64_t *iova,
 		u64 range_start, u64 range_end)
 {
 	struct msm_gem_vma *vma;
@@ -575,7 +575,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
 
 	msm_gem_assert_locked(obj);
 
-	vma = get_vma_locked(obj, aspace, range_start, range_end);
+	vma = get_vma_locked(obj, vm, range_start, range_end);
 	if (IS_ERR(vma))
 		return PTR_ERR(vma);
 
@@ -593,13 +593,13 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
 * limits iova to specified range (in pages)
 */
 int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova,
+		struct msm_gem_vm *vm, uint64_t *iova,
 		u64 range_start, u64 range_end)
 {
 	int ret;
 
 	msm_gem_lock(obj);
-	ret = get_and_pin_iova_range_locked(obj, aspace, iova, range_start, range_end);
+	ret = get_and_pin_iova_range_locked(obj, vm, iova, range_start, range_end);
 	msm_gem_unlock(obj);
 
 	return ret;
@@ -607,9 +607,9 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
 
 /* get iova and pin it. Should have a matching put */
 int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova)
+		struct msm_gem_vm *vm, uint64_t *iova)
 {
-	return msm_gem_get_and_pin_iova_range(obj, aspace, iova, 0, U64_MAX);
+	return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX);
 }
 
 /*
@@ -617,13 +617,13 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
 * valid for the life of the object
 */
 int msm_gem_get_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova)
+		struct msm_gem_vm *vm, uint64_t *iova)
 {
 	struct msm_gem_vma *vma;
 	int ret = 0;
 
 	msm_gem_lock(obj);
-	vma = get_vma_locked(obj, aspace, 0, U64_MAX);
+	vma = get_vma_locked(obj, vm, 0, U64_MAX);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
 	} else {
@@ -635,9 +635,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
 }
 
 static int clear_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace)
+		struct msm_gem_vm *vm)
 {
-	struct msm_gem_vma *vma = lookup_vma(obj, aspace);
+	struct msm_gem_vma *vma = lookup_vma(obj, vm);
 
 	if (!vma)
 		return 0;
@@ -657,20 +657,20 @@ static int clear_iova(struct drm_gem_object *obj,
 * Setting an iova of zero will clear the vma.
 */
 int msm_gem_set_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t iova)
+		struct msm_gem_vm *vm, uint64_t iova)
 {
 	int ret = 0;
 
 	msm_gem_lock(obj);
 	if (!iova) {
-		ret = clear_iova(obj, aspace);
+		ret = clear_iova(obj, vm);
 	} else {
 		struct msm_gem_vma *vma;
-		vma = get_vma_locked(obj, aspace, iova, iova + obj->size);
+		vma = get_vma_locked(obj, vm, iova, iova + obj->size);
 		if (IS_ERR(vma)) {
 			ret = PTR_ERR(vma);
 		} else if (GEM_WARN_ON(vma->iova != iova)) {
-			clear_iova(obj, aspace);
+			clear_iova(obj, vm);
 			ret = -EBUSY;
 		}
 	}
@@ -685,12 +685,12 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
 * to get rid of it
 */
 void msm_gem_unpin_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace)
+		struct msm_gem_vm *vm)
 {
 	struct msm_gem_vma *vma;
 
 	msm_gem_lock(obj);
-	vma = lookup_vma(obj, aspace);
+	vma = lookup_vma(obj, vm);
 	if (!GEM_WARN_ON(!vma)) {
 		msm_gem_unpin_locked(obj);
 	}
@@ -1008,23 +1008,23 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
 
 	list_for_each_entry(vma, &msm_obj->vmas, list) {
 		const char *name, *comm;
-		if (vma->aspace) {
-			struct msm_gem_address_space *aspace = vma->aspace;
+		if (vma->vm) {
+			struct msm_gem_vm *vm = vma->vm;
 			struct task_struct *task =
-				get_pid_task(aspace->pid, PIDTYPE_PID);
+				get_pid_task(vm->pid, PIDTYPE_PID);
 			if (task) {
 				comm = kstrdup(task->comm, GFP_KERNEL);
 				put_task_struct(task);
 			} else {
 				comm = NULL;
 			}
-			name = aspace->name;
+			name = vm->name;
 		} else {
 			name = comm = NULL;
 		}
-		seq_printf(m, " [%s%s%s: aspace=%p, %08llx,%s]",
+		seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]",
 				name, comm ? ":" : "", comm ? comm : "",
-				vma->aspace, vma->iova,
+				vma->vm, vma->iova,
 				vma->mapped ? "mapped" : "unmapped");
"mapped" : "unmapped"); kfree(comm); } @@ -1349,7 +1349,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, } void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, + uint32_t flags, struct msm_gem_vm *vm, struct drm_gem_object **bo, uint64_t *iova) { void *vaddr; @@ -1360,14 +1360,14 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, return ERR_CAST(obj); if (iova) { - ret = msm_gem_get_and_pin_iova(obj, aspace, iova); + ret = msm_gem_get_and_pin_iova(obj, vm, iova); if (ret) goto err; } vaddr = msm_gem_get_vaddr(obj); if (IS_ERR(vaddr)) { - msm_gem_unpin_iova(obj, aspace); + msm_gem_unpin_iova(obj, vm); ret = PTR_ERR(vaddr); goto err; } @@ -1384,13 +1384,13 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, } void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { if (IS_ERR_OR_NULL(bo)) return; msm_gem_put_vaddr(bo); - msm_gem_unpin_iova(bo, aspace); + msm_gem_unpin_iova(bo, vm); drm_gem_object_put(bo); } diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 85f0257e83da..d2f39a371373 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -22,7 +22,7 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */ -struct msm_gem_address_space { +struct msm_gem_vm { const char *name; /* NOTE: mm managed at the page level, size is in # of pages * and position mm_node->start is in # of pages: @@ -47,13 +47,13 @@ struct msm_gem_address_space { uint64_t va_size; }; -struct msm_gem_address_space * -msm_gem_address_space_get(struct msm_gem_address_space *aspace); +struct msm_gem_vm * +msm_gem_vm_get(struct msm_gem_vm *vm); -void msm_gem_address_space_put(struct msm_gem_address_space *aspace); +void msm_gem_vm_put(struct msm_gem_vm *vm); -struct msm_gem_address_space * -msm_gem_address_space_create(struct msm_mmu *mmu, const char *name, +struct msm_gem_vm * +msm_gem_vm_create(struct msm_mmu *mmu, const char *name, u64 va_start, u64 size); struct msm_fence_context; @@ -61,12 +61,12 @@ struct msm_fence_context; struct msm_gem_vma { struct drm_mm_node node; uint64_t iova; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct list_head list; /* node in msm_gem_object::vmas */ bool mapped; }; -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace); +struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm); int msm_gem_vma_init(struct msm_gem_vma *vma, int size, u64 range_start, u64 range_end); void msm_gem_vma_purge(struct msm_gem_vma *vma); @@ -127,18 +127,18 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma); void msm_gem_unpin_locked(struct drm_gem_object *obj); void msm_gem_unpin_active(struct drm_gem_object *obj); struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); + struct msm_gem_vm *vm); int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); + struct msm_gem_vm *vm, uint64_t *iova); int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t iova); + struct msm_gem_vm *vm, uint64_t iova); int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 85f0257e83da..d2f39a371373 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -22,7 +22,7 @@
 #define MSM_BO_STOLEN        0x10000000 /* try to use stolen/splash memory */
 #define MSM_BO_MAP_PRIV      0x20000000 /* use IOMMU_PRIV when mapping */
 
-struct msm_gem_address_space {
+struct msm_gem_vm {
 	const char *name;
 	/* NOTE: mm managed at the page level, size is in # of pages
 	 * and position mm_node->start is in # of pages:
@@ -47,13 +47,13 @@ struct msm_gem_address_space {
 	uint64_t va_size;
 };
 
-struct msm_gem_address_space *
-msm_gem_address_space_get(struct msm_gem_address_space *aspace);
+struct msm_gem_vm *
+msm_gem_vm_get(struct msm_gem_vm *vm);
 
-void msm_gem_address_space_put(struct msm_gem_address_space *aspace);
+void msm_gem_vm_put(struct msm_gem_vm *vm);
 
-struct msm_gem_address_space *
-msm_gem_address_space_create(struct msm_mmu *mmu, const char *name,
+struct msm_gem_vm *
+msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
 		u64 va_start, u64 size);
 
 struct msm_fence_context;
@@ -61,12 +61,12 @@ struct msm_fence_context;
 struct msm_gem_vma {
 	struct drm_mm_node node;
 	uint64_t iova;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	struct list_head list;	/* node in msm_gem_object::vmas */
 	bool mapped;
 };
 
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace);
+struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm);
 int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
 		u64 range_start, u64 range_end);
 void msm_gem_vma_purge(struct msm_gem_vma *vma);
@@ -127,18 +127,18 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
 void msm_gem_unpin_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_active(struct drm_gem_object *obj);
 struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace);
+		struct msm_gem_vm *vm);
 int msm_gem_get_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova);
+		struct msm_gem_vm *vm, uint64_t *iova);
 int msm_gem_set_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t iova);
+		struct msm_gem_vm *vm, uint64_t iova);
 int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova,
+		struct msm_gem_vm *vm, uint64_t *iova,
 		u64 range_start, u64 range_end);
 int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace, uint64_t *iova);
+		struct msm_gem_vm *vm, uint64_t *iova);
 void msm_gem_unpin_iova(struct drm_gem_object *obj,
-		struct msm_gem_address_space *aspace);
+		struct msm_gem_vm *vm);
 void msm_gem_pin_obj_locked(struct drm_gem_object *obj);
 struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj);
 void msm_gem_unpin_pages_locked(struct drm_gem_object *obj);
@@ -160,10 +160,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
 struct drm_gem_object *msm_gem_new(struct drm_device *dev,
 		uint32_t size, uint32_t flags);
 void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
-		uint32_t flags, struct msm_gem_address_space *aspace,
+		uint32_t flags, struct msm_gem_vm *vm,
 		struct drm_gem_object **bo, uint64_t *iova);
 void msm_gem_kernel_put(struct drm_gem_object *bo,
-		struct msm_gem_address_space *aspace);
+		struct msm_gem_vm *vm);
 struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 		struct dma_buf *dmabuf, struct sg_table *sgt);
 __printf(2, 3)
@@ -257,7 +257,7 @@ struct msm_gem_submit {
 	struct kref ref;
 	struct drm_device *dev;
 	struct msm_gpu *gpu;
-	struct msm_gem_address_space *aspace;
+	struct msm_gem_vm *vm;
 	struct list_head node;   /* node in ring submit list */
 	struct drm_exec exec;
 	uint32_t seqno;		/* Sequence number of the submit on the ring */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 99d3f2c4bae5..30a281aa1353 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 
 	kref_init(&submit->ref);
 	submit->dev = dev;
-	submit->aspace = queue->ctx->aspace;
+	submit->vm = queue->ctx->vm;
 	submit->gpu = gpu;
 	submit->cmd = (void *)&submit->bos[nr_bos];
 	submit->queue = queue;
@@ -302,7 +302,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 		struct msm_gem_vma *vma;
 
 		/* if locking succeeded, pin bo: */
-		vma = msm_gem_get_vma_locked(obj, submit->aspace);
+		vma = msm_gem_get_vma_locked(obj, submit->vm);
 		if (IS_ERR(vma)) {
 			ret = PTR_ERR(vma);
 			break;
@@ -659,7 +659,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;
 
-	if (unlikely(!ctx->aspace) && !capable(CAP_SYS_RAWIO)) {
+	if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) {
 		DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n");
 		return -EPERM;
 	}
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 11e842dda73c..9419692f0cc8 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -10,45 +10,44 @@
 #include "msm_mmu.h"
 
 static void
-msm_gem_address_space_destroy(struct kref *kref)
+msm_gem_vm_destroy(struct kref *kref)
 {
-	struct msm_gem_address_space *aspace = container_of(kref,
-			struct msm_gem_address_space, kref);
-
-	drm_mm_takedown(&aspace->mm);
-	if (aspace->mmu)
-		aspace->mmu->funcs->destroy(aspace->mmu);
-	put_pid(aspace->pid);
-	kfree(aspace);
+	struct msm_gem_vm *vm = container_of(kref, struct msm_gem_vm, kref);
+
+	drm_mm_takedown(&vm->mm);
+	if (vm->mmu)
+		vm->mmu->funcs->destroy(vm->mmu);
+	put_pid(vm->pid);
+	kfree(vm);
 }
 
-void msm_gem_address_space_put(struct msm_gem_address_space *aspace)
+void msm_gem_vm_put(struct msm_gem_vm *vm)
 {
-	if (aspace)
-		kref_put(&aspace->kref, msm_gem_address_space_destroy);
+	if (vm)
+		kref_put(&vm->kref, msm_gem_vm_destroy);
 }
 
-struct msm_gem_address_space *
msm_gem_address_space * -msm_gem_address_space_get(struct msm_gem_address_space *aspace) +struct msm_gem_vm * +msm_gem_vm_get(struct msm_gem_vm *vm) { - if (!IS_ERR_OR_NULL(aspace)) - kref_get(&aspace->kref); + if (!IS_ERR_OR_NULL(vm)) + kref_get(&vm->kref); - return aspace; + return vm; } /* Actually unmap memory for the vma */ void msm_gem_vma_purge(struct msm_gem_vma *vma) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; unsigned size = vma->node.size; /* Don't do anything if the memory isn't mapped */ if (!vma->mapped) return; - aspace->mmu->funcs->unmap(aspace->mmu, vma->iova, size); + vm->mmu->funcs->unmap(vm->mmu, vma->iova, size); vma->mapped = false; } @@ -58,7 +57,7 @@ int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; int ret; if (GEM_WARN_ON(!vma->iova)) @@ -69,7 +68,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, vma->mapped = true; - if (!aspace) + if (!vm) return 0; /* @@ -81,7 +80,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret = aspace->mmu->funcs->map(aspace->mmu, vma->iova, sgt, size, prot); + ret = vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot); if (ret) { vma->mapped = false; @@ -93,21 +92,21 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, /* Close an iova. Warn if it is still in use */ void msm_gem_vma_close(struct msm_gem_vma *vma) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; GEM_WARN_ON(vma->mapped); - spin_lock(&aspace->lock); + spin_lock(&vm->lock); if (vma->iova) drm_mm_remove_node(&vma->node); - spin_unlock(&aspace->lock); + spin_unlock(&vm->lock); vma->iova = 0; - msm_gem_address_space_put(aspace); + msm_gem_vm_put(vm); } -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) +struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm) { struct msm_gem_vma *vma; @@ -115,7 +114,7 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) if (!vma) return NULL; - vma->aspace = aspace; + vma->vm = vm; return vma; } @@ -124,20 +123,20 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) int msm_gem_vma_init(struct msm_gem_vma *vma, int size, u64 range_start, u64 range_end) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; int ret; - if (GEM_WARN_ON(!aspace)) + if (GEM_WARN_ON(!vm)) return -EINVAL; if (GEM_WARN_ON(vma->iova)) return -EBUSY; - spin_lock(&aspace->lock); - ret = drm_mm_insert_node_in_range(&aspace->mm, &vma->node, + spin_lock(&vm->lock); + ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, size, PAGE_SIZE, 0, range_start, range_end, 0); - spin_unlock(&aspace->lock); + spin_unlock(&vm->lock); if (ret) return ret; @@ -145,33 +144,33 @@ int msm_gem_vma_init(struct msm_gem_vma *vma, int size, vma->iova = vma->node.start; vma->mapped = false; - kref_get(&aspace->kref); + kref_get(&vm->kref); return 0; } -struct msm_gem_address_space * -msm_gem_address_space_create(struct msm_mmu *mmu, const char *name, +struct msm_gem_vm * +msm_gem_vm_create(struct msm_mmu *mmu, const char *name, u64 va_start, u64 size) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; if (IS_ERR(mmu)) return ERR_CAST(mmu); - aspace = kzalloc(sizeof(*aspace), GFP_KERNEL); - if (!aspace) + vm = kzalloc(sizeof(*vm), 
GFP_KERNEL); + if (!vm) return ERR_PTR(-ENOMEM); - spin_lock_init(&aspace->lock); - aspace->name = name; - aspace->mmu = mmu; - aspace->va_start = va_start; - aspace->va_size = size; + spin_lock_init(&vm->lock); + vm->name = name; + vm->mmu = mmu; + vm->va_start = va_start; + vm->va_size = size; - drm_mm_init(&aspace->mm, va_start, size); + drm_mm_init(&vm->mm, va_start, size); - kref_init(&aspace->kref); + kref_init(&vm->kref); - return aspace; + return vm; } diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 6ff9541990dc..b61cc939363d 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -377,8 +377,8 @@ static void recover_worker(struct kthread_work *work) /* Increment the fault counts */ submit->queue->faults++; - if (submit->aspace) - submit->aspace->faults++; + if (submit->vm) + submit->vm->faults++; get_comm_cmdline(submit, &comm, &cmd); @@ -483,7 +483,7 @@ static void fault_worker(struct kthread_work *work) resume_smmu: memset(&gpu->fault_info, 0, sizeof(gpu->fault_info)); - gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu); + gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); mutex_unlock(&gpu->lock); } @@ -820,10 +820,10 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu) } /* Return a new address space for a msm_drm_private instance */ -struct msm_gem_address_space * -msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task) +struct msm_gem_vm * +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) { - struct msm_gem_address_space *aspace = NULL; + struct msm_gem_vm *vm = NULL; if (!gpu) return NULL; @@ -831,16 +831,16 @@ msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *ta * If the target doesn't support private address spaces then return * the global one */ - if (gpu->funcs->create_private_address_space) { - aspace = gpu->funcs->create_private_address_space(gpu); - if (!IS_ERR(aspace)) - aspace->pid = get_pid(task_pid(task)); + if (gpu->funcs->create_private_vm) { + vm = gpu->funcs->create_private_vm(gpu); + if (!IS_ERR(vm)) + vm->pid = get_pid(task_pid(task)); } - if (IS_ERR_OR_NULL(aspace)) - aspace = msm_gem_address_space_get(gpu->aspace); + if (IS_ERR_OR_NULL(vm)) + vm = msm_gem_vm_get(gpu->vm); - return aspace; + return vm; } int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, @@ -936,18 +936,18 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, msm_devfreq_init(gpu); - gpu->aspace = gpu->funcs->create_address_space(gpu, pdev); + gpu->vm = gpu->funcs->create_vm(gpu, pdev); - if (gpu->aspace == NULL) + if (gpu->vm == NULL) DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", name); - else if (IS_ERR(gpu->aspace)) { - ret = PTR_ERR(gpu->aspace); + else if (IS_ERR(gpu->vm)) { + ret = PTR_ERR(gpu->vm); goto fail; } memptrs = msm_gem_kernel_new(drm, sizeof(struct msm_rbmemptrs) * nr_rings, - check_apriv(gpu, MSM_BO_WC), gpu->aspace, &gpu->memptrs_bo, + check_apriv(gpu, MSM_BO_WC), gpu->vm, &gpu->memptrs_bo, &memptrs_iova); if (IS_ERR(memptrs)) { @@ -991,7 +991,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, gpu->rb[i] = NULL; } - msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace); + msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); platform_set_drvdata(pdev, NULL); return ret; @@ -1008,11 +1008,11 @@ void msm_gpu_cleanup(struct msm_gpu *gpu) gpu->rb[i] = NULL; } - msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace); + 
msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); - if (!IS_ERR_OR_NULL(gpu->aspace)) { - gpu->aspace->mmu->funcs->detach(gpu->aspace->mmu); - msm_gem_address_space_put(gpu->aspace); + if (!IS_ERR_OR_NULL(gpu->vm)) { + gpu->vm->mmu->funcs->detach(gpu->vm->mmu); + msm_gem_vm_put(gpu->vm); } if (gpu->worker) { diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 01a3b2770d71..edbdd894adfb 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -78,10 +78,8 @@ struct msm_gpu_funcs { /* note: gpu_set_freq() can assume that we have been pm_resumed */ void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp, bool suspended); - struct msm_gem_address_space *(*create_address_space) - (struct msm_gpu *gpu, struct platform_device *pdev); - struct msm_gem_address_space *(*create_private_address_space) - (struct msm_gpu *gpu); + struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev); + struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu); uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); /** @@ -228,7 +226,7 @@ struct msm_gpu { void __iomem *mmio; int irq; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; /* Power Control: */ struct regulator *gpu_reg, *gpu_cx; @@ -356,8 +354,8 @@ struct msm_context { */ int queueid; - /** @aspace: the per-process GPU address-space */ - struct msm_gem_address_space *aspace; + /** @vm: the per-process GPU address-space */ + struct msm_gem_vm *vm; /** @kref: the reference count */ struct kref ref; @@ -667,8 +665,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config); -struct msm_gem_address_space * -msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task); +struct msm_gem_vm * +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task); void msm_gpu_cleanup(struct msm_gpu *gpu); diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index f3326d09bdbc..3649276ea1b2 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -164,9 +164,9 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc) vblank_ctrl_queue_work(priv, crtc, false); } -struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev) +struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct msm_mmu *mmu; struct device *mdp_dev = dev->dev; struct device *mdss_dev = mdp_dev->parent; @@ -190,14 +190,14 @@ struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev) return NULL; } - aspace = msm_gem_address_space_create(mmu, "mdp_kms", + vm = msm_gem_vm_create(mmu, "mdp_kms", 0x1000, 0x100000000 - 0x1000); - if (IS_ERR(aspace)) { - dev_err(mdp_dev, "aspace create, error %pe\n", aspace); + if (IS_ERR(vm)) { + dev_err(mdp_dev, "vm create, error %pe\n", vm); mmu->funcs->destroy(mmu); } - return aspace; + return vm; } void msm_drm_kms_uninit(struct device *dev) diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h index e60162744c66..73da232237bc 100644 --- a/drivers/gpu/drm/msm/msm_kms.h +++ b/drivers/gpu/drm/msm/msm_kms.h @@ -129,7 +129,7 @@ struct msm_kms { bool irq_requested; /* mapper-id used to request GEM buffer mapped for scanout: */ - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; /* disp snapshot support */ struct kthread_worker 
*dump_worker; diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index c803556a8f64..edb8e3bee955 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -78,7 +78,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, ring->start = msm_gem_kernel_new(gpu->dev, MSM_GPU_RINGBUFFER_SZ, check_apriv(gpu, MSM_BO_WC | MSM_BO_GPU_READONLY), - gpu->aspace, &ring->bo, &ring->iova); + gpu->vm, &ring->bo, &ring->iova); if (IS_ERR(ring->start)) { ret = PTR_ERR(ring->start); @@ -130,7 +130,7 @@ void msm_ringbuffer_destroy(struct msm_ringbuffer *ring) msm_fence_context_free(ring->fctx); - msm_gem_kernel_put(ring->bo, ring->gpu->aspace); + msm_gem_kernel_put(ring->bo, ring->gpu->vm); kfree(ring); } diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c index 1acc0fe36353..6298233c3568 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref) kfree(ctx->entities[i]); } - msm_gem_address_space_put(ctx->aspace); + msm_gem_vm_put(ctx->vm); kfree(ctx->comm); kfree(ctx->cmdline); kfree(ctx);

From patchwork Sat Dec 7 16:15:07 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848323
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Akhil P Oommen, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 07/24] drm/msm: Remove vram carveout support
Date: Sat, 7 Dec 2024 08:15:07 -0800
Message-ID: <20241207161651.410556-8-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>
References: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

It is standing in the way of drm_gpuvm / VM_BIND support, and it is frequently broken and rarely tested. As far as I know it is only needed for a roughly 10 year old, not-quite-upstream SoC (msm8974). Maybe we can add support back in later, but I'm doubtful.
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_drv.c | 117 +----------------------- drivers/gpu/drm/msm/msm_drv.h | 11 --- drivers/gpu/drm/msm/msm_gem.c | 131 +++------------------------ drivers/gpu/drm/msm/msm_gem.h | 5 - drivers/gpu/drm/msm/msm_gem_submit.c | 5 - 5 files changed, 13 insertions(+), 256 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 88cd1ed59d48..a5a95a53d2c8 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -46,12 +46,6 @@ #define MSM_VERSION_MINOR 12 #define MSM_VERSION_PATCHLEVEL 0 -static void msm_deinit_vram(struct drm_device *ddev); - -static char *vram = "16m"; -MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without IOMMU/GPUMMU)"); -module_param(vram, charp, 0); - bool dumpstate; MODULE_PARM_DESC(dumpstate, "Dump KMS state on errors"); module_param(dumpstate, bool, 0600); @@ -97,8 +91,6 @@ static int msm_drm_uninit(struct device *dev) if (priv->kms) msm_drm_kms_uninit(dev); - msm_deinit_vram(ddev); - component_unbind_all(dev, ddev); ddev->dev_private = NULL; @@ -109,107 +101,6 @@ static int msm_drm_uninit(struct device *dev) return 0; } -bool msm_use_mmu(struct drm_device *dev) -{ - struct msm_drm_private *priv = dev->dev_private; - - /* - * a2xx comes with its own MMU - * On other platforms IOMMU can be declared specified either for the - * MDP/DPU device or for its parent, MDSS device. - */ - return priv->is_a2xx || - device_iommu_mapped(dev->dev) || - device_iommu_mapped(dev->dev->parent); -} - -static int msm_init_vram(struct drm_device *dev) -{ - struct msm_drm_private *priv = dev->dev_private; - struct device_node *node; - unsigned long size = 0; - int ret = 0; - - /* In the device-tree world, we could have a 'memory-region' - * phandle, which gives us a link to our "vram". Allocating - * is all nicely abstracted behind the dma api, but we need - * to know the entire size to allocate it all in one go. There - * are two cases: - * 1) device with no IOMMU, in which case we need exclusive - * access to a VRAM carveout big enough for all gpu - * buffers - * 2) device with IOMMU, but where the bootloader puts up - * a splash screen. In this case, the VRAM carveout - * need only be large enough for fbdev fb. But we need - * exclusive access to the buffer to avoid the kernel - * using those pages for other purposes (which appears - * as corruption on screen before we have a chance to - * load and do initial modeset) - */ - - node = of_parse_phandle(dev->dev->of_node, "memory-region", 0); - if (node) { - struct resource r; - ret = of_address_to_resource(node, 0, &r); - of_node_put(node); - if (ret) - return ret; - size = r.end - r.start + 1; - DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start); - - /* if we have no IOMMU, then we need to use carveout allocator. 
- * Grab the entire DMA chunk carved out in early startup in - * mach-msm: - */ - } else if (!msm_use_mmu(dev)) { - DRM_INFO("using %s VRAM carveout\n", vram); - size = memparse(vram, NULL); - } - - if (size) { - unsigned long attrs = 0; - void *p; - - priv->vram.size = size; - - drm_mm_init(&priv->vram.mm, 0, (size >> PAGE_SHIFT) - 1); - spin_lock_init(&priv->vram.lock); - - attrs |= DMA_ATTR_NO_KERNEL_MAPPING; - attrs |= DMA_ATTR_WRITE_COMBINE; - - /* note that for no-kernel-mapping, the vaddr returned - * is bogus, but non-null if allocation succeeded: - */ - p = dma_alloc_attrs(dev->dev, size, - &priv->vram.paddr, GFP_KERNEL, attrs); - if (!p) { - DRM_DEV_ERROR(dev->dev, "failed to allocate VRAM\n"); - priv->vram.paddr = 0; - return -ENOMEM; - } - - DRM_DEV_INFO(dev->dev, "VRAM: %08x->%08x\n", - (uint32_t)priv->vram.paddr, - (uint32_t)(priv->vram.paddr + size)); - } - - return ret; -} - -static void msm_deinit_vram(struct drm_device *ddev) -{ - struct msm_drm_private *priv = ddev->dev_private; - unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING; - - if (!priv->vram.paddr) - return; - - drm_mm_takedown(&priv->vram.mm); - dma_free_attrs(ddev->dev, priv->vram.size, NULL, priv->vram.paddr, - attrs); -} - static int msm_drm_init(struct device *dev, const struct drm_driver *drv) { struct msm_drm_private *priv = dev_get_drvdata(dev); @@ -256,16 +147,12 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) goto err_destroy_wq; } - ret = msm_init_vram(ddev); - if (ret) - goto err_destroy_wq; - dma_set_max_seg_size(dev, UINT_MAX); /* Bind all our sub-components: */ ret = component_bind_all(dev, ddev); if (ret) - goto err_deinit_vram; + goto err_destroy_wq; // ret = msm_gem_shrinker_init(ddev); if (ret) @@ -302,8 +189,6 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) return ret; -err_deinit_vram: - msm_deinit_vram(ddev); err_destroy_wq: destroy_workqueue(priv->wq); err_put_dev: diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index ce1ef981a309..20a0f8f23490 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -183,17 +183,6 @@ struct msm_drm_private { struct msm_drm_thread event_thread[MAX_CRTCS]; - /* VRAM carveout, used when no IOMMU: */ - struct { - unsigned long size; - dma_addr_t paddr; - /* NOTE: mm managed at the page level, size is in # of pages - * and position mm_node->start is in # of pages: - */ - struct drm_mm mm; - spinlock_t lock; /* Protects drm_mm node allocation/removal */ - } vram; - struct notifier_block vmap_notifier; struct shrinker *shrinker; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index c29367239283..f42bfa70502a 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -17,24 +17,8 @@ #include #include "msm_drv.h" -#include "msm_fence.h" #include "msm_gem.h" #include "msm_gpu.h" -#include "msm_mmu.h" - -static dma_addr_t physaddr(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - return (((dma_addr_t)msm_obj->vram_node->start) << PAGE_SHIFT) + - priv->vram.paddr; -} - -static bool use_pages(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - return !msm_obj->vram_node; -} static void update_device_mem(struct msm_drm_private *priv, ssize_t size) { @@ -135,36 +119,6 @@ static void update_lru(struct drm_gem_object *obj) mutex_unlock(&priv->lru.lock); } -/* allocate pages from VRAM carveout, 
used when no IOMMU: */ -static struct page **get_pages_vram(struct drm_gem_object *obj, int npages) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - dma_addr_t paddr; - struct page **p; - int ret, i; - - p = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); - if (!p) - return ERR_PTR(-ENOMEM); - - spin_lock(&priv->vram.lock); - ret = drm_mm_insert_node(&priv->vram.mm, msm_obj->vram_node, npages); - spin_unlock(&priv->vram.lock); - if (ret) { - kvfree(p); - return ERR_PTR(ret); - } - - paddr = physaddr(obj); - for (i = 0; i < npages; i++) { - p[i] = pfn_to_page(__phys_to_pfn(paddr)); - paddr += PAGE_SIZE; - } - - return p; -} - static struct page **get_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -176,10 +130,7 @@ static struct page **get_pages(struct drm_gem_object *obj) struct page **p; int npages = obj->size >> PAGE_SHIFT; - if (use_pages(obj)) - p = drm_gem_get_pages(obj); - else - p = get_pages_vram(obj, npages); + p = drm_gem_get_pages(obj); if (IS_ERR(p)) { DRM_DEV_ERROR(dev->dev, "could not get pages: %ld\n", @@ -212,18 +163,6 @@ static struct page **get_pages(struct drm_gem_object *obj) return msm_obj->pages; } -static void put_pages_vram(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - - spin_lock(&priv->vram.lock); - drm_mm_remove_node(msm_obj->vram_node); - spin_unlock(&priv->vram.lock); - - kvfree(msm_obj->pages); -} - static void put_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -244,10 +183,7 @@ static void put_pages(struct drm_gem_object *obj) update_device_mem(obj->dev->dev_private, -obj->size); - if (use_pages(obj)) - drm_gem_put_pages(obj, msm_obj->pages, true, false); - else - put_pages_vram(obj); + drm_gem_put_pages(obj, msm_obj->pages, true, false); msm_obj->pages = NULL; update_lru(obj); @@ -1207,19 +1143,10 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32 struct msm_drm_private *priv = dev->dev_private; struct msm_gem_object *msm_obj; struct drm_gem_object *obj = NULL; - bool use_vram = false; int ret; size = PAGE_ALIGN(size); - if (!msm_use_mmu(dev)) - use_vram = true; - else if ((flags & (MSM_BO_STOLEN | MSM_BO_SCANOUT)) && priv->vram.size) - use_vram = true; - - if (GEM_WARN_ON(use_vram && !priv->vram.size)) - return ERR_PTR(-EINVAL); - /* Disallow zero sized objects as they make the underlying * infrastructure grumpy */ @@ -1232,44 +1159,16 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32 msm_obj = to_msm_bo(obj); - if (use_vram) { - struct msm_gem_vma *vma; - struct page **pages; - - drm_gem_private_object_init(dev, obj, size); - - msm_gem_lock(obj); - - vma = add_vma(obj, NULL); - msm_gem_unlock(obj); - if (IS_ERR(vma)) { - ret = PTR_ERR(vma); - goto fail; - } - - to_msm_bo(obj)->vram_node = &vma->node; - - msm_gem_lock(obj); - pages = get_pages(obj); - msm_gem_unlock(obj); - if (IS_ERR(pages)) { - ret = PTR_ERR(pages); - goto fail; - } - - vma->iova = physaddr(obj); - } else { - ret = drm_gem_object_init(dev, obj, size); - if (ret) - goto fail; - /* - * Our buffers are kept pinned, so allocating them from the - * MOVABLE zone is a really bad idea, and conflicts with CMA. - * See comments above new_inode() why this is required _and_ - * expected if you're going to pin these pages. 
- */ - mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); - } + ret = drm_gem_object_init(dev, obj, size); + if (ret) + goto fail; + /* + * Our buffers are kept pinned, so allocating them from the + * MOVABLE zone is a really bad idea, and conflicts with CMA. + * See comments above new_inode() why this is required _and_ + * expected if you're going to pin these pages. + */ + mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); drm_gem_lru_move_tail(&priv->lru.unbacked, obj); @@ -1297,12 +1196,6 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, uint32_t size; int ret, npages; - /* if we don't have IOMMU, don't bother pretending we can import: */ - if (!msm_use_mmu(dev)) { - DRM_DEV_ERROR(dev->dev, "cannot import without IOMMU\n"); - return ERR_PTR(-EINVAL); - } - size = PAGE_ALIGN(dmabuf->size); ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index d2f39a371373..c16b11182831 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -102,11 +102,6 @@ struct msm_gem_object { struct list_head vmas; /* list of msm_gem_vma */ - /* For physically contiguous buffers. Used when we don't have - * an IOMMU. Also used for stolen/splashscreen buffer. - */ - struct drm_mm_node *vram_node; - char name[32]; /* Identifier to print for the debugfs files */ /* userspace metadata backchannel */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 30a281aa1353..235ad4be7fd0 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -659,11 +659,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (args->pad) return -EINVAL; - if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) { - DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n"); - return -EPERM; - } - /* for now, we just have 3d pipe.. 
eventually this would need to * be more clever to dispatch to appropriate gpu module: */

From patchwork Sat Dec 7 16:15:08 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848119
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Akhil P Oommen, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 08/24] drm/msm: Collapse vma allocation and initialization
Date: Sat, 7 Dec 2024 08:15:08 -0800
Message-ID: <20241207161651.410556-9-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>
References: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

Now that we've dropped vram carveout support, we can collapse vma allocation and initialization. This better matches how things work with drm_gpuvm.
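To make the collapsed flow concrete, here is a minimal illustrative sketch (not part of the patch; the example_get_vma() name is invented), mirroring the get_vma_locked() hunk in the diff below:

/* Illustrative sketch, not from the patch: msm_gem_vma_new() now both
 * allocates the vma and reserves its iova, so a caller has a single
 * failure path and no separate msm_gem_vma_init()/del_vma() steps.
 */
static struct msm_gem_vma *
example_get_vma(struct drm_gem_object *obj, struct msm_gem_vm *vm,
		u64 range_start, u64 range_end)
{
	struct msm_gem_object *msm_obj = to_msm_bo(obj);
	struct msm_gem_vma *vma;

	msm_gem_assert_locked(obj);

	vma = msm_gem_vma_new(vm, obj, range_start, range_end);
	if (IS_ERR(vma))
		return vma;	/* ERR_PTR() from alloc or iova reservation */

	list_add_tail(&vma->list, &msm_obj->vmas);
	return vma;
}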
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 30 +++----------------------- drivers/gpu/drm/msm/msm_gem.h | 4 ++-- drivers/gpu/drm/msm/msm_gem_vma.c | 36 +++++++++++++------------------ 3 files changed, 20 insertions(+), 50 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index f42bfa70502a..6f11ce1d0191 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -333,23 +333,6 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) return offset; } -static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, - struct msm_gem_vm *vm) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma; - - msm_gem_assert_locked(obj); - - vma = msm_gem_vma_new(vm); - if (!vma) - return ERR_PTR(-ENOMEM); - - list_add_tail(&vma->list, &msm_obj->vmas); - - return vma; -} - static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, struct msm_gem_vm *vm) { @@ -416,6 +399,7 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, struct msm_gem_vm *vm, u64 range_start, u64 range_end) { + struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; msm_gem_assert_locked(obj); @@ -423,18 +407,10 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, vma = lookup_vma(obj, vm); if (!vma) { - int ret; - - vma = add_vma(obj, vm); + vma = msm_gem_vma_new(vm, obj, range_start, range_end); if (IS_ERR(vma)) return vma; - - ret = msm_gem_vma_init(vma, obj->size, - range_start, range_end); - if (ret) { - del_vma(vma); - return ERR_PTR(ret); - } + list_add_tail(&vma->list, &msm_obj->vmas); } else { GEM_WARN_ON(vma->iova < range_start); GEM_WARN_ON((vma->iova + obj->size) > range_end); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index c16b11182831..9bd78642671c 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -66,8 +66,8 @@ struct msm_gem_vma { bool mapped; }; -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm); -int msm_gem_vma_init(struct msm_gem_vma *vma, int size, +struct msm_gem_vma * +msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, u64 range_start, u64 range_end); void msm_gem_vma_purge(struct msm_gem_vma *vma); int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size); diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 9419692f0cc8..6d18364f321c 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -106,47 +106,41 @@ void msm_gem_vma_close(struct msm_gem_vma *vma) msm_gem_vm_put(vm); } -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm) +/* Create a new vma and allocate an iova for it */ +struct msm_gem_vma * +msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, + u64 range_start, u64 range_end) { struct msm_gem_vma *vma; + int ret; vma = kzalloc(sizeof(*vma), GFP_KERNEL); if (!vma) - return NULL; + return ERR_PTR(-ENOMEM); vma->vm = vm; - return vma; -} - -/* Initialize a new vma and allocate an iova for it */ -int msm_gem_vma_init(struct msm_gem_vma *vma, int size, - u64 range_start, u64 range_end) -{ - struct msm_gem_vm *vm = vma->vm; - int ret; - - if (GEM_WARN_ON(!vm)) - return -EINVAL; - - if (GEM_WARN_ON(vma->iova)) - return -EBUSY; - spin_lock(&vm->lock); ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, - size, PAGE_SIZE, 0, + obj->size, PAGE_SIZE, 0, range_start, range_end, 0); spin_unlock(&vm->lock); if (ret) - return ret; + goto 
err_free_vma; vma->iova = vma->node.start; vma->mapped = false; + INIT_LIST_HEAD(&vma->list); kref_get(&vm->kref); - return 0; + return vma; + +err_free_vma: + kfree(vma); + return ERR_PTR(ret); } struct msm_gem_vm *

From patchwork Sat Dec 7 16:15:09 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848322
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Akhil P Oommen, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 09/24] drm/msm: Collapse vma close and delete
Date: Sat, 7 Dec 2024 08:15:09 -0800
Message-ID: <20241207161651.410556-10-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>
References: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

This fits better with drm_gpuvm/drm_gpuva.

Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 16 +++------------- drivers/gpu/drm/msm/msm_gem_vma.c | 2 ++ 2 files changed, 5 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 6f11ce1d0191..326764026ebb 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -349,15 +349,6 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, return NULL; } -static void del_vma(struct msm_gem_vma *vma) -{ - if (!vma) - return; - - list_del(&vma->list); - kfree(vma); -} - /* * If close is true, this also closes the VMA (releasing the allocated * iova range) in addition to removing the iommu mapping.
In the eviction @@ -368,11 +359,11 @@ static void put_iova_spaces(struct drm_gem_object *obj, bool close) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma; + struct msm_gem_vma *vma, *tmp; msm_gem_assert_locked(obj); - list_for_each_entry(vma, &msm_obj->vmas, list) { + list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { if (vma->vm) { msm_gem_vma_purge(vma); if (close) @@ -391,7 +382,7 @@ put_iova_vmas(struct drm_gem_object *obj) msm_gem_assert_locked(obj); list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { - del_vma(vma); + msm_gem_vma_close(vma); } } @@ -556,7 +547,6 @@ static int clear_iova(struct drm_gem_object *obj, msm_gem_vma_purge(vma); msm_gem_vma_close(vma); - del_vma(vma); return 0; } diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 6d18364f321c..ca29e81d79d2 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -102,8 +102,10 @@ void msm_gem_vma_close(struct msm_gem_vma *vma) spin_unlock(&vm->lock); vma->iova = 0; + list_del(&vma->list); msm_gem_vm_put(vm); + kfree(vma); } /* Create a new vma and allocate an iova for it */

From patchwork Sat Dec 7 16:15:10 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848321
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Akhil P Oommen, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio, linux-kernel@vger.kernel.org (open list)
Subject: [RFC 10/24] drm/msm: drm_gpuvm conversion
Date: Sat, 7 Dec 2024 08:15:10 -0800
Message-ID: <20241207161651.410556-11-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>
References: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

Now that we've realigned deletion and allocation, switch over to using drm_gpuvm/drm_gpuva. This allows multiple VMAs per BO per VM, so that different parts of a single BO can be mapped at different virtual addresses, which is a key requirement for sparse/VM_BIND. This prepares us for using drm_gpuvm to translate a batch of MAP/MAP_NULL/UNMAP operations from userspace into a sequence of map/remap/unmap steps for updating the page tables.
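A rough sketch of the resulting object graph (illustrative only, not part of the patch; example_describe_mappings() is a made-up name): each GEM object holds a list of drm_gpuvm_bo, one per (BO, VM) pair, and each drm_gpuvm_bo holds the drm_gpuva mappings. The conversion below walks them with the drm_gpuvm iterators:

/* Illustrative sketch, not from the patch: walk every mapping of a BO
 * across all VMs using the drm_gpuvm iterators adopted below.
 */
static void example_describe_mappings(struct drm_gem_object *obj)
{
	struct drm_gpuvm_bo *vm_bo;

	msm_gem_assert_locked(obj);

	drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
		struct drm_gpuva *vma;

		/* One drm_gpuvm_bo per (BO, VM) pair; it can hold several
		 * vmas once VM_BIND maps the same BO at multiple addresses.
		 */
		drm_gpuvm_bo_for_each_va (vma, vm_bo) {
			pr_debug("vm=%p va=%08llx mapped=%d\n",
				 vma->vm, vma->va.addr,
				 to_msm_vma(vma)->mapped);
		}
	}
}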
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/Kconfig | 1 + drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 5 +- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 7 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 5 +- drivers/gpu/drm/msm/msm_drv.c | 1 + drivers/gpu/drm/msm/msm_gem.c | 113 ++++++++++++------- drivers/gpu/drm/msm/msm_gem.h | 87 ++++++++++---- drivers/gpu/drm/msm/msm_gem_submit.c | 2 +- drivers/gpu/drm/msm/msm_gem_vma.c | 138 ++++++++++++++++------- drivers/gpu/drm/msm/msm_kms.c | 4 +- 12 files changed, 256 insertions(+), 116 deletions(-) diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig index 7ec833b6d829..cdecf745af8d 100644 --- a/drivers/gpu/drm/msm/Kconfig +++ b/drivers/gpu/drm/msm/Kconfig @@ -21,6 +21,7 @@ config DRM_MSM select DRM_DISPLAY_HELPER select DRM_BRIDGE_CONNECTOR select DRM_EXEC + select DRM_GPUVM select DRM_KMS_HELPER select DRM_PANEL select DRM_BRIDGE diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index 5eb063ed0b46..e0f2e9c77976 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -472,8 +472,7 @@ a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu); struct msm_gem_vm *vm; - vm = msm_gem_vm_create(mmu, "gpu", SZ_16M, - 0xfff * SZ_64K); + vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true); if (IS_ERR(vm) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index 31cceb9eb51a..e278d7564642 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -1270,7 +1270,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo, return 0; } -static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu) +static int a6xx_gmu_memory_probe(struct drm_device *drm, struct a6xx_gmu *gmu) { struct msm_mmu *mmu; @@ -1280,7 +1280,7 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu) if (IS_ERR(mmu)) return PTR_ERR(mmu); - gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000); + gmu->vm = msm_gem_vm_create(drm, mmu, "gmu", 0x0, 0x80000000, true); if (IS_ERR(gmu->vm)) return PTR_ERR(gmu->vm); @@ -1680,7 +1680,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node) if (ret) goto err_put_device; - ret = a6xx_gmu_memory_probe(gmu); + ret = a6xx_gmu_memory_probe(adreno_gpu->base.dev, gmu); if (ret) goto err_put_device; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 6b961267614f..9e2721f8aff8 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -2259,9 +2259,8 @@ a6xx_create_private_vm(struct msm_gpu *gpu) if (IS_ERR(mmu)) return ERR_CAST(mmu); - return msm_gem_vm_create(mmu, - "gpu", 0x100000000ULL, - adreno_private_vm_size(gpu)); + return msm_gem_vm_create(gpu->dev, mmu, "gpu", 0x100000000ULL, + adreno_private_vm_size(gpu), true); } static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 14ac1900f031..5f82a56f17be 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -224,7 +224,8 @@ adreno_iommu_create_vm(struct msm_gpu *gpu, start = max_t(u64, SZ_16M, geometry->aperture_start); size 
= geometry->aperture_end - start + 1; - vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size); + vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", start & GENMASK_ULL(48, 0), + size, true); if (IS_ERR(vm) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); @@ -366,12 +367,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, case MSM_PARAM_VA_START: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->va_start; + *value = ctx->vm->base.mm_start; return 0; case MSM_PARAM_VA_SIZE: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->va_size; + *value = ctx->vm->base.mm_range; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value = adreno_gpu->ubwc_config.highest_bank_bit; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c index 3c5f8c3a5059..13176168ade2 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -451,8 +451,9 @@ static int mdp4_kms_init(struct drm_device *dev) "contig buffers for scanout\n"); vm = NULL; } else { - vm = msm_gem_vm_create(mmu, - "mdp4", 0x1000, 0x100000000 - 0x1000); + vm = msm_gem_vm_create(dev, mmu, "mdp4", + 0x1000, 0x100000000 - 0x1000, + true); if (IS_ERR(vm)) { if (!IS_ERR(mmu)) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index a5a95a53d2c8..ab0998c2e846 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -776,6 +776,7 @@ static const struct file_operations fops = { static const struct drm_driver msm_driver = { .driver_features = DRIVER_GEM | + DRIVER_GEM_GPUVA | DRIVER_RENDER | DRIVER_ATOMIC | DRIVER_MODESET | diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 326764026ebb..a8de7b158a37 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -43,9 +43,22 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file) return 0; } +static void put_iova_spaces(struct drm_gem_object *obj, bool close); + static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) { update_ctx_mem(file, -obj->size); + + /* + * TODO we might need to kick this to a queue to avoid blocking + * in CLOSE ioctl + */ + dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false, + msecs_to_jiffies(1000)); + + msm_gem_lock(obj); + put_iova_spaces(obj, true); + msm_gem_unlock(obj); } /* @@ -167,6 +180,13 @@ static void put_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); + /* + * Skip gpuvm in the object free path to avoid a WARN_ON() splat. 
+ * See explanation in msm_gem_assert_locked() + */ + if (kref_read(&obj->refcount)) + drm_gpuvm_bo_gem_evict(obj, true); + if (msm_obj->pages) { if (msm_obj->sgt) { /* For non-cached buffers, ensure the new @@ -334,16 +354,25 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) } static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_vm *vm) + struct msm_gem_vm *vm) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma; + struct drm_gpuvm_bo *vm_bo; msm_gem_assert_locked(obj); - list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->vm == vm) - return vma; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct drm_gpuva *vma; + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + if (vma->vm == &vm->base) { + /* lookup_vma() should only be used in paths + * with at most one vma per vm + */ + GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva)); + + return to_msm_vma(vma); + } + } } return NULL; @@ -358,16 +387,19 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, static void put_iova_spaces(struct drm_gem_object *obj, bool close) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma, *tmp; + struct drm_gpuvm_bo *vm_bo, *tmp; msm_gem_assert_locked(obj); - list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { - if (vma->vm) { - msm_gem_vma_purge(vma); + drm_gem_for_each_gpuvm_bo_safe (vm_bo, tmp, obj) { + struct drm_gpuva *vma, *vmatmp; + + drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + + msm_gem_vma_purge(msm_vma); if (close) - msm_gem_vma_close(vma); + msm_gem_vma_close(msm_vma); } } } @@ -376,13 +408,18 @@ put_iova_spaces(struct drm_gem_object *obj, bool close) static void put_iova_vmas(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma, *tmp; + struct drm_gpuvm_bo *vm_bo, *tmp; msm_gem_assert_locked(obj); - list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { - msm_gem_vma_close(vma); + drm_gem_for_each_gpuvm_bo_safe (vm_bo, tmp, obj) { + struct drm_gpuva *vma, *vmatmp; + + drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + + msm_gem_vma_close(msm_vma); + } } } @@ -390,7 +427,6 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, struct msm_gem_vm *vm, u64 range_start, u64 range_end) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; msm_gem_assert_locked(obj); @@ -399,12 +435,9 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, if (!vma) { vma = msm_gem_vma_new(vm, obj, range_start, range_end); - if (IS_ERR(vma)) - return vma; - list_add_tail(&vma->list, &msm_obj->vmas); } else { - GEM_WARN_ON(vma->iova < range_start); - GEM_WARN_ON((vma->iova + obj->size) > range_end); + GEM_WARN_ON(vma->base.va.addr < range_start); + GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end); } return vma; @@ -484,7 +517,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, ret = msm_gem_pin_vma_locked(obj, vma); if (!ret) { - *iova = vma->base.va.addr; wait pin_obj_locked(obj); } @@ -530,7 +563,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { - *iova = vma->iova; + *iova = vma->base.va.addr; } msm_gem_unlock(obj); @@ -571,7 +604,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj, vma = get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret = PTR_ERR(vma); - }
else if (GEM_WARN_ON(vma->iova != iova)) { + } else if (GEM_WARN_ON(vma->base.va.addr != iova)) { clear_iova(obj, vm); ret = -EBUSY; } @@ -861,7 +894,6 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct dma_resv *robj = obj->resv; - struct msm_gem_vma *vma; uint64_t off = drm_vma_node_start(&obj->vma_node); const char *madv; @@ -904,14 +936,17 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, seq_printf(m, " %08zu %9s %-32s\n", obj->size, madv, msm_obj->name); - if (!list_empty(&msm_obj->vmas)) { + if (!list_empty(&obj->gpuva.list)) { + struct drm_gpuvm_bo *vm_bo; seq_puts(m, " vmas:"); - list_for_each_entry(vma, &msm_obj->vmas, list) { - const char *name, *comm; - if (vma->vm) { - struct msm_gem_vm *vm = vma->vm; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct drm_gpuva *vma; + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + const char *name, *comm; + struct msm_gem_vm *vm = to_msm_vm(vma->vm); struct task_struct *task = get_pid_task(vm->pid, PIDTYPE_PID); if (task) { @@ -920,15 +955,14 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, } else { comm = NULL; } - name = vm->name; - } else { - name = comm = NULL; + name = vm->base.name; + + seq_printf(m, " [%s%s%s: vm=%p, %08llx,%smapped]", + name, comm ? ":" : "", comm ? comm : "", + vma->vm, vma->va.addr, + to_msm_vma(vma)->mapped ? "" : "un"); + kfree(comm); } - seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]", - name, comm ? ":" : "", comm ? comm : "", - vma->vm, vma->iova, - vma->mapped ? "mapped" : "unmapped"); - kfree(comm); } seq_puts(m, "\n"); @@ -1096,7 +1130,6 @@ static int msm_gem_new_impl(struct drm_device *dev, msm_obj->madv = MSM_MADV_WILLNEED; INIT_LIST_HEAD(&msm_obj->node); - INIT_LIST_HEAD(&msm_obj->vmas); *obj = &msm_obj->base; (*obj)->funcs = &msm_gem_object_funcs; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 9bd78642671c..5091892bbe2e 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -10,6 +10,7 @@ #include #include #include "drm/drm_exec.h" +#include "drm/drm_gpuvm.h" #include "drm/gpu_scheduler.h" #include "msm_drv.h" @@ -22,30 +23,67 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */ +/** + * struct msm_gem_vm - VM object + * + * A VM object representing a GPU (or display or GMU or ...) virtual address + * space. + * + * In the case of GPU, if per-process address spaces are supported, the address + * space is split into two VMs, which map to TTBR0 and TTBR1 in the SMMU. TTBR0 + * is used for userspace objects, and is unique per msm_context/drm_file, while + * TTBR1 is the same for all processes. (The kernel controlled ringbuffer and + * a few other kernel controlled buffers live in TTBR1.) + * + * The GPU TTBR0 vm can be managed by userspace or by the kernel, depending on + * whether userspace supports VM_BIND. All other vm's are managed by the kernel. + * (Managed by kernel means the kernel is responsible for VA allocation.) + * + * Note that because VM_BIND allows a given BO to be mapped multiple times in + * a VM, and therefore have multiple VMA's in a VM, there is an extra object + * provided by drm_gpuvm infrastructure.. the drm_gpuvm_bo, which is not + * embedded in any larger driver structure. The GEM object holds a list of + * drm_gpuvm_bo, which in turn holds a list of msm_gem_vma. 
A linked vma + * holds a reference to the vm_bo, and drops it when the vma is unlinked. + * So we just need to call drm_gpuvm_bo_obtain() to return a ref to an + * existing vm_bo, or create a new one. Once the vma is linked, the ref + * to the vm_bo can be dropped (since the vma is holding one). + */ struct msm_gem_vm { - const char *name; - /* NOTE: mm managed at the page level, size is in # of pages - * and position mm_node->start is in # of pages: + /** @base: Inherit from drm_gpuvm. */ + struct drm_gpuvm base; + + /** + * @mm: Memory management for kernel managed VA allocations + * + * Only used for kernel managed VMs, unused for user managed VMs. + * + * Protected by @mm_lock. */ struct drm_mm mm; - spinlock_t lock; /* Protects drm_mm node allocation/removal */ + + /** @mm_lock: protects @mm node allocation/removal */ + struct spinlock mm_lock; + + /** @vm_lock: protects gpuvm insert/remove/traverse */ + struct mutex vm_lock; + + /** @mmu: The mmu object which manages the pgtables */ struct msm_mmu *mmu; - struct kref kref; - /* For address spaces associated with a specific process, this + /** + * @pid: For address spaces associated with a specific process, this * will be non-NULL: */ struct pid *pid; - /* @faults: the number of GPU hangs associated with this address space */ + /** @faults: the number of GPU hangs associated with this address space */ int faults; - /** @va_start: lowest possible address to allocate */ - uint64_t va_start; - - /** @va_size: the size of the address space (in bytes) */ - uint64_t va_size; + /** @managed: is this a kernel managed VM? */ + bool managed; }; +#define to_msm_vm(x) container_of(x, struct msm_gem_vm, base) struct msm_gem_vm * msm_gem_vm_get(struct msm_gem_vm *vm); @@ -53,18 +91,31 @@ msm_gem_vm_get(struct msm_gem_vm *vm); void msm_gem_vm_put(struct msm_gem_vm *vm); struct msm_gem_vm * -msm_gem_vm_create(struct msm_mmu *mmu, const char *name, - u64 va_start, u64 size); +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, + u64 va_start, u64 va_size, bool managed); struct msm_fence_context; +/** + * struct msm_gem_vma - a VMA mapping + * + * Represents a combination of a GEM object plus a VM. + */ struct msm_gem_vma { + /** @base: inherit from drm_gpuva */ + struct drm_gpuva base; + + /** + * @node: mm node for VA allocation + * + * Only used by kernel managed VMs + */ struct drm_mm_node node; - uint64_t iova; - struct msm_gem_vm *vm; - struct list_head list; /* node in msm_gem_object::vmas */ + + /** @mapped: Is this VMA mapped? 
*/ bool mapped; }; +#define to_msm_vma(x) container_of(x, struct msm_gem_vma, base) struct msm_gem_vma * msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, @@ -100,8 +151,6 @@ struct msm_gem_object { struct sg_table *sgt; void *vaddr; - struct list_head vmas; /* list of msm_gem_vma */ - char name[32]; /* Identifier to print for the debugfs files */ /* userspace metadata backchannel */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 235ad4be7fd0..14845768f7af 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -312,7 +312,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) if (ret) break; - submit->bos[i].iova = vma->iova; + submit->bos[i].iova = vma->base.va.addr; } /* diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index ca29e81d79d2..f4655ae1d71b 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -5,14 +5,13 @@ */ #include "msm_drv.h" -#include "msm_fence.h" #include "msm_gem.h" #include "msm_mmu.h" static void -msm_gem_vm_destroy(struct kref *kref) +msm_gem_vm_free(struct drm_gpuvm *gpuvm) { - struct msm_gem_vm *vm = container_of(kref, struct msm_gem_vm, kref); + struct msm_gem_vm *vm = container_of(gpuvm, struct msm_gem_vm, base); drm_mm_takedown(&vm->mm); if (vm->mmu) @@ -25,14 +24,14 @@ msm_gem_vm_destroy(struct kref *kref) void msm_gem_vm_put(struct msm_gem_vm *vm) { if (vm) - kref_put(&vm->kref, msm_gem_vm_destroy); + drm_gpuvm_put(&vm->base); } struct msm_gem_vm * msm_gem_vm_get(struct msm_gem_vm *vm) { if (!IS_ERR_OR_NULL(vm)) - kref_get(&vm->kref); + drm_gpuvm_get(&vm->base); return vm; } @@ -40,14 +39,14 @@ msm_gem_vm_get(struct msm_gem_vm *vm) /* Actually unmap memory for the vma */ void msm_gem_vma_purge(struct msm_gem_vma *vma) { - struct msm_gem_vm *vm = vma->vm; - unsigned size = vma->node.size; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); + unsigned size = vma->base.va.range; /* Don't do anything if the memory isn't mapped */ if (!vma->mapped) return; - vm->mmu->funcs->unmap(vm->mmu, vma->iova, size); + vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size); vma->mapped = false; } @@ -57,10 +56,10 @@ int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_vm *vm = vma->vm; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); int ret; - if (GEM_WARN_ON(!vma->iova)) + if (GEM_WARN_ON(!vma->base.va.addr)) return -EINVAL; if (vma->mapped) @@ -68,9 +67,6 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, vma->mapped = true; - if (!vm) - return 0; - /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold * a lock across map/unmap which is also used in the job_run() @@ -80,7 +76,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret = vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot); + ret = vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot); if (ret) { vma->mapped = false; @@ -92,19 +88,20 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, /* Close an iova. 
Warn if it is still in use */ void msm_gem_vma_close(struct msm_gem_vma *vma) { - struct msm_gem_vm *vm = vma->vm; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); GEM_WARN_ON(vma->mapped); - spin_lock(&vm->lock); - if (vma->iova) + spin_lock(&vm->mm_lock); + if (vma->base.va.addr) drm_mm_remove_node(&vma->node); - spin_unlock(&vm->lock); + spin_unlock(&vm->mm_lock); - vma->iova = 0; - list_del(&vma->list); + mutex_lock(&vm->vm_lock); + drm_gpuva_remove(&vma->base); + drm_gpuva_unlink(&vma->base); + mutex_unlock(&vm->vm_lock); - msm_gem_vm_put(vm); kfree(vma); } @@ -113,6 +110,7 @@ struct msm_gem_vma * msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, u64 range_start, u64 range_end) { + struct drm_gpuvm_bo *vm_bo; struct msm_gem_vma *vma; int ret; @@ -120,36 +118,81 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, if (!vma) return ERR_PTR(-ENOMEM); - vma->vm = vm; + if (vm->managed) { + spin_lock(&vm->mm_lock); + ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, + obj->size, PAGE_SIZE, 0, + range_start, range_end, 0); + spin_unlock(&vm->mm_lock); - spin_lock(&vm->lock); - ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, - obj->size, PAGE_SIZE, 0, - range_start, range_end, 0); - spin_unlock(&vm->lock); + if (ret) + goto err_free_vma; - if (ret) - goto err_free_vma; + range_start = vma->node.start; + range_end = range_start + obj->size; + } - vma->iova = vma->node.start; + GEM_WARN_ON((range_end - range_start) > obj->size); + + drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0); vma->mapped = false; - INIT_LIST_HEAD(&vma->list); + mutex_lock(&vm->vm_lock); + ret = drm_gpuva_insert(&vm->base, &vma->base); + mutex_unlock(&vm->vm_lock); + if (ret) + goto err_free_range; - kref_get(&vm->kref); + vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj); + if (IS_ERR(vm_bo)) { + ret = PTR_ERR(vm_bo); + goto err_va_remove; + } + + mutex_lock(&vm->vm_lock); + drm_gpuva_link(&vma->base, vm_bo); + mutex_unlock(&vm->vm_lock); + GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo)); return vma; +err_va_remove: + mutex_lock(&vm->vm_lock); + drm_gpuva_remove(&vma->base); + mutex_unlock(&vm->vm_lock); +err_free_range: + if (vm->managed) + drm_mm_remove_node(&vma->node); err_free_vma: kfree(vma); return ERR_PTR(ret); } +static const struct drm_gpuvm_ops msm_gpuvm_ops = { + .vm_free = msm_gem_vm_free, +}; + +/** + * msm_gem_vm_create() - Create and initialize a &msm_gem_vm + * @drm: the drm device + * @mmu: the backing MMU objects handling mapping/unmapping + * @name: the name of the VM + * @va_start: the start offset of the GPU VA space + * @va_size: the size of the GPU VA space + * @managed: is it a kernel managed VM? + * + * In a kernel managed VM, the kernel handles address allocation, and only + * synchronous operations are supported. In a user managed VM, userspace + * handles virtual address allocation, and both async and sync operations + * are supported. 
+ */ struct msm_gem_vm * -msm_gem_vm_create(struct msm_mmu *mmu, const char *name, - u64 va_start, u64 size) +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, + u64 va_start, u64 va_size, bool managed) { struct msm_gem_vm *vm; + struct drm_gem_object *dummy_gem; + int ret = 0; if (IS_ERR(mmu)) return ERR_CAST(mmu); @@ -158,15 +201,28 @@ msm_gem_vm_create(struct msm_mmu *mmu, const char *name, if (!vm) return ERR_PTR(-ENOMEM); - spin_lock_init(&vm->lock); - vm->name = name; - vm->mmu = mmu; - vm->va_start = va_start; - vm->va_size = size; + dummy_gem = drm_gpuvm_resv_object_alloc(drm); + if (!dummy_gem) { + ret = -ENOMEM; + goto err_free_vm; + } + + drm_gpuvm_init(&vm->base, name, 0, drm, dummy_gem, + va_start, va_size, 0, 0, &msm_gpuvm_ops); + drm_gem_object_put(dummy_gem); + + spin_lock_init(&vm->mm_lock); + mutex_init(&vm->vm_lock); - drm_mm_init(&vm->mm, va_start, size); + vm->mmu = mmu; + vm->managed = managed; - kref_init(&vm->kref); + drm_mm_init(&vm->mm, va_start, va_size); return vm; + +err_free_vm: + kfree(vm); + return ERR_PTR(ret); + } diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 3649276ea1b2..4e90efaad714 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -190,8 +190,8 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) return NULL; } - vm = msm_gem_vm_create(mmu, "mdp_kms", - 0x1000, 0x100000000 - 0x1000); + vm = msm_gem_vm_create(dev, mmu, "mdp_kms", + 0x1000, 0x100000000 - 0x1000, true); if (IS_ERR(vm)) { dev_err(mdp_dev, "vm create, error %pe\n", vm); mmu->funcs->destroy(mmu);
From patchwork Sat Dec 7 16:15:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 848118
From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Akhil P Oommen , Rob Clark , Rob Clark , Sean Paul , Konrad Dybcio , Abhinav Kumar , Dmitry Baryshkov , Marijn Suijten , David Airlie , Simona Vetter , Paloma Arellano , Jani Nikula , Barnabás Czémán , Carl Vanderlip , Stephen Boyd , Jonathan Marek , Jessica Zhang , linux-kernel@vger.kernel.org (open list) Subject: [RFC 11/24] drm/msm: Use drm_gpuvm types more Date: Sat, 7 Dec 2024 08:15:11 -0800 Message-ID: <20241207161651.410556-12-robdclark@gmail.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com> References: <20241207161651.410556-1-robdclark@gmail.com> MIME-Version: 1.0
From: Rob Clark Most of the driver code doesn't need to reach into msm-specific fields, so just use the drm_gpuvm/drm_gpuva types directly. This should hopefully improve commonality with other drivers and make the code easier to understand. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 6 +- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 6 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 2 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 14 ++-- drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 2 +- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 16 ++-- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 4 +- .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 4 +- drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 6 +- drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h | 2 +- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 6 +- drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h | 2 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 11 +-- drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 11 +-- drivers/gpu/drm/msm/dsi/dsi_host.c | 6 +- drivers/gpu/drm/msm/msm_drv.h | 19 ++--- drivers/gpu/drm/msm/msm_fb.c | 14 ++-- drivers/gpu/drm/msm/msm_gem.c | 84 +++++++++---------- drivers/gpu/drm/msm/msm_gem.h | 53 +++++------- drivers/gpu/drm/msm/msm_gem_submit.c | 4 +- drivers/gpu/drm/msm/msm_gem_vma.c | 72 +++++++--------- drivers/gpu/drm/msm/msm_gpu.c | 19 +++-- drivers/gpu/drm/msm/msm_gpu.h | 10 +-- drivers/gpu/drm/msm/msm_kms.c | 4 +- drivers/gpu/drm/msm/msm_kms.h | 2 +- drivers/gpu/drm/msm/msm_submitqueue.c | 2 +- 27 files changed, 187 insertions(+), 200 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index e0f2e9c77976..93a4eb38b88d 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu) uint32_t *ptr, len; int i, ret; - a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error); + a2xx_gpummu_params(to_msm_vm(gpu->vm)->mmu, &pt_base, &tran_error); DBG("%s", gpu->name); @@ -466,11 +466,11 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu) return state; } -static struct msm_gem_vm * +static struct drm_gpuvm * a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu); - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true);
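The to_msm_vm() casts appearing throughout this patch are the standard kernel embed-and-downcast pattern: struct drm_gpuvm is embedded as the base member of struct msm_gem_vm, so container_of() can recover the driver type from the generic pointer that now gets passed around. A minimal standalone sketch of the pattern (the *_sketch names are illustrative stand-ins, not identifiers from this series):

    #include <linux/container_of.h>

    struct gpuvm_sketch {                   /* stand-in for struct drm_gpuvm */
            const char *name;
    };

    struct msm_vm_sketch {                  /* stand-in for struct msm_gem_vm */
            struct gpuvm_sketch base;       /* embedded by value, not a pointer */
            void *mmu;                      /* driver-private state */
    };

    /* recover the driver type from the generic pointer */
    #define to_msm_vm_sketch(x) container_of(x, struct msm_vm_sketch, base)

    static void *vm_mmu_sketch(struct gpuvm_sketch *vm)
    {
            /* mirrors to_msm_vm(gpu->vm)->mmu in the hunks above and below */
            return to_msm_vm_sketch(vm)->mmu;
    }

This is what lets the rest of the driver traffic in plain struct drm_gpuvm pointers while msm-specific state (mmu, pid, faults) stays reachable at the few places that actually need it.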
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index 4814c470e3a1..51f1915758af 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -1789,8 +1789,10 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev) return ERR_PTR(ret); } - if (gpu->vm) - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler); + if (gpu->vm) { + msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu, + a5xx_fault_handler); + } /* Set up the preemption specific bits and pieces for each ringbuffer */ a5xx_preempt_init(gpu); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index e278d7564642..034f9e9000fb 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -1218,6 +1218,8 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu) static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu) { + struct msm_mmu *mmu = to_msm_vm(gmu->vm)->mmu; + msm_gem_kernel_put(gmu->hfi.obj, gmu->vm); msm_gem_kernel_put(gmu->debug.obj, gmu->vm); msm_gem_kernel_put(gmu->icache.obj, gmu->vm); @@ -1225,8 +1227,8 @@ static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu) msm_gem_kernel_put(gmu->dummy.obj, gmu->vm); msm_gem_kernel_put(gmu->log.obj, gmu->vm); - gmu->vm->mmu->funcs->detach(gmu->vm->mmu); - msm_gem_vm_put(gmu->vm); + mmu->funcs->detach(mmu); + drm_gpuvm_put(gmu->vm); } static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo, diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h index 5ffabc16e35a..7566e801b172 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h @@ -50,7 +50,7 @@ struct a6xx_gmu { /* For serializing communication with the GMU: */ struct mutex lock; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; void __iomem *mmio; void __iomem *rscc; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 9e2721f8aff8..79a692288d18 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, if (ctx->seqno == ring->cur_ctx_seqno) return; - if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid)) + if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid)) return; if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) { @@ -2231,7 +2231,7 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp, mutex_unlock(&a6xx_gpu->gmu.lock); } -static struct msm_gem_vm * +static struct drm_gpuvm * a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); @@ -2249,12 +2249,12 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) return adreno_iommu_create_vm(gpu, pdev, quirks); } -static struct msm_gem_vm * +static struct drm_gpuvm * a6xx_create_private_vm(struct msm_gpu *gpu) { struct msm_mmu *mmu; - mmu = msm_iommu_pagetable_create(gpu->vm->mmu); + mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu); if (IS_ERR(mmu)) return ERR_CAST(mmu); @@ -2534,8 +2534,10 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0x1fffffffff000ull; - if (gpu->vm) - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler); + if (gpu->vm) { + msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu, + a6xx_fault_handler); + } a6xx_calc_ubwc_config(adreno_gpu); /* Set up the preemption specific 
bits and pieces for each ringbuffer */ diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c index 41229c60aa06..bd40d0f26e2c 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c @@ -376,7 +376,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu, struct a7xx_cp_smmu_info *smmu_info_ptr = ptr; - msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid); + msm_iommu_pagetable_params(to_msm_vm(gpu->vm)->mmu, &ttbr, &asid); smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC; smmu_info_ptr->ttbr0 = ttbr; diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 5f82a56f17be..3104ad878cf1 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid) return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid); } -struct msm_gem_vm * +struct drm_gpuvm * adreno_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { return adreno_iommu_create_vm(gpu, pdev, 0); } -struct msm_gem_vm * +struct drm_gpuvm * adreno_iommu_create_vm(struct msm_gpu *gpu, struct platform_device *pdev, unsigned long quirks) { struct iommu_domain_geometry *geometry; struct msm_mmu *mmu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; u64 start, size; mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks); @@ -262,7 +262,9 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, * it now. */ if (!do_devcoredump) { - gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); + struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu; + + mmu->funcs->resume_translation(mmu); } /* @@ -357,7 +359,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, return 0; case MSM_PARAM_FAULTS: if (ctx->vm) - *value = gpu->global_faults + ctx->vm->faults; + *value = gpu->global_faults + to_msm_vm(ctx->vm)->faults; else *value = gpu->global_faults; return 0; @@ -367,12 +369,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, case MSM_PARAM_VA_START: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->base.mm_start; + *value = ctx->vm->mm_start; return 0; case MSM_PARAM_VA_SIZE: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->base.mm_range; + *value = ctx->vm->mm_range; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value = adreno_gpu->ubwc_config.highest_bank_bit; diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index 728e4b0def3d..53e1830c1ba6 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -611,11 +611,11 @@ void adreno_show_object(struct drm_printer *p, void **ptr, int len, * Common helper function to initialize the default address space for arm-smmu * attached targets */ -struct msm_gem_vm * +struct drm_gpuvm * adreno_create_vm(struct msm_gpu *gpu, struct platform_device *pdev); -struct msm_gem_vm * +struct drm_gpuvm * adreno_iommu_create_vm(struct msm_gpu *gpu, struct platform_device *pdev, unsigned long quirks); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c index 2c53c937485a..7acec7a3db01 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c @@ -558,7 +558,7 @@ static void 
dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc struct drm_writeback_job *job) { const struct msm_format *format; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct dpu_hw_wb_cfg *wb_cfg; int ret; struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); @@ -611,7 +611,7 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc struct drm_writeback_job *job) { struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; if (!job->fb) return; diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c index d115b79af771..6aef29590a3d 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c @@ -274,7 +274,7 @@ int dpu_format_populate_plane_sizes( return _dpu_format_populate_plane_sizes_linear(fmt, fb, layout); } -static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm, +static void _dpu_format_populate_addrs_ubwc(struct drm_gpuvm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -355,7 +355,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm, } } -static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm, +static void _dpu_format_populate_addrs_linear(struct drm_gpuvm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -373,7 +373,7 @@ static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm, * @fb: framebuffer pointer * @layout: format layout structure to populate */ -void dpu_format_populate_addrs(struct msm_gem_vm *vm, +void dpu_format_populate_addrs(struct drm_gpuvm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h index 989f3e13c497..127bf4f586db 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h @@ -31,7 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 *supported_formats, return false; } -void dpu_format_populate_addrs(struct msm_gem_vm *vm, +void dpu_format_populate_addrs(struct drm_gpuvm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index 37475f2a20ac..9f5df25bd42c 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -1054,17 +1054,17 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms) if (!dpu_kms->base.vm) return; - mmu = dpu_kms->base.vm->mmu; + mmu = to_msm_vm(dpu_kms->base.vm)->mmu; mmu->funcs->detach(mmu); - msm_gem_vm_put(dpu_kms->base.vm); + drm_gpuvm_put(dpu_kms->base.vm); dpu_kms->base.vm = NULL; } static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms) { - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; vm = msm_kms_init_vm(dpu_kms->dev); if (IS_ERR(vm)) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h index 3a76b57c137c..80b9ef650585 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h @@ -34,7 +34,7 @@ */ struct dpu_plane_state { struct drm_plane_state base; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct dpu_sw_pipe pipe; struct dpu_sw_pipe r_pipe; struct dpu_sw_pipe_cfg pipe_cfg; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c index 
13176168ade2..f239594417ec 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -120,15 +120,16 @@ static void mdp4_destroy(struct msm_kms *kms) { struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms)); struct device *dev = mdp4_kms->dev->dev; - struct msm_gem_vm *vm = kms->vm; if (mdp4_kms->blank_cursor_iova) msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm); drm_gem_object_put(mdp4_kms->blank_cursor_bo); - if (vm) { - vm->mmu->funcs->detach(vm->mmu); - msm_gem_vm_put(vm); + if (kms->vm) { + struct msm_mmu *mmu = to_msm_vm(kms->vm)->mmu; + + mmu->funcs->detach(mmu); + drm_gpuvm_put(kms->vm); } if (mdp4_kms->rpm_enabled) @@ -380,7 +381,7 @@ static int mdp4_kms_init(struct drm_device *dev) struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(priv->kms)); struct msm_kms *kms = NULL; struct msm_mmu *mmu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; int ret; u32 major, minor; unsigned long max_clk; diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c index bfbec278d19a..26541d195f4b 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c @@ -198,11 +198,12 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms); static void mdp5_kms_destroy(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); - struct msm_gem_vm *vm = kms->vm; - if (vm) { - vm->mmu->funcs->detach(vm->mmu); - msm_gem_vm_put(vm); + if (kms->vm) { + struct msm_mmu *mmu = to_msm_vm(kms->vm)->mmu; + + mmu->funcs->detach(mmu); + drm_gpuvm_put(kms->vm); } mdp_kms_destroy(&mdp5_kms->base); @@ -500,7 +501,7 @@ static int mdp5_kms_init(struct drm_device *dev) struct mdp5_kms *mdp5_kms; struct mdp5_cfg *config; struct msm_kms *kms = priv->kms; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; int i, ret; ret = mdp5_init(to_platform_device(dev->dev), dev); diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c index 6ef3aaac1450..752720f65ecf 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -143,7 +143,7 @@ struct msm_dsi_host { /* DSI 6G TX buffer*/ struct drm_gem_object *tx_gem_obj; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; /* DSI v2 TX buffer */ void *tx_buf; @@ -1158,7 +1158,7 @@ int dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_host, int size) uint64_t iova; u8 *data; - msm_host->vm = msm_gem_vm_get(priv->kms->vm); + msm_host->vm = drm_gpuvm_get(priv->kms->vm); data = msm_gem_kernel_new(dev, size, MSM_BO_WC, msm_host->vm, @@ -1206,7 +1206,7 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host) if (msm_host->tx_gem_obj) { msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm); - msm_gem_vm_put(msm_host->vm); + drm_gpuvm_put(msm_host->vm); msm_host->tx_gem_obj = NULL; msm_host->vm = NULL; } diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 20a0f8f23490..80582c0c2bf7 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -48,8 +48,6 @@ struct msm_rd_state; struct msm_perf_state; struct msm_gem_submit; struct msm_fence_context; -struct msm_gem_vm; -struct msm_gem_vma; struct msm_disp_state; #define MAX_CRTCS 8 @@ -230,7 +228,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc); int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu); void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu); -struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev); +struct drm_gpuvm *msm_kms_init_vm(struct drm_device 
*dev); bool msm_use_mmu(struct drm_device *dev); int msm_ioctl_gem_submit(struct drm_device *dev, void *data, @@ -251,13 +249,14 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); -int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, bool needs_dirtyfb); -void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, bool needed_dirtyfb); -uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, int plane); -struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane); +int msm_framebuffer_prepare(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + bool needs_dirtyfb); +void msm_framebuffer_cleanup(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + bool needed_dirtyfb); +uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + int plane); +struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, + int plane); const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb); struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev, struct drm_file *file, const struct drm_mode_fb_cmd2 *mode_cmd); diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c index 6df318b73534..d267aa1cb218 100644 --- a/drivers/gpu/drm/msm/msm_fb.c +++ b/drivers/gpu/drm/msm/msm_fb.c @@ -75,9 +75,8 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m) /* prepare/pin all the fb's bo's for scanout. */ -int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, - bool needs_dirtyfb) +int msm_framebuffer_prepare(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + bool needs_dirtyfb) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); int ret, i, n = fb->format->num_planes; @@ -98,9 +97,8 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, return 0; } -void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, - bool needed_dirtyfb) +void msm_framebuffer_cleanup(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + bool needed_dirtyfb) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); int i, n = fb->format->num_planes; @@ -115,8 +113,8 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb, memset(msm_fb->iova, 0, sizeof(msm_fb->iova)); } -uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, int plane) +uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + int plane) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); return msm_fb->iova[plane] + fb->offsets[plane]; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index a8de7b158a37..99e0ce38cd92 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -353,8 +353,8 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) return offset; } -static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +static struct drm_gpuva *lookup_vma(struct drm_gem_object *obj, + struct drm_gpuvm *vm) { struct drm_gpuvm_bo *vm_bo; @@ -364,13 +364,13 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, struct drm_gpuva *vma; drm_gpuvm_bo_for_each_va (vma, vm_bo) { - if (vma->vm == &vm->base) { + if (vma->vm == vm) { /* lookup_vma() should only be used in paths * with at most one vma per vm */ 
GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva)); - return to_msm_vma(vma); + return vma; } } } @@ -395,11 +395,9 @@ put_iova_spaces(struct drm_gem_object *obj, bool close) struct drm_gpuva *vma, *vmatmp; drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { - struct msm_gem_vma *msm_vma = to_msm_vma(vma); - - msm_gem_vma_purge(msm_vma); + msm_gem_vma_purge(vma); if (close) - msm_gem_vma_close(msm_vma); + msm_gem_vma_close(vma); } } } @@ -416,18 +414,16 @@ put_iova_vmas(struct drm_gem_object *obj) struct drm_gpuva *vma, *vmatmp; drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { - struct msm_gem_vma *msm_vma = to_msm_vma(vma); - - msm_gem_vma_close(msm_vma); + msm_gem_vma_close(vma); } } } -static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm, - u64 range_start, u64 range_end) +static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj, + struct drm_gpuvm *vm, u64 range_start, + u64 range_end) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; msm_gem_assert_locked(obj); @@ -436,14 +432,14 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, if (!vma) { vma = msm_gem_vma_new(vm, obj, range_start, range_end); } else { - GEM_WARN_ON(vma->base.va.addr < range_start); - GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end); + GEM_WARN_ON(vma->va.addr < range_start); + GEM_WARN_ON((vma->va.addr + obj->size) > range_end); } return vma; } -int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma) +int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct page **pages; @@ -496,17 +492,17 @@ void msm_gem_unpin_active(struct drm_gem_object *obj) update_lru_active(obj); } -struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj, + struct drm_gpuvm *vm) { return get_vma_locked(obj, vm, 0, U64_MAX); } static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova, - u64 range_start, u64 range_end) + struct drm_gpuvm *vm, uint64_t *iova, + u64 range_start, u64 range_end) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; int ret; msm_gem_assert_locked(obj); @@ -517,7 +513,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, ret = msm_gem_pin_vma_locked(obj, vma); if (!ret) { - *iova = vma->base.va.addr; + *iova = vma->va.addr; pin_obj_locked(obj); } @@ -529,8 +525,8 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, * limits iova to specified range (in pages) */ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova, - u64 range_start, u64 range_end) + struct drm_gpuvm *vm, uint64_t *iova, + u64 range_start, u64 range_end) { int ret; @@ -542,8 +538,8 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, } /* get iova and pin it. Should have a matching put */ -int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova) +int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova) { return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX); } @@ -552,10 +548,10 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, * Get an iova but don't pin it. 
Doesn't need a put because iovas are currently * valid for the life of the object */ -int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova) +int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; int ret = 0; msm_gem_lock(obj); @@ -563,7 +559,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { - *iova = vma->base.va.addr; + *iova = vma->va.addr; } msm_gem_unlock(obj); @@ -571,9 +567,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj, } static int clear_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm) + struct drm_gpuvm *vm) { - struct msm_gem_vma *vma = lookup_vma(obj, vm); + struct drm_gpuva *vma = lookup_vma(obj, vm); if (!vma) return 0; @@ -592,7 +588,7 @@ static int clear_iova(struct drm_gem_object *obj, * Setting an iova of zero will clear the vma. */ int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t iova) + struct drm_gpuvm *vm, uint64_t iova) { int ret = 0; @@ -600,11 +596,11 @@ int msm_gem_set_iova(struct drm_gem_object *obj, if (!iova) { ret = clear_iova(obj, vm); } else { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; vma = get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret = PTR_ERR(vma); - } else if (GEM_WARN_ON(vma->base.va.addr != iova)) { + } else if (GEM_WARN_ON(vma->va.addr != iova)) { clear_iova(obj, vm); ret = -EBUSY; } @@ -619,10 +615,9 @@ int msm_gem_set_iova(struct drm_gem_object *obj, * purged until something else (shrinker, mm_notifier, destroy, etc) decides * to get rid of it */ -void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; msm_gem_lock(obj); vma = lookup_vma(obj, vm); @@ -1240,9 +1235,9 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, return ERR_PTR(ret); } -void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_vm *vm, - struct drm_gem_object **bo, uint64_t *iova) +void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t flags, + struct drm_gpuvm *vm, struct drm_gem_object **bo, + uint64_t *iova) { void *vaddr; struct drm_gem_object *obj = msm_gem_new(dev, size, flags); @@ -1275,8 +1270,7 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, } -void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_vm *vm) +void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm) { if (IS_ERR_OR_NULL(bo)) return; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 5091892bbe2e..acb976722580 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -85,12 +85,7 @@ struct msm_gem_vm { }; #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base) -struct msm_gem_vm * -msm_gem_vm_get(struct msm_gem_vm *vm); - -void msm_gem_vm_put(struct msm_gem_vm *vm); - -struct msm_gem_vm * +struct drm_gpuvm * msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed); @@ -117,12 +112,12 @@ struct msm_gem_vma { }; #define to_msm_vma(x) container_of(x, struct msm_gem_vma, base) -struct msm_gem_vma * -msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, +struct drm_gpuva * +msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj, u64 range_start, u64 
range_end); -void msm_gem_vma_purge(struct msm_gem_vma *vma); -int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size); -void msm_gem_vma_close(struct msm_gem_vma *vma); +void msm_gem_vma_purge(struct drm_gpuva *vma); +int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size); +void msm_gem_vma_close(struct drm_gpuva *vma); struct msm_gem_object { struct drm_gem_object base; @@ -167,22 +162,21 @@ struct msm_gem_object { #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); -int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma); +int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma); void msm_gem_unpin_locked(struct drm_gem_object *obj); void msm_gem_unpin_active(struct drm_gem_object *obj); -struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm); -int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova); -int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t iova); +struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj, + struct drm_gpuvm *vm); +int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova); +int msm_gem_set_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t iova); int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova, - u64 range_start, u64 range_end); -int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova); -void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm); + struct drm_gpuvm *vm, uint64_t *iova, + u64 range_start, u64 range_end); +int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova); +void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm); void msm_gem_pin_obj_locked(struct drm_gem_object *obj); struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj); void msm_gem_unpin_pages_locked(struct drm_gem_object *obj); @@ -203,11 +197,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, uint32_t size, uint32_t flags, uint32_t *handle, char *name); struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32_t flags); -void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_vm *vm, - struct drm_gem_object **bo, uint64_t *iova); -void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_vm *vm); +void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t flags, + struct drm_gpuvm *vm, struct drm_gem_object **bo, + uint64_t *iova); +void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm); struct drm_gem_object *msm_gem_import(struct drm_device *dev, struct dma_buf *dmabuf, struct sg_table *sgt); __printf(2, 3) @@ -301,7 +294,7 @@ struct msm_gem_submit { struct kref ref; struct drm_device *dev; struct msm_gpu *gpu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct list_head node; /* node in ring submit list */ struct drm_exec exec; uint32_t seqno; /* Sequence number of the submit on the ring */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 14845768f7af..8a4f4c403404 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -299,7 +299,7 @@ static 
int submit_pin_objects(struct msm_gem_submit *submit) for (i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; - struct msm_gem_vma *vma; + struct drm_gpuva *vma; /* if locking succeeded, pin bo: */ vma = msm_gem_get_vma_locked(obj, submit->vm); @@ -312,7 +312,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) if (ret) break; - submit->bos[i].iova = vma->base.va.addr; + submit->bos[i].iova = vma->va.addr; } /* diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index f4655ae1d71b..b37bfd80bca9 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -20,52 +20,38 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) kfree(vm); } - -void msm_gem_vm_put(struct msm_gem_vm *vm) -{ - if (vm) - drm_gpuvm_put(&vm->base); -} - -struct msm_gem_vm * -msm_gem_vm_get(struct msm_gem_vm *vm) -{ - if (!IS_ERR_OR_NULL(vm)) - drm_gpuvm_get(&vm->base); - - return vm; -} - /* Actually unmap memory for the vma */ -void msm_gem_vma_purge(struct msm_gem_vma *vma) +void msm_gem_vma_purge(struct drm_gpuva *vma) { - struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); - unsigned size = vma->base.va.range; + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + struct msm_gem_vm *vm = to_msm_vm(vma->vm); + unsigned size = vma->va.range; /* Don't do anything if the memory isn't mapped */ - if (!vma->mapped) + if (!msm_vma->mapped) return; - vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size); + vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size); - vma->mapped = false; + msm_vma->mapped = false; } /* Map and pin vma: */ int -msm_gem_vma_map(struct msm_gem_vma *vma, int prot, +msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + struct msm_gem_vm *vm = to_msm_vm(vma->vm); int ret; - if (GEM_WARN_ON(!vma->base.va.addr)) + if (GEM_WARN_ON(!vma->va.addr)) return -EINVAL; - if (vma->mapped) + if (msm_vma->mapped) return 0; - vma->mapped = true; + msm_vma->mapped = true; /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold @@ -76,40 +62,44 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret = vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot); + ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot); if (ret) { - vma->mapped = false; + msm_vma->mapped = false; } return ret; } /* Close an iova. 
Warn if it is still in use */ -void msm_gem_vma_close(struct msm_gem_vma *vma) +void msm_gem_vma_close(struct drm_gpuva *vma) { - struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); + struct msm_gem_vm *vm = to_msm_vm(vma->vm); + struct msm_gem_vma *msm_vma = to_msm_vma(vma); - GEM_WARN_ON(vma->mapped); + GEM_WARN_ON(msm_vma->mapped); spin_lock(&vm->mm_lock); - if (vma->base.va.addr) - drm_mm_remove_node(&vma->node); + if (vma->va.addr && vm->managed) + drm_mm_remove_node(&msm_vma->node); spin_unlock(&vm->mm_lock); + dma_resv_lock(drm_gpuvm_resv(vma->vm), NULL); mutex_lock(&vm->vm_lock); - drm_gpuva_remove(&vma->base); - drm_gpuva_unlink(&vma->base); + drm_gpuva_remove(vma); + drm_gpuva_unlink(vma); mutex_unlock(&vm->vm_lock); + dma_resv_unlock(drm_gpuvm_resv(vma->vm)); kfree(vma); } /* Create a new vma and allocate an iova for it */ -struct msm_gem_vma * -msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, +struct drm_gpuva * +msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj, u64 range_start, u64 range_end) { + struct msm_gem_vm *vm = to_msm_vm(_vm); struct drm_gpuvm_bo *vm_bo; struct msm_gem_vma *vma; int ret; @@ -154,7 +144,7 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, mutex_unlock(&vm->vm_lock); GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo)); - return vma; + return &vma->base; err_va_remove: mutex_lock(&vm->vm_lock); @@ -186,7 +176,7 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops = { * handles virtual address allocation, and both async and sync operations * are supported. */ -struct msm_gem_vm * +struct drm_gpuvm * msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed) { @@ -219,7 +209,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, drm_mm_init(&vm->mm, va_start, va_size); - return vm; + return &vm->base; err_free_vm: kfree(vm); diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index b61cc939363d..17d8be47db19 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -378,7 +378,7 @@ static void recover_worker(struct kthread_work *work) /* Increment the fault counts */ submit->queue->faults++; if (submit->vm) - submit->vm->faults++; + to_msm_vm(submit->vm)->faults++; get_comm_cmdline(submit, &comm, &cmd); @@ -454,6 +454,7 @@ static void fault_worker(struct kthread_work *work) { struct msm_gpu *gpu = container_of(work, struct msm_gpu, fault_work); struct msm_gem_submit *submit; + struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu; struct msm_ringbuffer *cur_ring = gpu->funcs->active_ring(gpu); char *comm = NULL, *cmd = NULL; @@ -483,7 +484,7 @@ static void fault_worker(struct kthread_work *work) resume_smmu: memset(&gpu->fault_info, 0, sizeof(gpu->fault_info)); - gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); + mmu->funcs->resume_translation(mmu); mutex_unlock(&gpu->lock); } @@ -820,10 +821,11 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu) } /* Return a new address space for a msm_drm_private instance */ -struct msm_gem_vm * +struct drm_gpuvm * msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) { - struct msm_gem_vm *vm = NULL; + struct drm_gpuvm *vm = NULL; + if (!gpu) return NULL; @@ -834,11 +836,11 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) if (gpu->funcs->create_private_vm) { vm = gpu->funcs->create_private_vm(gpu); if (!IS_ERR(vm)) - vm->pid = get_pid(task_pid(task)); + to_msm_vm(vm)->pid = 
 	}
 
 	if (IS_ERR_OR_NULL(vm))
-		vm = msm_gem_vm_get(gpu->vm);
+		vm = drm_gpuvm_get(gpu->vm);
 
 	return vm;
 }
@@ -1011,8 +1013,9 @@ void msm_gpu_cleanup(struct msm_gpu *gpu)
 	msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm);
 
 	if (!IS_ERR_OR_NULL(gpu->vm)) {
-		gpu->vm->mmu->funcs->detach(gpu->vm->mmu);
-		msm_gem_vm_put(gpu->vm);
+		struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu;
+		mmu->funcs->detach(mmu);
+		drm_gpuvm_put(gpu->vm);
 	}
 
 	if (gpu->worker) {
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index edbdd894adfb..ad6f14891205 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -78,8 +78,8 @@ struct msm_gpu_funcs {
 	/* note: gpu_set_freq() can assume that we have been pm_resumed */
 	void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp,
 			     bool suspended);
-	struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev);
-	struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu);
+	struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev);
+	struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu);
 	uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
 
 	/**
@@ -226,7 +226,7 @@ struct msm_gpu {
 	void __iomem *mmio;
 	int irq;
 
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 
 	/* Power Control: */
 	struct regulator *gpu_reg, *gpu_cx;
@@ -355,7 +355,7 @@ struct msm_context {
 	int queueid;
 
 	/** @vm: the per-process GPU address-space */
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 
 	/** @kref: the reference count */
 	struct kref ref;
@@ -665,7 +665,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 		struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs,
 		const char *name, struct msm_gpu_config *config);
 
-struct msm_gem_vm *
+struct drm_gpuvm *
 msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task);
 
 void msm_gpu_cleanup(struct msm_gpu *gpu);
diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c
index 4e90efaad714..53ec3dfc5f57 100644
--- a/drivers/gpu/drm/msm/msm_kms.c
+++ b/drivers/gpu/drm/msm/msm_kms.c
@@ -164,9 +164,9 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc)
 	vblank_ctrl_queue_work(priv, crtc, false);
 }
 
-struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev)
+struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev)
 {
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 	struct msm_mmu *mmu;
 	struct device *mdp_dev = dev->dev;
 	struct device *mdss_dev = mdp_dev->parent;
diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h
index 73da232237bc..79e494f954ca 100644
--- a/drivers/gpu/drm/msm/msm_kms.h
+++ b/drivers/gpu/drm/msm/msm_kms.h
@@ -129,7 +129,7 @@ struct msm_kms {
 	bool irq_requested;
 
 	/* mapper-id used to request GEM buffer mapped for scanout: */
-	struct msm_gem_vm *vm;
+	struct drm_gpuvm *vm;
 
 	/* disp snapshot support */
 	struct kthread_worker *dump_worker;
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 6298233c3568..8ced49c7557b 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref)
 		kfree(ctx->entities[i]);
 	}
 
-	msm_gem_vm_put(ctx->vm);
+	drm_gpuvm_put(ctx->vm);
 	kfree(ctx->comm);
 	kfree(ctx->cmdline);
 	kfree(ctx);
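The conversions above all follow one pattern: core msm code now passes around the generic struct drm_gpuvm, and driver-private state (mmu, pid, faults) is recovered by unwrapping it. Below is a minimal standalone sketch of that embed-and-unwrap idiom, not part of the patch; the struct members are reduced to the essentials, and to_msm_vm() itself is defined in msm_gem.h later in the series:

    /* Standalone illustration only: */
    #include <stddef.h>

    struct drm_gpuvm { int placeholder; };    /* stand-in for the real base type */

    struct msm_gem_vm {
            struct drm_gpuvm base;            /* embedded generic VM */
            int faults;                       /* driver-private, as in msm_gpu.c */
    };

    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    /* same shape as the driver's helper: to_msm_vm(x) */
    #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base)

    static void count_fault(struct drm_gpuvm *vm)
    {
            to_msm_vm(vm)->faults++;          /* unwrap to reach private state */
    }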
From patchwork Sat Dec 7 16:15:12 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848320
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [RFC 12/24] drm/msm: Split submit_pin_objects()
Date: Sat, 7 Dec 2024 08:15:12 -0800
Message-ID: <20241207161651.410556-13-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

For VM_BIND, in the first step we just want to get the backing pages,
but defer creating the vma until the map/unmap ops are evaluated.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem_submit.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 8a4f4c403404..51c92fe1146f 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -292,12 +292,16 @@ static int submit_fence_sync(struct msm_gem_submit *submit)
 	return ret;
 }
 
-static int submit_pin_objects(struct msm_gem_submit *submit)
+static int submit_pin_vmas(struct msm_gem_submit *submit)
 {
-	struct msm_drm_private *priv = submit->dev->dev_private;
-	int i, ret = 0;
+	int ret = 0;
 
-	for (i = 0; i < submit->nr_bos; i++) {
+	/*
+	 * First loop, before holding the LRU lock, avoids holding the
+	 * LRU lock while calling msm_gem_pin_vma_locked (which could
+	 * trigger get_pages())
+	 */
+	for (int i = 0; i < submit->nr_bos; i++) {
 		struct drm_gem_object *obj = submit->bos[i].obj;
 		struct drm_gpuva *vma;
 
@@ -315,6 +319,13 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 		submit->bos[i].iova = vma->va.addr;
 	}
 
+	return ret;
+}
+
+static void submit_pin_objects(struct msm_gem_submit *submit)
+{
+	struct msm_drm_private *priv = submit->dev->dev_private;
+
 	/*
 	 * A second loop while holding the LRU lock (a) avoids acquiring/dropping
 	 * the LRU lock for each individual bo, while (b) avoiding holding the
 	 *
@@ -323,14 +334,12 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 	 * could trigger deadlock with the shrinker).
 	 */
 	mutex_lock(&priv->lru.lock);
-	for (i = 0; i < submit->nr_bos; i++) {
+	for (int i = 0; i < submit->nr_bos; i++) {
 		msm_gem_pin_obj_locked(submit->bos[i].obj);
 	}
 	mutex_unlock(&priv->lru.lock);
 
 	submit->bos_pinned = true;
-
-	return ret;
 }
 
 static void submit_unpin_objects(struct msm_gem_submit *submit)
@@ -760,10 +769,12 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 		goto out;
 	}
 
-	ret = submit_pin_objects(submit);
+	ret = submit_pin_vmas(submit);
 	if (ret)
 		goto out;
 
+	submit_pin_objects(submit);
+
 	for (i = 0; i < args->nr_cmds; i++) {
 		struct drm_gem_object *obj;
 		uint64_t iova;
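The net effect on the submit path is a two-phase sequence. A simplified sketch follows, with the hypothetical wrapper submit_pin() standing in for the actual call sites in msm_ioctl_gem_submit():

    static int submit_pin(struct msm_gem_submit *submit)
    {
            int ret;

            /* Phase 1: resolve vmas and backing pages; may allocate
             * (get_pages()), so it runs without the LRU lock held. */
            ret = submit_pin_vmas(submit);
            if (ret)
                    return ret;

            /* Phase 2: flag the objects as pinned under a single LRU
             * lock hold; this phase does not allocate and cannot fail. */
            submit_pin_objects(submit);

            return 0;
    }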
From patchwork Sat Dec 7 16:15:13 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848117
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [RFC 13/24] drm/msm: Lazily create context VM
Date: Sat, 7 Dec 2024 08:15:13 -0800
Message-ID: <20241207161651.410556-14-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

In the next commit, a way for userspace to opt-in to userspace managed
VM is added.  For this to work, we need to defer creation of the VM
until it is needed.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  3 ++-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 13 ++++++-----
 drivers/gpu/drm/msm/msm_drv.c           | 29 ++++++++++++++++++++-----
 drivers/gpu/drm/msm/msm_gem_submit.c    |  2 +-
 drivers/gpu/drm/msm/msm_gpu.h           |  9 +++++++-
 5 files changed, 42 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 79a692288d18..97ec1dedeb98 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -112,6 +112,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 {
 	bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
 	struct msm_context *ctx = submit->queue->ctx;
+	struct drm_gpuvm *vm = msm_context_vm(submit->dev, ctx);
 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
 	phys_addr_t ttbr;
 	u32 asid;
@@ -120,7 +121,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 	if (ctx->seqno == ring->cur_ctx_seqno)
 		return;
 
-	if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid))
+	if (msm_iommu_pagetable_params(to_msm_vm(vm)->mmu, &ttbr, &asid))
 		return;
 
 	if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 3104ad878cf1..033c1c9c457e 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -313,6 +313,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct drm_device *drm = gpu->dev;
+	struct drm_gpuvm *vm = msm_context_vm(drm, ctx);
 
 	/* No pointer params yet */
 	if (*len != 0)
@@ -358,8 +359,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		*value = 0;
 		return 0;
 	case MSM_PARAM_FAULTS:
-		if (ctx->vm)
-			*value = gpu->global_faults + to_msm_vm(ctx->vm)->faults;
+		if (vm)
+			*value = gpu->global_faults + to_msm_vm(vm)->faults;
 		else
 			*value = gpu->global_faults;
 		return 0;
@@ -367,14 +368,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		*value = gpu->suspend_count;
 		return 0;
 	case MSM_PARAM_VA_START:
-		if (ctx->vm == gpu->vm)
+		if (vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->mm_start;
+		*value = vm->mm_start;
 		return 0;
 	case MSM_PARAM_VA_SIZE:
-		if (ctx->vm == gpu->vm)
+		if (vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->mm_range;
+		*value = vm->mm_range;
 		return 0;
 	case MSM_PARAM_HIGHEST_BANK_BIT:
 		*value = adreno_gpu->ubwc_config.highest_bank_bit;
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index ab0998c2e846..7a23549db97d 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -214,10 +214,29 @@ static void load_gpu(struct drm_device *dev)
 	mutex_unlock(&init_lock);
 }
 
+/**
+ * msm_context_vm - lazily create the context's VM
+ *
+ * @dev: the drm device
+ * @ctx: the context
+ *
+ * The VM is lazily created, so that userspace has a chance to opt-in to having
+ * a userspace managed VM before the VM is created.
+ *
+ * Note that this does not return a reference to the VM.  Once the VM is created,
+ * it exists for the lifetime of the context.
+ */
+struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
+{
+	struct msm_drm_private *priv = dev->dev_private;
+	if (!ctx->vm)
+		ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
+	return ctx->vm;
+}
+
 static int context_init(struct drm_device *dev, struct drm_file *file)
 {
 	static atomic_t ident = ATOMIC_INIT(0);
-	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_context *ctx;
 
 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
@@ -230,7 +249,6 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
 	kref_init(&ctx->ref);
 	msm_submitqueue_init(dev, ctx);
 
-	ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
 	file->driver_priv = ctx;
 
 	ctx->seqno = atomic_inc_return(&ident);
@@ -408,7 +426,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 	 * Don't pin the memory here - just get an address so that userspace can
 	 * be productive
 	 */
-	return msm_gem_get_iova(obj, ctx->vm, iova);
+	return msm_gem_get_iova(obj, msm_context_vm(dev, ctx), iova);
 }
 
 static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
@@ -417,18 +435,19 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 {
 	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_context *ctx = file->driver_priv;
+	struct drm_gpuvm *vm = msm_context_vm(dev, ctx);
 
 	if (!priv->gpu)
 		return -EINVAL;
 
 	/* Only supported if per-process address space is supported: */
-	if (priv->gpu->vm == ctx->vm)
+	if (priv->gpu->vm == vm)
 		return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");
 
 	if (should_fail(&fail_gem_iova, obj->size))
 		return -ENOMEM;
 
-	return msm_gem_set_iova(obj, ctx->vm, iova);
+	return msm_gem_set_iova(obj, vm, iova);
 }
 
 static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 51c92fe1146f..5e37e1dad5bb 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 	kref_init(&submit->ref);
 	submit->dev = dev;
-	submit->vm = queue->ctx->vm;
+	submit->vm = msm_context_vm(dev, queue->ctx);
 	submit->gpu = gpu;
 	submit->cmd = (void *)&submit->bos[nr_bos];
 	submit->queue = queue;
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index ad6f14891205..5efbca0b9fb1 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -354,7 +354,12 @@ struct msm_context {
 	 */
 	int queueid;
 
-	/** @vm: the per-process GPU address-space */
+	/**
+	 * @vm:
+	 *
+	 * The per-process GPU address-space.  Do not access directly, use
+	 * msm_context_vm().
+	 */
 	struct drm_gpuvm *vm;
 
 	/** @kref: the reference count */
 	struct kref ref;
@@ -439,6 +444,8 @@ struct msm_context {
 	atomic64_t ctx_mem;
 };
 
+struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
+
 /**
  * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
  *
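A sketch of the calling convention this establishes (hypothetical caller, modeled on the adreno_get_param() hunk above): any path that can run before the first mapping goes through msm_context_vm() rather than reading ctx->vm directly, which may still be NULL.

    static int example_get_faults(struct drm_device *dev,
                                  struct msm_context *ctx, uint64_t *value)
    {
            /* creates the VM on first use; no reference is returned */
            struct drm_gpuvm *vm = msm_context_vm(dev, ctx);

            *value = to_msm_vm(vm)->faults;
            return 0;
    }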
From patchwork Sat Dec 7 16:15:14 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848319
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [RFC 14/24] drm/msm: Add opt-in for VM_BIND
Date: Sat, 7 Dec 2024 08:15:14 -0800
Message-ID: <20241207161651.410556-15-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

Add a SET_PARAM for userspace to request to manage the VM itself,
instead of getting a kernel managed VM.

In order to transition to a userspace managed VM, this param must be
set before any mappings are created.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  4 ++--
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 15 +++++++++++++
 drivers/gpu/drm/msm/msm_drv.c           | 13 +++++++++--
 drivers/gpu/drm/msm/msm_gem.c           |  5 +++++
 drivers/gpu/drm/msm/msm_gpu.c           |  5 +++--
 drivers/gpu/drm/msm/msm_gpu.h           | 29 +++++++++++++++++++++++--
 include/uapi/drm/msm_drm.h              | 24 ++++++++++++++++++++
 7 files changed, 87 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 97ec1dedeb98..ced5206bdc81 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2251,7 +2251,7 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 }
 
 static struct drm_gpuvm *
-a6xx_create_private_vm(struct msm_gpu *gpu)
+a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed)
 {
 	struct msm_mmu *mmu;
 
@@ -2261,7 +2261,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu)
 		return ERR_CAST(mmu);
 
 	return msm_gem_vm_create(gpu->dev, mmu, "gpu", 0x100000000ULL,
-				 adreno_private_vm_size(gpu), true);
+				 adreno_private_vm_size(gpu), kernel_managed);
 }
 
 static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 033c1c9c457e..90848852ee50 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -444,6 +444,21 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		if (!capable(CAP_SYS_ADMIN))
 			return UERR(EPERM, drm, "invalid permissions");
 		return msm_context_set_sysprof(ctx, gpu, value);
+	case MSM_PARAM_EN_VM_BIND:
+		/* We can only support VM_BIND with per-process pgtables: */
+		if (ctx->vm == gpu->vm)
+			return UERR(EINVAL, drm, "requires per-process pgtables");
+
+		/*
+		 * We can only switch to VM_BIND mode if the VM has not yet
+		 * been created:
+		 */
+		if (ctx->vm)
+			return UERR(EBUSY, drm, "VM already created");
+
+		ctx->userspace_managed_vm = value;
+
+		return 0;
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 7a23549db97d..b31ec287c600 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -229,8 +229,11 @@ struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
 {
 	struct msm_drm_private *priv = dev->dev_private;
-	if (!ctx->vm)
-		ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
+	if (!ctx->vm) {
+		ctx->vm = msm_gpu_create_private_vm(
+			priv->gpu, current, !ctx->userspace_managed_vm);
+
+	}
 	return ctx->vm;
 }
 
@@ -419,6 +422,9 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 	if (!priv->gpu)
 		return -EINVAL;
 
+	if (msm_context_is_vmbind(ctx))
+		return UERR(EINVAL, dev, "VM_BIND is enabled");
+
 	if (should_fail(&fail_gem_iova, obj->size))
 		return -ENOMEM;
 
@@ -440,6 +446,9 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 	if (!priv->gpu)
 		return -EINVAL;
 
+	if (msm_context_is_vmbind(ctx))
+		return UERR(EINVAL, dev, "VM_BIND is enabled");
+
 	/* Only supported if per-process address space is supported: */
 	if (priv->gpu->vm == vm)
 		return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 99e0ce38cd92..0bfc993571fc 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -47,8 +47,13 @@ static void
put_iova_spaces(struct drm_gem_object *obj, bool close);
 
 static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 {
+	struct msm_context *ctx = file->driver_priv;
+
 	update_ctx_mem(file, -obj->size);
 
+	if (msm_context_is_vmbind(ctx))
+		return;
+
 	/*
 	 * TODO we might need to kick this to a queue to avoid blocking
 	 * in CLOSE ioctl
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 17d8be47db19..5def12abac6c 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -822,7 +822,8 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu)
 
 /* Return a new address space for a msm_drm_private instance */
 struct drm_gpuvm *
-msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task,
+			  bool kernel_managed)
 {
 	struct drm_gpuvm *vm = NULL;
 
@@ -834,7 +835,7 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
 	 * the global one
 	 */
 	if (gpu->funcs->create_private_vm) {
-		vm = gpu->funcs->create_private_vm(gpu);
+		vm = gpu->funcs->create_private_vm(gpu, kernel_managed);
 		if (!IS_ERR(vm))
 			to_msm_vm(vm)->pid = get_pid(task_pid(task));
 	}
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 5efbca0b9fb1..70abbd93e11b 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -79,7 +79,7 @@ struct msm_gpu_funcs {
 	void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp,
 			     bool suspended);
 	struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev);
-	struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu);
+	struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu, bool kernel_managed);
 	uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
 
 	/**
@@ -354,6 +354,14 @@ struct msm_context {
 	 */
 	int queueid;
 
+	/**
+	 * @userspace_managed_vm:
+	 *
+	 * Has userspace opted-in to userspace managed VM (ie. VM_BIND) via
+	 * MSM_PARAM_EN_VM_BIND?
+	 */
+	bool userspace_managed_vm;
+
 	/**
 	 * @vm:
 	 *
@@ -446,6 +454,22 @@ struct msm_context {
 
 struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
 
+/**
+ * msm_context_is_vmbind() - has userspace opted in to VM_BIND?
+ *
+ * @ctx: the drm_file context
+ *
+ * See MSM_PARAM_EN_VM_BIND.  If userspace is managing the VM, it can
+ * do sparse binding including having multiple, potentially partial,
+ * mappings in the VM.  Therefore certain legacy uabi (ie. GET_IOVA,
+ * SET_IOVA) are rejected because they don't have a sensible meaning.
+ */
+static inline bool
+msm_context_is_vmbind(struct msm_context *ctx)
+{
+	return ctx->userspace_managed_vm;
+}
+
 /**
  * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
  *
@@ -673,7 +697,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 		const char *name, struct msm_gpu_config *config);
 
 struct drm_gpuvm *
-msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task);
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task,
+			  bool kernel_managed);
 
 void msm_gpu_cleanup(struct msm_gpu *gpu);
 
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 2342cb90857e..072e82a80607 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -91,6 +91,30 @@ struct drm_msm_timespec {
 #define MSM_PARAM_UBWC_SWIZZLE   0x12 /* RO */
 #define MSM_PARAM_MACROTILE_MODE 0x13 /* RO */
 #define MSM_PARAM_UCHE_TRAP_BASE 0x14 /* RO */
+/* MSM_PARAM_EN_VM_BIND is set to 1 to enable VM_BIND ops.
+ *
+ * With VM_BIND enabled, userspace is required to allocate iova and use the
+ * VM_BIND ops for map/unmap ioctls.  MSM_INFO_SET_IOVA and MSM_INFO_GET_IOVA
+ * will be rejected.  (The latter does not have a sensible meaning when a BO
+ * can have multiple and/or partial mappings.)
+ *
+ * With VM_BIND enabled, userspace does not include a submit_bo table in the
+ * SUBMIT ioctl (this will be rejected), the resident set is determined by
+ * the VM_BIND ops.
+ *
+ * Enabling VM_BIND will fail on devices which do not have per-process pgtables.
+ * And it is not allowed to disable VM_BIND once it has been enabled.
+ *
+ * Enabling VM_BIND should be done (attempted) prior to allocating any BOs or
+ * submitqueues of type MSM_SUBMITQUEUE_VM_BIND.
+ *
+ * Relatedly, when VM_BIND mode is enabled, the kernel will not try to recover
+ * from GPU faults or failed async VM_BIND ops, in particular because it is
+ * difficult to communicate to userspace which op failed so that userspace
+ * could rewind and try again.  When the VM is marked unusable, the SUBMIT
+ * ioctl will throw -EPIPE.
+ */
+#define MSM_PARAM_EN_VM_BIND     0x15 /* WO, once */
 
 /* For backwards compat.  The original support for preemption was based on
  * a single ring per priority level so # of priority levels equals the #
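For illustration, a userspace sketch of the opt-in sequence described above, using the existing SET_PARAM ioctl plus the MSM_PARAM_EN_VM_BIND value this patch adds; per the uapi comment it must run before the context's VM is created, ie. before any BO is mapped:

    #include <string.h>
    #include <stdint.h>
    #include <xf86drm.h>
    #include "msm_drm.h"

    static int enable_vm_bind(int fd)
    {
            struct drm_msm_param req;

            memset(&req, 0, sizeof(req));
            req.pipe = MSM_PIPE_3D0;
            req.param = MSM_PARAM_EN_VM_BIND;   /* added by this patch */
            req.value = 1;

            /* fails with EINVAL without per-process pgtables, and with
             * EBUSY once the context's VM already exists: */
            return drmIoctl(fd, DRM_IOCTL_MSM_SET_PARAM, &req);
    }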
From patchwork Sat Dec 7 16:15:15 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848116
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [RFC 15/24] drm/msm: Mark VM as unusable on faults
Date: Sat, 7 Dec 2024 08:15:15 -0800
Message-ID: <20241207161651.410556-16-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

If userspace has opted-in to VM_BIND, then GPU faults and VM_BIND errors
will mark the VM as unusable.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h        | 17 +++++++++++++++++
 drivers/gpu/drm/msm/msm_gem_submit.c |  3 +++
 drivers/gpu/drm/msm/msm_gpu.c        | 16 ++++++++++++++--
 3 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index acb976722580..7cb720137548 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -82,6 +82,23 @@ struct msm_gem_vm {
 
 	/** @managed: is this a kernel managed VM? */
 	bool managed;
+
+	/**
+	 * @unusable: True if the VM has turned unusable because something
+	 * bad happened during an asynchronous request.
+	 *
+	 * We don't try to recover from such failures, because this implies
+	 * informing userspace about the specific operation that failed, and
+	 * hoping the userspace driver can replay things from there.  This all
+	 * sounds very complicated for little gain.
+	 *
+	 * Instead, we should just flag the VM as unusable, and fail any
+	 * further request targeting this VM.
+	 *
+	 * As an analogy, this would be mapped to a VK_ERROR_DEVICE_LOST
+	 * situation, where the logical device needs to be re-created.
+	 */
+	bool unusable;
 };
 #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base)
 
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 5e37e1dad5bb..79bbe552f23e 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -668,6 +668,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;
 
+	if (to_msm_vm(ctx->vm)->unusable)
+		return UERR(EPIPE, dev, "context is unusable");
+
 	/* for now, we just have 3d pipe.. eventually this would need to
 	 * be more clever to dispatch to appropriate gpu module:
 	 */
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 5def12abac6c..72e5ad69a08c 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -377,8 +377,20 @@ static void recover_worker(struct kthread_work *work)
 
 	/* Increment the fault counts */
 	submit->queue->faults++;
-	if (submit->vm)
-		to_msm_vm(submit->vm)->faults++;
+	if (submit->vm) {
+		struct msm_gem_vm *vm = to_msm_vm(submit->vm);
+
+		vm->faults++;
+
+		/*
+		 * If userspace has opted-in to VM_BIND (and therefore userspace
+		 * management of the VM), faults mark the VM as unusable.  This
+		 * matches vulkan expectations (vulkan is the main target for
+		 * VM_BIND)
+		 */
+		if (!vm->managed)
+			vm->unusable = true;
+	}
 
 	get_comm_cmdline(submit, &comm, &cmd);
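From the userspace side, the unusable state surfaces as EPIPE from the SUBMIT ioctl (per the MSM_PARAM_EN_VM_BIND comment in the previous patch). A sketch of how a Vulkan driver might treat it, with do_submit() as a hypothetical wrapper:

    #include <errno.h>
    #include <xf86drm.h>
    #include "msm_drm.h"

    static int do_submit(int fd, struct drm_msm_gem_submit *req)
    {
            int ret = drmIoctl(fd, DRM_IOCTL_MSM_GEM_SUBMIT, req);

            if (ret && errno == EPIPE) {
                    /* VM marked unusable; treat like VK_ERROR_DEVICE_LOST
                     * and re-create the logical device, don't retry. */
                    return -EPIPE;
            }

            return ret;
    }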
From patchwork Sat Dec 7 16:15:16 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848318
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [RFC 16/24] drm/msm: Extend SUBMIT ioctl for VM_BIND
Date: Sat, 7 Dec 2024 08:15:16 -0800
Message-ID: <20241207161651.410556-17-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

This is a bit different than the path taken by other clean-slate
drivers, but there is a lot in common between BO pinning in the legacy
"EXEC" path and the "VM_BIND" MAP path.  Also, we want the same fence
and syncobj handling.

(Why bother with fence fd's?  Because for virtgpu nctx, guest syncobj's
exist only as dma_fence's between the guest kernel and host.)

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h        | 10 ++---
 drivers/gpu/drm/msm/msm_gem_submit.c | 65 ++++++++++++++++++++++++----
 include/uapi/drm/msm_drm.h           | 49 ++++++++++++++++++---
 3 files changed, 103 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 7cb720137548..8e29e36ca9c5 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -345,13 +345,13 @@ struct msm_gem_submit {
 		uint32_t nr_relocs;
 		struct drm_msm_gem_submit_reloc *relocs;
 	} *cmd;  /* array of size nr_cmds */
-	struct {
+	struct msm_gem_submit_bo {
 		uint32_t flags;
-		union {
-			struct drm_gem_object *obj;
-			uint32_t handle;
-		};
+		uint32_t handle;
+		struct drm_gem_object *obj;
 		uint64_t iova;
+		uint64_t bo_offset;
+		uint64_t range;
 	} bos[];
 };
 
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 79bbe552f23e..9ac74f9a139e 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -115,23 +115,37 @@ void __msm_gem_submit_destroy(struct kref *kref)
 	kfree(submit);
 }
 
+static bool invalid_alignment(uint64_t addr)
+{
+	/*
+	 * Technically this is about GPU alignment, not CPU alignment.  But
+	 * I've not seen any qcom SoC where the SMMU does not support the
+	 * CPU's smallest page size.
+	 */
+	return !PAGE_ALIGNED(addr);
+}
+
 static int submit_lookup_objects(struct msm_gem_submit *submit,
 		struct drm_msm_gem_submit *args, struct drm_file *file)
 {
-	unsigned i;
+	unsigned i, bo_stride = args->bos_stride;
 	int ret = 0;
 
+	if (!bo_stride)
+		bo_stride = sizeof(struct drm_msm_gem_submit_bo);
+
 	for (i = 0; i < args->nr_bos; i++) {
-		struct drm_msm_gem_submit_bo submit_bo;
+		struct drm_msm_gem_submit_bo_v2 submit_bo = {0};
 		void __user *userptr =
-			u64_to_user_ptr(args->bos + (i * sizeof(submit_bo)));
+			u64_to_user_ptr(args->bos + (i * bo_stride));
+		unsigned copy_sz = min(bo_stride, sizeof(submit_bo));
 
 		/* make sure we don't have garbage flags, in case we hit
 		 * error path before flags is initialized:
 		 */
 		submit->bos[i].flags = 0;
 
-		if (copy_from_user(&submit_bo, userptr, sizeof(submit_bo))) {
+		if (copy_from_user(&submit_bo, userptr, copy_sz)) {
 			ret = -EFAULT;
 			i = 0;
 			goto out;
@@ -141,14 +155,27 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
 
 #define MANDATORY_FLAGS (MSM_SUBMIT_BO_READ | MSM_SUBMIT_BO_WRITE)
 
 		if ((submit_bo.flags & ~MSM_SUBMIT_BO_FLAGS) ||
-		    !(submit_bo.flags & MANDATORY_FLAGS)) {
+		    !(submit_bo.flags & MANDATORY_FLAGS))
 			ret = SUBMIT_ERROR(EINVAL, submit, "invalid flags: %x\n", submit_bo.flags);
+
+		if (invalid_alignment(submit_bo.address))
+			ret = SUBMIT_ERROR(EINVAL, submit, "invalid address: %016llx\n", submit_bo.address);
+
+		if (invalid_alignment(submit_bo.bo_offset))
+			ret = SUBMIT_ERROR(EINVAL, submit, "invalid bo_offset: %016llx\n", submit_bo.bo_offset);
+
+		if (invalid_alignment(submit_bo.range))
+			ret = SUBMIT_ERROR(EINVAL, submit, "invalid range: %016llx\n", submit_bo.range);
+
+		if (ret) {
 			i = 0;
 			goto out;
 		}
 
 		submit->bos[i].handle = submit_bo.handle;
 		submit->bos[i].flags = submit_bo.flags;
+		submit->bos[i].bo_offset = submit_bo.bo_offset;
+		submit->bos[i].range = submit_bo.range;
 	}
 
 	spin_lock(&file->table_lock);
@@ -167,6 +194,15 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
 
 		drm_gem_object_get(obj);
 
+		if (submit->bos[i].bo_offset > obj->size)
+			ret = SUBMIT_ERROR(EINVAL, submit, "bo_offset too large: %016llx\n", submit->bos[i].bo_offset);
+
+		if ((submit->bos[i].bo_offset + submit->bos[i].range) > obj->size)
+			ret = SUBMIT_ERROR(EINVAL, submit, "range too large: %016llx\n", submit->bos[i].range);
+
+		if (ret)
+			goto out_unlock;
+
 		submit->bos[i].obj = obj;
 	}
 
@@ -182,6 +218,7 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
 		struct drm_msm_gem_submit *args, struct drm_file *file)
 {
+	struct msm_context *ctx = file->driver_priv;
 	unsigned i;
 	size_t sz;
 	int ret = 0;
@@ -213,6 +250,19 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
 			goto out;
 		}
 
+		if (msm_context_is_vmbind(ctx)) {
+			if (submit_cmd.nr_relocs) {
+				ret = SUBMIT_ERROR(EINVAL, submit, "nr_relocs must be zero");
+				goto out;
+			}
+			if (submit_cmd.submit_idx || submit_cmd.submit_offset) {
+				ret = SUBMIT_ERROR(EINVAL, submit, "submit_idx/offset must be zero");
+				goto out;
+			}
+
+			submit->cmd[i].iova = submit_cmd.iova;
+		}
+
 		submit->cmd[i].type = submit_cmd.type;
 		submit->cmd[i].size = submit_cmd.size / 4;
 		submit->cmd[i].offset = submit_cmd.submit_offset / 4;
@@ -665,9 +715,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	if (!gpu)
 		return -ENXIO;
 
-	if (args->pad)
-		return -EINVAL;
-
 	if (to_msm_vm(ctx->vm)->unusable)
 		return UERR(EPIPE, dev, "context is unusable");
 
@@ -677,7 +724,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	if (MSM_PIPE_ID(args->flags) != MSM_PIPE_3D0)
 		return UERR(EINVAL, dev, "invalid pipe");
 
-	if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_FLAGS)
+	if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_EXEC_FLAGS)
 		return UERR(EINVAL, dev, "invalid flags");
 
 	if (args->flags & MSM_SUBMIT_SUDO) {
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 072e82a80607..1a948d49c610 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -245,7 +245,10 @@ struct drm_msm_gem_submit_cmd {
 	__u32 size;           /* in, cmdstream size */
 	__u32 pad;
 	__u32 nr_relocs;      /* in, number of submit_reloc's */
-	__u64 relocs;         /* in, ptr to array of submit_reloc's */
+	union {
+		__u64 relocs; /* in, ptr to array of submit_reloc's */
+		__u64 iova;   /* cmdstream address (for VM_BIND contexts) */
+	};
 };
 
 /* Each buffer referenced elsewhere in the cmdstream submit (ie. the
@@ -264,6 +267,19 @@ struct drm_msm_gem_submit_cmd {
 #define MSM_SUBMIT_BO_DUMP             0x0004
 #define MSM_SUBMIT_BO_NO_IMPLICIT      0x0008
 
+/* Map OP for submits to a VM_BIND submitqueue:
+ * - MAP: map a specified range of the BO into the VM
+ * - MAP_NULL: map a NULL page into the specified range of the VM, handle
+ *   and bo_offset MBZ.  A NULL range will return zero on reads
+ *   and discard writes
+ *   see: VkPhysicalDeviceSparseProperties::residencyNonResidentStrict
+ * - UNMAP: unmap a specified VM range, handle and bo_offset MBZ
+ */
+#define MSM_SUBMIT_BO_OP_MASK          0xf000
+#define MSM_SUBMIT_BO_OP_MAP           0x0000
+#define MSM_SUBMIT_BO_OP_MAP_NULL      0x1000
+#define MSM_SUBMIT_BO_OP_UNMAP         0x2000
+
 #define MSM_SUBMIT_BO_FLAGS            (MSM_SUBMIT_BO_READ | \
 					MSM_SUBMIT_BO_WRITE | \
 					MSM_SUBMIT_BO_DUMP | \
@@ -272,7 +288,16 @@ struct drm_msm_gem_submit_cmd {
 struct drm_msm_gem_submit_bo {
 	__u32 flags;          /* in, mask of MSM_SUBMIT_BO_x */
 	__u32 handle;         /* in, GEM handle */
-	__u64 presumed;       /* in/out, presumed buffer address */
+	__u64 address;        /* in/out, presumed buffer address */
+};
+
+struct drm_msm_gem_submit_bo_v2 {
+	__u32 flags;          /* in, mask of MSM_SUBMIT_BO_x */
+	__u32 handle;         /* in, GEM handle */
+	__u64 address;        /* in/out, presumed buffer address */
+	/* Remaining fields are only used with MSM_SUBMIT_OP_VM_BIND/_ASYNC: */
+	__u64 bo_offset;
+	__u64 range;
 };
 
 /* Valid submit ioctl flags: */
@@ -283,7 +308,8 @@ struct drm_msm_gem_submit_bo {
 #define MSM_SUBMIT_SYNCOBJ_IN    0x08000000 /* enable input syncobj */
 #define MSM_SUBMIT_SYNCOBJ_OUT   0x04000000 /* enable output syncobj */
 #define MSM_SUBMIT_FENCE_SN_IN   0x02000000 /* userspace passes in seqno fence */
-#define MSM_SUBMIT_FLAGS                ( \
+
+#define MSM_SUBMIT_EXEC_FLAGS           ( \
 		MSM_SUBMIT_NO_IMPLICIT   | \
 		MSM_SUBMIT_FENCE_FD_IN   | \
 		MSM_SUBMIT_FENCE_FD_OUT  | \
@@ -293,6 +319,13 @@ struct drm_msm_gem_submit_bo {
 		MSM_SUBMIT_FENCE_SN_IN   | \
 		0)
 
+#define MSM_SUBMIT_VM_BIND_FLAGS        ( \
+		MSM_SUBMIT_FENCE_FD_IN   | \
+		MSM_SUBMIT_FENCE_FD_OUT  | \
+		MSM_SUBMIT_SYNCOBJ_IN    | \
+		MSM_SUBMIT_SYNCOBJ_OUT   | \
+		0)
+
 #define MSM_SUBMIT_SYNCOBJ_RESET 0x00000001 /* Reset syncobj after wait. */
 #define MSM_SUBMIT_SYNCOBJ_FLAGS        ( \
 		MSM_SUBMIT_SYNCOBJ_RESET | \
@@ -307,14 +340,17 @@ struct drm_msm_gem_submit_syncobj {
 /* Each cmdstream submit consists of a table of buffers involved, and
 * one or more cmdstream buffers.  This allows for conditional execution
 * (context-restore), and IB buffers needed for per tile/bin draw cmds.
 *
 * For MSM_SUBMIT_VM_BIND/_ASYNC operations, the queue must have been
 * created with the MSM_SUBMITQUEUE_VM_BIND flag.
*/ struct drm_msm_gem_submit { __u32 flags; /* MSM_PIPE_x | MSM_SUBMIT_x */ __u32 fence; /* out (or in with MSM_SUBMIT_FENCE_SN_IN flag) */ __u32 nr_bos; /* in, number of submit_bo's */ - __u32 nr_cmds; /* in, number of submit_cmd's */ + __u32 nr_cmds; /* in, number of submit_cmd's, MBZ for VM_BIND queue */ __u64 bos; /* in, ptr to array of submit_bo's */ - __u64 cmds; /* in, ptr to array of submit_cmd's */ + __u64 cmds; /* in, ptr to array of submit_cmd's, MBZ for VM_BIND queue */ __s32 fence_fd; /* in/out fence fd (see MSM_SUBMIT_FENCE_FD_IN/OUT) */ __u32 queueid; /* in, submitqueue id */ __u64 in_syncobjs; /* in, ptr to array of drm_msm_gem_submit_syncobj */ @@ -322,8 +358,7 @@ struct drm_msm_gem_submit { __u32 nr_in_syncobjs; /* in, number of entries in in_syncobj */ __u32 nr_out_syncobjs; /* in, number of entries in out_syncobj. */ __u32 syncobj_stride; /* in, stride of syncobj arrays. */ - __u32 pad; /*in, reserved for future use, always 0. */ - + __u32 bos_stride; /* in, stride of bos array, if zero, 16 bytes is used. */ }; #define MSM_WAIT_FENCE_BOOST 0x00000001

From patchwork Sat Dec 7 16:15:17 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848115
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC 17/24] drm/msm: Add VM_BIND submitqueue
Date: Sat, 7 Dec 2024 08:15:17 -0800
Message-ID: <20241207161651.410556-18-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

This submitqueue type isn't tied to a hw ringbuffer, but instead executes on the CPU for performing async VM_BIND ops.
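For illustration, a minimal userspace sketch (not part of the series) of driving the new uapi: create a VM_BIND submitqueue, then submit a single MAP op through the extended bos array. The struct and flag names are the ones proposed in this series; drmIoctl() and the DRM_IOCTL_MSM_* numbers come from libdrm's msm_drm.h, and error/fence handling is omitted:

#include <stdint.h>
#include <xf86drm.h>
#include "msm_drm.h"

static int vm_bind_map(int fd, uint32_t bo_handle, uint64_t iova,
		       uint64_t bo_offset, uint64_t range)
{
	struct drm_msm_submitqueue qreq = {
		.flags = MSM_SUBMITQUEUE_VM_BIND,
		.prio = 0,		/* must be zero for VM_BIND queues */
	};

	if (drmIoctl(fd, DRM_IOCTL_MSM_SUBMITQUEUE_NEW, &qreq))
		return -1;	/* fails with -EINVAL on kernel managed VMs */

	struct drm_msm_gem_submit_bo_v2 bo = {
		.flags = MSM_SUBMIT_BO_OP_MAP,
		.handle = bo_handle,
		.address = iova,	/* where to map in the GPU VM */
		.bo_offset = bo_offset,	/* page aligned offset into the BO */
		.range = range,		/* page aligned length to map */
	};

	struct drm_msm_gem_submit req = {
		.flags = MSM_PIPE_3D0,
		.queueid = qreq.id,
		.nr_bos = 1,
		.bos = (uintptr_t)&bo,
		.bos_stride = sizeof(bo),	/* v2 layout, not the 16 byte v1 */
		/* nr_cmds/cmds stay zero: MBZ on a VM_BIND queue */
	};

	return drmIoctl(fd, DRM_IOCTL_MSM_GEM_SUBMIT, &req);
}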
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 3 +- drivers/gpu/drm/msm/msm_gem.h | 10 +++ drivers/gpu/drm/msm/msm_gem_submit.c | 123 ++++++++++++++++++++++---- drivers/gpu/drm/msm/msm_gem_vma.c | 100 +++++++++++++++++++++ drivers/gpu/drm/msm/msm_gpu.h | 3 + drivers/gpu/drm/msm/msm_submitqueue.c | 57 +++++++++--- include/uapi/drm/msm_drm.h | 9 +- 7 files changed, 275 insertions(+), 30 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 0bfc993571fc..66332481c4c3 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -215,8 +215,7 @@ static void put_pages(struct drm_gem_object *obj) } } -static struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, - unsigned madv) +struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigned madv) { struct msm_gem_object *msm_obj = to_msm_bo(obj); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 8e29e36ca9c5..a2255fd269ca 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -53,6 +53,13 @@ struct msm_gem_vm { /** @base: Inherit from drm_gpuvm. */ struct drm_gpuvm base; + /** + * @sched: Scheduler used for asynchronous VM_BIND requests. + * + * Unused for kernel managed VMs (where all operations are synchronous). + */ + struct drm_gpu_scheduler sched; + /** * @mm: Memory management for kernel managed VA allocations * @@ -106,6 +113,8 @@ struct drm_gpuvm * msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed); +void msm_gem_vm_close(struct drm_gpuvm *vm); + struct msm_fence_context; /** @@ -195,6 +204,7 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, uint64_t *iova); void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm); void msm_gem_pin_obj_locked(struct drm_gem_object *obj); +struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigned madv); struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj); void msm_gem_unpin_pages_locked(struct drm_gem_object *obj); int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 9ac74f9a139e..8295c21e4ca0 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -115,6 +115,17 @@ void __msm_gem_submit_destroy(struct kref *kref) kfree(submit); } +static bool invalid_bo_flags(bool vm_bind, uint32_t flags) +{ + if (vm_bind) { + return flags & ~(MSM_SUBMIT_BO_FLAGS | MSM_SUBMIT_BO_OP_MASK); + } else { + /* at least one of READ and/or WRITE flags should be set: */ + return (flags & ~MSM_SUBMIT_BO_FLAGS) || + !(flags & (MSM_SUBMIT_BO_READ | MSM_SUBMIT_BO_WRITE)); + } +} + static bool invalid_alignment(uint64_t addr) { /* @@ -129,9 +140,10 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, struct drm_msm_gem_submit *args, struct drm_file *file) { unsigned i, bo_stride = args->bos_stride; + bool vm_bind = !!(submit->queue->flags & MSM_SUBMITQUEUE_VM_BIND); int ret = 0; - if (!bo_stride) + if (!bo_stride || !vm_bind) bo_stride = sizeof(struct drm_msm_gem_submit_bo); for (i = 0; i < args->nr_bos; i++) { @@ -151,11 +163,7 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, goto out; } -/* at least one of READ and/or WRITE flags should be set: */ -#define MANDATORY_FLAGS (MSM_SUBMIT_BO_READ | MSM_SUBMIT_BO_WRITE) - - if ((submit_bo.flags &
~MSM_SUBMIT_BO_FLAGS) || - !(submit_bo.flags & MANDATORY_FLAGS)) + if (invalid_bo_flags(vm_bind, submit_bo.flags)) ret = SUBMIT_ERROR(EINVAL, submit, "invalid flags: %x\n", submit_bo.flags); if (invalid_alignment(submit_bo.address)) @@ -174,6 +182,7 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, submit->bos[i].handle = submit_bo.handle; submit->bos[i].flags = submit_bo.flags; + submit->bos[i].iova = submit_bo.address; submit->bos[i].bo_offset = submit_bo.bo_offset; submit->bos[i].range = submit_bo.range; } @@ -183,6 +192,12 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, for (i = 0; i < args->nr_bos; i++) { struct drm_gem_object *obj; + if (vm_bind) { + unsigned op = submit->bos[i].flags & MSM_SUBMIT_BO_OP_MASK; + if (op != MSM_SUBMIT_BO_OP_MAP) + continue; + } + /* normally use drm_gem_object_lookup(), but for bulk lookup * all under single table_lock just hit object_idr directly: */ @@ -297,13 +312,21 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit, /* This is where we make sure all the bo's are reserved and pin'd: */ static int submit_lock_objects(struct msm_gem_submit *submit) { + unsigned flags = DRM_EXEC_INTERRUPTIBLE_WAIT; int ret; - drm_exec_init(&submit->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, submit->nr_bos); + if (submit->queue->flags & MSM_SUBMITQUEUE_VM_BIND) + flags |= DRM_EXEC_IGNORE_DUPLICATES; + + drm_exec_init(&submit->exec, flags, submit->nr_bos); drm_exec_until_all_locked (&submit->exec) { for (unsigned i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; + + if (!obj) + continue; + ret = drm_exec_prepare_obj(&submit->exec, obj, 1); drm_exec_retry_on_contention(&submit->exec); if (ret) @@ -372,6 +395,28 @@ static int submit_pin_vmas(struct msm_gem_submit *submit) return ret; } +static int submit_get_pages(struct msm_gem_submit *submit) +{ + /* + * First loop, before holding the LRU lock, avoids holding the + * LRU lock while calling msm_gem_pin_vma_locked (which could + * trigger get_pages()) + */ + for (int i = 0; i < submit->nr_bos; i++) { + struct drm_gem_object *obj = submit->bos[i].obj; + struct page **pages; + + if (!obj) + continue; + + pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED); + if (IS_ERR(pages)) + return PTR_ERR(pages); + } + + return 0; +} + static void submit_pin_objects(struct msm_gem_submit *submit) { struct msm_drm_private *priv = submit->dev->dev_private; @@ -385,7 +430,12 @@ static void submit_pin_objects(struct msm_gem_submit *submit) */ mutex_lock(&priv->lru.lock); for (int i = 0; i < submit->nr_bos; i++) { - msm_gem_pin_obj_locked(submit->bos[i].obj); + struct drm_gem_object *obj = submit->bos[i].obj; + + if (!obj) + continue; + + msm_gem_pin_obj_locked(obj); } mutex_unlock(&priv->lru.lock); @@ -400,6 +450,9 @@ static void submit_unpin_objects(struct msm_gem_submit *submit) for (int i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; + if (!obj) + continue; + msm_gem_unpin_locked(obj); } @@ -413,6 +466,9 @@ static void submit_attach_object_fences(struct msm_gem_submit *submit) for (i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; + if (!obj) + continue; + if (submit->bos[i].flags & MSM_SUBMIT_BO_WRITE) dma_resv_add_fence(obj->resv, submit->user_fence, DMA_RESV_USAGE_WRITE); @@ -708,6 +764,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, struct msm_ringbuffer *ring; struct msm_submit_post_dep *post_deps = NULL; struct drm_syncobj **syncobjs_to_reset = NULL; + unsigned 
cmds_to_parse; int out_fence_fd = -1; unsigned i; int ret; @@ -724,9 +781,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (MSM_PIPE_ID(args->flags) != MSM_PIPE_3D0) return UERR(EINVAL, dev, "invalid pipe"); - if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_EXEC_FLAGS) - return UERR(EINVAL, dev, "invalid flags"); - if (args->flags & MSM_SUBMIT_SUDO) { if (!IS_ENABLED(CONFIG_DRM_MSM_GPU_SUDO) || !capable(CAP_SYS_RAWIO)) @@ -737,6 +791,26 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (!queue) return -ENOENT; + if (queue->flags & MSM_SUBMITQUEUE_VM_BIND) { + if (args->nr_cmds || args->cmds) { + ret = UERR(EINVAL, dev, "nr_cmds should be zero for VM_BIND queue"); + goto out_post_unlock; + } + if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_VM_BIND_FLAGS) { + ret = UERR(EINVAL, dev, "invalid flags"); + goto out_post_unlock; + } + } else { + if (msm_context_is_vmbind(ctx) && (args->nr_bos || args->bos)) { + ret = UERR(EINVAL, dev, "nr_bos should be zero for VM_BIND contexts"); + goto out_post_unlock; + } + if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_EXEC_FLAGS) { + ret = UERR(EINVAL, dev, "invalid flags"); + goto out_post_unlock; + } + } + ring = gpu->rb[queue->ring_nr]; if (args->flags & MSM_SUBMIT_FENCE_FD_OUT) { @@ -813,19 +887,38 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (ret) goto out; - if (!(args->flags & MSM_SUBMIT_NO_IMPLICIT)) { + if (msm_context_is_vmbind(ctx) && !(queue->flags & MSM_SUBMITQUEUE_VM_BIND)) { + /* + * If we are not using VM_BIND, submit_pin_vmas() will validate + * just the BOs attached to the submit. In that case we don't + * need to validate the _entire_ vm, because userspace tracked + * what BOs are associated with the submit. + */ + ret = drm_gpuvm_validate(submit->vm, &submit->exec); + if (ret) + goto out; + } + + if (!(args->flags & MSM_SUBMIT_NO_IMPLICIT) && + !(queue->flags & MSM_SUBMITQUEUE_VM_BIND)) { ret = submit_fence_sync(submit); if (ret) goto out; } - ret = submit_pin_vmas(submit); + if (queue->flags & MSM_SUBMITQUEUE_VM_BIND) { + ret = submit_get_pages(submit); + } else { + ret = submit_pin_vmas(submit); + } if (ret) goto out; submit_pin_objects(submit); - for (i = 0; i < args->nr_cmds; i++) { + cmds_to_parse = msm_context_is_vmbind(ctx) ? 
0 : args->nr_cmds; + + for (i = 0; i < cmds_to_parse; i++) { struct drm_gem_object *obj; uint64_t iova; @@ -857,7 +950,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, goto out; } - submit->nr_cmds = i; + submit->nr_cmds = args->nr_cmds; idr_preload(GFP_KERNEL); diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index b37bfd80bca9..2160d492a999 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -162,6 +162,70 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops = { .vm_free = msm_gem_vm_free, }; +static int +run_bo_op(struct msm_gem_submit *submit, const struct msm_gem_submit_bo *bo) +{ + unsigned op = bo->flags & MSM_SUBMIT_BO_OP_MASK; + + switch (op) { + case MSM_SUBMIT_BO_OP_MAP: + case MSM_SUBMIT_BO_OP_MAP_NULL: + return drm_gpuvm_sm_map(submit->vm, submit->vm, bo->iova, + bo->range, bo->obj, bo->bo_offset); + break; + case MSM_SUBMIT_BO_OP_UNMAP: + return drm_gpuvm_sm_unmap(submit->vm, submit->vm, bo->iova, + bo->range); + } + + return -EINVAL; +} + +static struct dma_fence * +msm_vma_job_run(struct drm_sched_job *job) +{ + struct msm_gem_submit *submit = to_msm_submit(job); + + for (unsigned i = 0; i < submit->nr_bos; i++) { + int ret = run_bo_op(submit, &submit->bos[i]); + if (ret) { + to_msm_vm(submit->vm)->unusable = true; + return ERR_PTR(ret); + } + } + + /* VM_BIND ops run on CPU, so we are done now: */ + msm_submit_retire(submit); + + for (int i = 0; i < submit->nr_bos; i++) { + struct drm_gem_object *obj = submit->bos[i].obj; + + if (!obj) + continue; + + msm_gem_lock(obj); + msm_gem_unpin_locked(obj); + msm_gem_unlock(obj); + } + + /* VM_BIND ops are synchronous, so no fence to wait on: */ + return NULL; +} + +static void +msm_vma_job_free(struct drm_sched_job *job) +{ + struct msm_gem_submit *submit = to_msm_submit(job); + + drm_sched_job_cleanup(job); + msm_gem_submit_put(submit); +} + +static const struct drm_sched_backend_ops msm_vm_bind_ops = { + .run_job = msm_vma_job_run, + .free_job = msm_vma_job_free }; + /** * msm_gem_vm_create() - Create and initialize a &msm_gem_vm * @drm: the drm device @@ -197,6 +261,14 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, goto err_free_vm; } + if (!managed) { + ret = drm_sched_init(&vm->sched, &msm_vm_bind_ops, NULL, 1, 1, 0, + MAX_SCHEDULE_TIMEOUT, NULL, NULL, + "msm-vm-bind", drm->dev); + if (ret) + goto err_free_dummy; + } + drm_gpuvm_init(&vm->base, name, 0, drm, dummy_gem, va_start, va_size, 0, 0, &msm_gpuvm_ops); drm_gem_object_put(dummy_gem); @@ -211,8 +283,36 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, return &vm->base; +err_free_dummy: + drm_gem_object_put(dummy_gem); + err_free_vm: kfree(vm); return ERR_PTR(ret); } + +/** + * msm_gem_vm_close() - Close a VM + * @_vm: The VM to close + * + * Called when the drm device file is closed, to tear down VM related resources + * (which will drop refcounts to GEM objects that were still mapped into the + * VM at the time). + */ +void +msm_gem_vm_close(struct drm_gpuvm *_vm) +{ + struct msm_gem_vm *vm = to_msm_vm(_vm); + + /* + * For kernel managed VMs, the VMAs are torn down when the handle is + * closed, so nothing more to do.
+ */ + if (vm->managed) + return; + + /* Kill the scheduler now, so we aren't racing with it for cleanup: */ + drm_sched_stop(&vm->sched, NULL); + drm_sched_fini(&vm->sched); +} diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 70abbd93e11b..fe716f0004f2 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -554,6 +554,9 @@ struct msm_gpu_submitqueue { struct mutex lock; struct kref ref; struct drm_sched_entity *entity; + + /** @_vm_bind_entity: used for @entity pointer for VM_BIND queues */ + struct drm_sched_entity _vm_bind_entity[0]; }; struct msm_gpu_state_bo { diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c index 8ced49c7557b..99ab780d5d7b 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -72,6 +72,9 @@ void msm_submitqueue_destroy(struct kref *kref) idr_destroy(&queue->fence_idr); + if (queue->entity == &queue->_vm_bind_entity[0]) + drm_sched_entity_destroy(queue->entity); + msm_context_put(queue->ctx); kfree(queue); @@ -115,6 +118,11 @@ void msm_submitqueue_close(struct msm_context *ctx) list_del(&entry->node); msm_submitqueue_put(entry); } + + if (!ctx->vm) + return; + + msm_gem_vm_close(ctx->vm); } static struct drm_sched_entity * @@ -160,8 +168,6 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx, struct msm_drm_private *priv = drm->dev_private; struct msm_gpu_submitqueue *queue; enum drm_sched_priority sched_prio; - extern int enable_preemption; - bool preemption_supported; unsigned ring_nr; int ret; @@ -171,26 +177,53 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx, if (!priv->gpu) return -ENODEV; - preemption_supported = priv->gpu->nr_rings == 1 && enable_preemption != 0; + if (flags & MSM_SUBMITQUEUE_VM_BIND) { + unsigned sz; - if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && preemption_supported) - return -EINVAL; + /* Not allowed for kernel managed VMs (ie. 
kernel allocs VA) */ + if (!msm_context_is_vmbind(ctx)) + return -EINVAL; - ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio); - if (ret) - return ret; + if (prio) + return -EINVAL; + + sz = struct_size(queue, _vm_bind_entity, 1); + queue = kzalloc(sz, GFP_KERNEL); + } else { + extern int enable_preemption; + bool preemption_supported = + priv->gpu->nr_rings == 1 && enable_preemption != 0; + + if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && preemption_supported) + return -EINVAL; - queue = kzalloc(sizeof(*queue), GFP_KERNEL); + ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio); + if (ret) + return ret; + + queue = kzalloc(sizeof(*queue), GFP_KERNEL); + } if (!queue) return -ENOMEM; kref_init(&queue->ref); queue->flags = flags; - queue->ring_nr = ring_nr; - queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr], - ring_nr, sched_prio); + if (flags & MSM_SUBMITQUEUE_VM_BIND) { + struct drm_gpu_scheduler *sched = &to_msm_vm(msm_context_vm(drm, ctx))->sched; + + queue->entity = &queue->_vm_bind_entity[0]; + + drm_sched_entity_init(queue->entity, DRM_SCHED_PRIORITY_KERNEL, + &sched, 1, NULL); + } else { + queue->ring_nr = ring_nr; + + queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr], + ring_nr, sched_prio); + } + if (IS_ERR(queue->entity)) { ret = PTR_ERR(queue->entity); kfree(queue); diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 1a948d49c610..39b55c8d7413 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -404,12 +404,19 @@ struct drm_msm_gem_madvise { /* * Draw queues allow the user to set specific submission parameters. Command * submissions specify a specific submitqueue to use. ID 0 is reserved for - * backwards compatibility as a "default" submitqueue + * backwards compatibility as a "default" submitqueue. + * + * Because VM_BIND async updates happen on the CPU, they must run on a + * virtual queue created with the flag MSM_SUBMITQUEUE_VM_BIND. If we had + * a way to do pgtable updates on the GPU, we could drop this restriction.
*/ #define MSM_SUBMITQUEUE_ALLOW_PREEMPT 0x00000001 +#define MSM_SUBMITQUEUE_VM_BIND 0x00000002 /* virtual queue for VM_BIND ops */ + #define MSM_SUBMITQUEUE_FLAGS ( \ MSM_SUBMITQUEUE_ALLOW_PREEMPT | \ + MSM_SUBMITQUEUE_VM_BIND | \ 0) /*

From patchwork Sat Dec 7 16:15:18 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848317
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC 18/24] drm/msm: Add _NO_SHARE flag
Date: Sat, 7 Dec 2024 08:15:18 -0800
Message-ID: <20241207161651.410556-19-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

Buffers that are not shared between contexts can share a single resv object. This way drm_gpuvm will not track them as external objects, and submit-time validation overhead will be O(1) for all N non-shared BOs, instead of O(N).
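For illustration, a minimal sketch (not part of the series) of allocating such a context-private buffer; DRM_IOCTL_MSM_GEM_NEW and MSM_BO_WC are existing uapi, MSM_BO_NO_SHARE is the flag added below:

#include <stdint.h>
#include <xf86drm.h>
#include "msm_drm.h"

static uint32_t new_private_bo(int fd, uint64_t size)
{
	struct drm_msm_gem_new req = {
		.size = size,
		.flags = MSM_BO_WC | MSM_BO_NO_SHARE,
	};

	if (drmIoctl(fd, DRM_IOCTL_MSM_GEM_NEW, &req))
		return 0;	/* allocation failed */

	/* This handle shares the VM's resv; dma-buf export of it is
	 * refused with -EPERM, and it cannot be pinned for scanout. */
	return req.handle;
}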
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_drv.h | 1 + drivers/gpu/drm/msm/msm_gem.c | 23 +++++++++++++++++++++++ drivers/gpu/drm/msm/msm_gem_prime.c | 15 +++++++++++++++ include/uapi/drm/msm_drm.h | 14 ++++++++++++++ 4 files changed, 53 insertions(+) diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 80582c0c2bf7..4c7ff83a0a20 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -246,6 +246,7 @@ int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map); void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map); struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg); +struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags); int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 66332481c4c3..c21e1284f289 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -511,6 +511,9 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, msm_gem_assert_locked(obj); + if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE) + return -EINVAL; + vma = get_vma_locked(obj, vm, range_start, range_end); if (IS_ERR(vma)) return PTR_ERR(vma); @@ -1026,6 +1029,16 @@ static void msm_gem_free_object(struct drm_gem_object *obj) put_iova_vmas(obj); } + if (msm_obj->flags & MSM_BO_NO_SHARE) { + struct drm_gem_object *r_obj = + container_of(obj->resv, struct drm_gem_object, _resv); + + BUG_ON(obj->resv == &obj->_resv); + + /* Drop reference we hold to shared resv obj: */ + drm_gem_object_put(r_obj); + } + drm_gem_object_release(obj); kfree(msm_obj->metadata); @@ -1058,6 +1071,15 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, if (name) msm_gem_object_set_name(obj, "%s", name); + if (flags & MSM_BO_NO_SHARE) { + struct msm_context *ctx = file->driver_priv; + struct drm_gem_object *r_obj = drm_gpuvm_resv_obj(ctx->vm); + + drm_gem_object_get(r_obj); + + obj->resv = r_obj->resv; + } + ret = drm_gem_handle_create(file, obj, handle); /* drop reference from allocate - handle holds it now */ @@ -1090,6 +1112,7 @@ static const struct drm_gem_object_funcs msm_gem_object_funcs = { .free = msm_gem_free_object, .open = msm_gem_open, .close = msm_gem_close, + .export = msm_gem_prime_export, .pin = msm_gem_prime_pin, .unpin = msm_gem_prime_unpin, .get_sg_table = msm_gem_prime_get_sg_table, diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c index ee267490c935..1a6d8099196a 100644 --- a/drivers/gpu/drm/msm/msm_gem_prime.c +++ b/drivers/gpu/drm/msm/msm_gem_prime.c @@ -16,6 +16,9 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); int npages = obj->size >> PAGE_SHIFT; + if (msm_obj->flags & MSM_BO_NO_SHARE) + return ERR_PTR(-EINVAL); + if (WARN_ON(!msm_obj->pages)) /* should have already pinned! 
*/ return ERR_PTR(-ENOMEM); @@ -45,6 +48,15 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, return msm_gem_import(dev, attach->dmabuf, sg); } + +struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags) +{ + if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE) + return ERR_PTR(-EPERM); + + return drm_gem_prime_export(obj, flags); +} + int msm_gem_prime_pin(struct drm_gem_object *obj) { struct page **pages; @@ -53,6 +65,9 @@ int msm_gem_prime_pin(struct drm_gem_object *obj) if (obj->import_attach) return 0; + if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE) + return -EINVAL; + pages = msm_gem_pin_pages_locked(obj); if (IS_ERR(pages)) ret = PTR_ERR(pages); diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 39b55c8d7413..a7e48ee1dd95 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -138,6 +138,19 @@ struct drm_msm_param { #define MSM_BO_SCANOUT 0x00000001 /* scanout capable */ #define MSM_BO_GPU_READONLY 0x00000002 +/* Private buffers do not need to be explicitly listed in the SUBMIT + * ioctl, unless referenced by a drm_msm_gem_submit_cmd. Private + * buffers may NOT be imported/exported or used for scanout (or any + * other situation where buffers can be indefinitely pinned, but + * cases other than scanout are all kernel owned BOs which are not + * visible to userspace). + * + * In exchange for those constraints, all private BOs associated with + * a single context (drm_file) share a single dma_resv, and if there + * has been no eviction since the last submit, there is no per-BO + * bookkeeping to do, significantly cutting the SUBMIT overhead. + */ +#define MSM_BO_NO_SHARE 0x00000004 #define MSM_BO_CACHE_MASK 0x000f0000 /* cache modes */ #define MSM_BO_CACHED 0x00010000 @@ -147,6 +160,7 @@ struct drm_msm_param { #define MSM_BO_FLAGS (MSM_BO_SCANOUT | \ MSM_BO_GPU_READONLY | \ + MSM_BO_NO_SHARE | \ MSM_BO_CACHE_MASK) struct drm_msm_gem_new {

From patchwork Sat Dec 7 16:15:19 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848114
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC 19/24] drm/msm: Split out helper to get iommu prot flags
Date: Sat, 7 Dec 2024 08:15:19 -0800
Message-ID: <20241207161651.410556-20-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

We'll re-use this in the vm_bind path.

Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 12 ++++++++++-- drivers/gpu/drm/msm/msm_gem.h | 1 + 2 files changed, 11 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index c21e1284f289..7cc4b8955687 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -443,10 +443,9 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj, return vma; } -int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) +int msm_gem_prot(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct page **pages; int prot = IOMMU_READ; if (!(msm_obj->flags & MSM_BO_GPU_READONLY)) @@ -458,6 +457,15 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) if (msm_obj->flags & MSM_BO_CACHED_COHERENT) prot |= IOMMU_CACHE; + return prot; +} + +int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + struct page **pages; + int prot = msm_gem_prot(obj); + msm_gem_assert_locked(obj); pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index a2255fd269ca..a00149d66d37 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -188,6 +188,7 @@ struct msm_gem_object { #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); +int msm_gem_prot(struct drm_gem_object *obj); int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma); void msm_gem_unpin_locked(struct drm_gem_object *obj); void msm_gem_unpin_active(struct drm_gem_object *obj);
From patchwork Sat Dec 7 16:15:20 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848316
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC 20/24] drm/msm: Add mmu support for non-zero offset
Date: Sat, 7 Dec 2024 08:15:20 -0800
Message-ID: <20241207161651.410556-21-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

Only needs to be supported for iopgtables mmu, the other cases are either only used for kernel managed mappings (where offset is always zero) or devices which do not support sparse bindings.

Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a2xx_gpummu.c | 5 ++++- drivers/gpu/drm/msm/msm_gem.c | 4 ++-- drivers/gpu/drm/msm/msm_gem.h | 4 ++-- drivers/gpu/drm/msm/msm_gem_vma.c | 13 +++++++------ drivers/gpu/drm/msm/msm_iommu.c | 22 ++++++++++++++++++++-- drivers/gpu/drm/msm/msm_mmu.h | 2 +- 6 files changed, 36 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c index 39641551eeb6..6124336af2ec 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c @@ -29,13 +29,16 @@ static void a2xx_gpummu_detach(struct msm_mmu *mmu) } static int a2xx_gpummu_map(struct msm_mmu *mmu, uint64_t iova, - struct sg_table *sgt, size_t len, int prot) + struct sg_table *sgt, size_t off, size_t len, + int prot) { struct a2xx_gpummu *gpummu = to_a2xx_gpummu(mmu); unsigned idx = (iova - GPUMMU_VA_START) / GPUMMU_PAGE_SIZE; struct sg_dma_page_iter dma_iter; unsigned prot_bits = 0; + WARN_ON(off != 0); + if (prot & IOMMU_WRITE) prot_bits |= 1; if (prot & IOMMU_READ) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 7cc4b8955687..b6bad702e0c8 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -434,7 +434,7 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj, vma = lookup_vma(obj, vm); if (!vma) { - vma = msm_gem_vma_new(vm, obj, range_start, range_end); + vma = msm_gem_vma_new(vm, obj, 0, range_start, range_end); } else { GEM_WARN_ON(vma->va.addr < range_start); GEM_WARN_ON((vma->va.addr + obj->size) > range_end); @@ -472,7 +472,7 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) if (IS_ERR(pages)) return PTR_ERR(pages); - return msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size); + return msm_gem_vma_map(vma, prot, msm_obj->sgt); } void msm_gem_unpin_locked(struct drm_gem_object *obj) diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index a00149d66d37..71499ec60a5d 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -140,9 +140,9 @@ struct msm_gem_vma { struct drm_gpuva * msm_gem_vma_new(struct drm_gpuvm
*vm, struct drm_gem_object *obj, - u64 range_start, u64 range_end); + u64 offset, u64 range_start, u64 range_end); void msm_gem_vma_purge(struct drm_gpuva *vma); -int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size); +int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt); void msm_gem_vma_close(struct drm_gpuva *vma); struct msm_gem_object { diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 2160d492a999..035d29623519 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -38,8 +38,7 @@ void msm_gem_vma_purge(struct drm_gpuva *vma) /* Map and pin vma: */ int -msm_gem_vma_map(struct drm_gpuva *vma, int prot, - struct sg_table *sgt, int size) +msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) { struct msm_gem_vma *msm_vma = to_msm_vma(vma); struct msm_gem_vm *vm = to_msm_vm(vma->vm); @@ -62,8 +61,9 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot); - + ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, + vma->gem.offset, vma->va.range, + prot); if (ret) { msm_vma->mapped = false; } @@ -97,7 +97,7 @@ void msm_gem_vma_close(struct drm_gpuva *vma) /* Create a new vma and allocate an iova for it */ struct drm_gpuva * msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj, - u64 range_start, u64 range_end) + u64 offset, u64 range_start, u64 range_end) { struct msm_gem_vm *vm = to_msm_vm(_vm); struct drm_gpuvm_bo *vm_bo; @@ -109,6 +109,7 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj, return ERR_PTR(-ENOMEM); if (vm->managed) { + BUG_ON(offset != 0); spin_lock(&vm->mm_lock); ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, obj->size, PAGE_SIZE, 0, @@ -124,7 +125,7 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj, GEM_WARN_ON((range_end - range_start) > obj->size); - drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0); + drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset); vma->mapped = false; mutex_lock(&vm->vm_lock); diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index 2a94e82316f9..41cb629e25f3 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -113,7 +113,8 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova, } static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, - struct sg_table *sgt, size_t len, int prot) + struct sg_table *sgt, size_t off, size_t len, + int prot) { struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); struct io_pgtable_ops *ops = pagetable->pgtbl_ops; @@ -125,6 +126,19 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, size_t size = sg->length; phys_addr_t phys = sg_phys(sg); + if (!len) + break; + + if (size <= off) { + off -= size; + continue; + } + + phys += off; + size -= off; + size = min_t(size_t, size, len); + off = 0; + while (size) { size_t pgsize, count, mapped = 0; int ret; @@ -140,6 +154,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, phys += mapped; addr += mapped; size -= mapped; + len -= mapped; if (ret) { msm_iommu_pagetable_unmap(mmu, iova, addr - iova); @@ -359,11 +374,14 @@ static void msm_iommu_detach(struct msm_mmu *mmu) } static int msm_iommu_map(struct msm_mmu *mmu, uint64_t iova, - 
struct sg_table *sgt, size_t len, int prot) + struct sg_table *sgt, size_t off, size_t len, + int prot) { struct msm_iommu *iommu = to_msm_iommu(mmu); size_t ret; + WARN_ON(off != 0); + /* The arm-smmu driver expects the addresses to be sign extended */ if (iova & BIT_ULL(48)) iova |= GENMASK_ULL(63, 49); diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h index 88af4f490881..45f928671e3f 100644 --- a/drivers/gpu/drm/msm/msm_mmu.h +++ b/drivers/gpu/drm/msm/msm_mmu.h @@ -12,7 +12,7 @@ struct msm_mmu_funcs { void (*detach)(struct msm_mmu *mmu); int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt, - size_t len, int prot); + size_t off, size_t len, int prot); int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len); void (*destroy)(struct msm_mmu *mmu); void (*resume_translation)(struct msm_mmu *mmu);

From patchwork Sat Dec 7 16:15:21 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848113
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC 21/24] drm/msm: Add PRR support
Date: Sat, 7 Dec 2024 08:15:21 -0800
Message-ID: <20241207161651.410556-22-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

PRR (Partially Resident Region) is a bypass address which makes GPU writes go to /dev/null and reads return zero. This is used to implement Vulkan sparse residency. To support PRR/NULL mappings, we allocate a page to reserve a physical address which we know will not be used as part of a GEM object, and configure the SMMU to use this address for PRR/NULL mappings.
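For illustration, a hypothetical userspace sketch (not part of the series): probe for PRR support, then bind the PRR/NULL page over a VA range on a VM_BIND queue, roughly what a Vulkan driver would do for non-resident sparse pages. queue_id, iova and size are assumed to come from the caller:

#include <stdbool.h>
#include <stdint.h>
#include <xf86drm.h>
#include "msm_drm.h"

static bool has_prr(int fd)
{
	struct drm_msm_param req = {
		.pipe = MSM_PIPE_3D0,
		.param = MSM_PARAM_HAS_PRR,	/* added by this patch */
	};

	return !drmIoctl(fd, DRM_IOCTL_MSM_GET_PARAM, &req) && req.value;
}

static int map_null(int fd, uint32_t queue_id, uint64_t iova, uint64_t size)
{
	/* handle and bo_offset must be zero (MBZ) for MAP_NULL */
	struct drm_msm_gem_submit_bo_v2 bo = {
		.flags = MSM_SUBMIT_BO_OP_MAP_NULL,
		.address = iova,	/* VA range backed by the PRR page */
		.range = size,
	};
	struct drm_msm_gem_submit req = {
		.flags = MSM_PIPE_3D0,
		.queueid = queue_id,	/* a MSM_SUBMITQUEUE_VM_BIND queue */
		.nr_bos = 1,
		.bos = (uintptr_t)&bo,
		.bos_stride = sizeof(bo),
	};

	return drmIoctl(fd, DRM_IOCTL_MSM_GEM_SUBMIT, &req);
}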
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 10 ++++
 drivers/gpu/drm/msm/msm_iommu.c         | 62 ++++++++++++++++++++++++-
 include/uapi/drm/msm_drm.h              |  2 +
 3 files changed, 73 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 90848852ee50..140b4e54bc96 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -308,6 +308,13 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	return 0;
 }
 
+static bool
+adreno_smmu_has_prr(struct msm_gpu *gpu)
+{
+	struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(&gpu->pdev->dev);
+	return adreno_smmu && adreno_smmu->set_prr_addr;
+}
+
 int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len)
 {
@@ -392,6 +399,9 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 	case MSM_PARAM_UCHE_TRAP_BASE:
 		*value = adreno_gpu->uche_trap_base;
 		return 0;
+	case MSM_PARAM_HAS_PRR:
+		*value = adreno_smmu_has_prr(gpu);
+		return 0;
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}

diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 41cb629e25f3..bb65be95f7db 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -13,6 +13,7 @@ struct msm_iommu {
 	struct msm_mmu base;
 	struct iommu_domain *domain;
 	atomic_t pagetables;
+	struct page *prr_page;
 };
 
 #define to_msm_iommu(x) container_of(x, struct msm_iommu, base)
@@ -112,6 +113,36 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
 	return (size == 0) ? 0 : -EINVAL;
 }
 
+static int msm_iommu_pagetable_map_prr(struct msm_mmu *mmu, u64 iova, size_t len, int prot)
+{
+	struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
+	struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
+	struct msm_iommu *iommu = to_msm_iommu(pagetable->parent);
+	phys_addr_t phys = page_to_phys(iommu->prr_page);
+	u64 addr = iova;
+
+	while (len) {
+		size_t mapped = 0;
+		size_t size = PAGE_SIZE;
+		int ret;
+
+		ret = ops->map_pages(ops, addr, phys, size, 1, prot, GFP_KERNEL, &mapped);
+
+		/* map_pages could fail after mapping some of the pages,
+		 * so update the counters before error handling.
+		 */
+		addr += mapped;
+		len -= mapped;
+
+		if (ret) {
+			msm_iommu_pagetable_unmap(mmu, iova, addr - iova);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
 static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 				   struct sg_table *sgt, size_t off, size_t len,
 				   int prot)
@@ -122,6 +153,9 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
 	u64 addr = iova;
 	unsigned int i;
 
+	if (!sgt)
+		return msm_iommu_pagetable_map_prr(mmu, iova, len, prot);
+
 	for_each_sgtable_sg(sgt, sg, i) {
 		size_t size = sg->length;
 		phys_addr_t phys = sg_phys(sg);
@@ -177,9 +211,16 @@ static void msm_iommu_pagetable_destroy(struct msm_mmu *mmu)
 	 * If this is the last attached pagetable for the parent,
 	 * disable TTBR0 in the arm-smmu driver
 	 */
-	if (atomic_dec_return(&iommu->pagetables) == 0)
+	if (atomic_dec_return(&iommu->pagetables) == 0) {
 		adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, NULL);
 
+		if (adreno_smmu->set_prr_bit) {
+			adreno_smmu->set_prr_bit(adreno_smmu->cookie, false);
+			__free_page(iommu->prr_page);
+			iommu->prr_page = NULL;
+		}
+	}
+
 	free_io_pgtable_ops(pagetable->pgtbl_ops);
 	kfree(pagetable);
 }
@@ -314,6 +355,25 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
 		kfree(pagetable);
 		return ERR_PTR(ret);
 	}
+
+	BUG_ON(iommu->prr_page);
+	if (adreno_smmu->set_prr_bit) {
+		/*
+		 * We need a zero'd page for two reasons:
+		 *
+		 * 1) Reserve a known physical address to use when
+		 *    mapping NULL / sparsely resident regions
+		 * 2) Read back zero
+		 *
+		 * It appears the hw drops writes to the PRR region
+		 * on the floor, but reads actually return whatever
+		 * is in the PRR page.
+		 */
+		iommu->prr_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+		adreno_smmu->set_prr_addr(adreno_smmu->cookie,
+					  page_to_phys(iommu->prr_page));
+		adreno_smmu->set_prr_bit(adreno_smmu->cookie, true);
+	}
 }

diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index a7e48ee1dd95..48bc0374e2ae 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -115,6 +115,8 @@ struct drm_msm_timespec {
  *    ioctl will throw -EPIPE.
  */
 #define MSM_PARAM_EN_VM_BIND 0x15 /* WO, once */
+/* PRR (Partially Resident Region) is required for sparse residency: */
+#define MSM_PARAM_HAS_PRR 0x16 /* RO */
 
 /* For backwards compat.  The original support for preemption was based on
  * a single ring per priority level so # of priority levels equals the #
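(Aside, not part of the series: the PRR trick of aliasing one zeroed
physical page under every NULL-mapped GPU page has a simple CPU-side
analogue.  A runnable sketch that maps a single memfd page at two
virtual addresses, showing why reads of any sparse region return the
contents of the one PRR page; unlike the GPU path, the CPU of course
does not drop the writes:)

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long psz = sysconf(_SC_PAGESIZE);
		int fd = memfd_create("prr-demo", 0);
		unsigned char *a, *b;

		ftruncate(fd, psz);

		/* Two virtual addresses, one physical page: */
		a = mmap(NULL, psz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		b = mmap(NULL, psz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

		memset(a, 0x5a, 16);
		printf("read via alias: 0x%02x\n", b[0]);	/* prints 0x5a */

		return 0;
	}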
From patchwork Sat Dec 7 16:15:22 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848315
Subject: [RFC 22/24] drm/msm: Rename msm_gem_vma_purge() -> _unmap()
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Akhil P Oommen, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Date: Sat, 7 Dec 2024 08:15:22 -0800
Message-ID: <20241207161651.410556-23-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

This is a more descriptive name.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c     | 4 ++--
 drivers/gpu/drm/msm/msm_gem.h     | 2 +-
 drivers/gpu/drm/msm/msm_gem_vma.c | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index b6bad702e0c8..7dd881f8eaff 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -399,7 +399,7 @@ put_iova_spaces(struct drm_gem_object *obj, bool close)
 		struct drm_gpuva *vma, *vmatmp;
 
 		drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
-			msm_gem_vma_purge(vma);
+			msm_gem_vma_unmap(vma);
 			if (close)
 				msm_gem_vma_close(vma);
 		}
@@ -589,7 +589,7 @@ static int clear_iova(struct drm_gem_object *obj,
 	if (!vma)
 		return 0;
 
-	msm_gem_vma_purge(vma);
+	msm_gem_vma_unmap(vma);
 	msm_gem_vma_close(vma);
 
 	return 0;

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 71499ec60a5d..27ed5bde7893 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -141,7 +141,7 @@ struct msm_gem_vma {
 struct drm_gpuva *
 msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
 		u64 offset, u64 range_start, u64 range_end);
-void msm_gem_vma_purge(struct drm_gpuva *vma);
+void msm_gem_vma_unmap(struct drm_gpuva *vma);
 int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
 void msm_gem_vma_close(struct drm_gpuva *vma);

diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 035d29623519..8d79e123ed9a 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -21,7 +21,7 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 }
 
 /* Actually unmap memory for the vma */
-void msm_gem_vma_purge(struct drm_gpuva *vma)
+void msm_gem_vma_unmap(struct drm_gpuva *vma)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
 	struct msm_gem_vm *vm = to_msm_vm(vma->vm);

From patchwork Sat Dec 7 16:15:23 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848112
Subject: [RFC 23/24] drm/msm: Wire up gpuvm ops
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Akhil P Oommen, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Date: Sat, 7 Dec 2024 08:15:23 -0800
Message-ID: <20241207161651.410556-24-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

Hook up the map/remap/unmap ops to apply MAP/UNMAP operations.  The
MAP/UNMAP operations are split up by drm_gpuvm into a series of
map/remap/unmap ops; for example, an UNMAP operation which spans
multiple VMAs will get split up into a sequence of unmap (and possibly
remap) ops, each of which applies to a single VMA.
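(Illustration, not part of the patch: a small userspace model of the
REMAP split described above.  Unmapping a sub-range of one VMA kills
the original and leaves up to two remnants, which drm_gpuvm hands back
as op->remap.prev/next; those field names are the real ones, the rest
is just arithmetic:)

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	static void model_remap(uint64_t va, uint64_t range,
				uint64_t ustart, uint64_t uend)
	{
		uint64_t vend = va + range;

		if (ustart > va)	/* becomes op->remap.prev */
			printf("prev:  [%#" PRIx64 ", %#" PRIx64 ")\n", va, ustart);
		printf("unmap: [%#" PRIx64 ", %#" PRIx64 ")\n", ustart, uend);
		if (uend < vend)	/* becomes op->remap.next */
			printf("next:  [%#" PRIx64 ", %#" PRIx64 ")\n", uend, vend);
	}

	int main(void)
	{
		/* VMA [1 MiB, 5 MiB); unmap the middle [2 MiB, 3 MiB): */
		model_remap(0x100000, 0x400000, 0x200000, 0x300000);
		return 0;
	}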
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h     |  10 ++
 drivers/gpu/drm/msm/msm_gem_vma.c | 269 ++++++++++++++++++++++++++++--
 2 files changed, 263 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 27ed5bde7893..5655eb026fba 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -75,6 +75,16 @@ struct msm_gem_vm {
 	/** @vm_lock: protects gpuvm insert/remove/traverse */
 	struct mutex vm_lock;
 
+	/**
+	 * @op_lock:
+	 *
+	 * Serializes VM operations.  Typically operations are serialized
+	 * by virtue of running on the VM_BIND queue, but in the cleanup
+	 * path (or if multiple VM_BIND queues) the @op_lock provides the
+	 * needed serialization.
+	 */
+	struct mutex op_lock;
+
 	/** @mmu: The mmu object which manages the pgtables */
 	struct msm_mmu *mmu;

diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 8d79e123ed9a..00d70784da22 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -8,6 +8,8 @@
 #include "msm_gem.h"
 #include "msm_mmu.h"
 
+#define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__)
+
 static void
 msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 {
@@ -20,18 +22,29 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
 	kfree(vm);
 }
 
+static void
+msm_gem_vma_unmap_range(struct drm_gpuva *vma, uint64_t unmap_start, uint64_t unmap_range)
+{
+	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
+
+	vm_dbg("%p:%p: %016llx %016llx", vma->vm, vma, unmap_start, unmap_start + unmap_range);
+
+	if (vma->gem.obj)
+		msm_gem_assert_locked(vma->gem.obj);
+
+	vm->mmu->funcs->unmap(vm->mmu, unmap_start, unmap_range);
+}
+
 /* Actually unmap memory for the vma */
 void msm_gem_vma_unmap(struct drm_gpuva *vma)
 {
 	struct msm_gem_vma *msm_vma = to_msm_vma(vma);
-	struct msm_gem_vm *vm = to_msm_vm(vma->vm);
-	unsigned size = vma->va.range;
 
 	/* Don't do anything if the memory isn't mapped */
 	if (!msm_vma->mapped)
 		return;
 
-	vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size);
+	msm_gem_vma_unmap_range(vma, vma->va.addr, vma->va.range);
 
 	msm_vma->mapped = false;
 }
@@ -52,6 +65,11 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
 
 	msm_vma->mapped = true;
 
+	vm_dbg("%p: %016llx %016llx", vma, vma->va.addr, vma->va.range);
+
+	if (vma->gem.obj)
+		msm_gem_assert_locked(vma->gem.obj);
+
 	/*
 	 * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold
 	 * a lock across map/unmap which is also used in the job_run()
@@ -79,17 +97,16 @@ void msm_gem_vma_close(struct drm_gpuva *vma)
 
 	GEM_WARN_ON(msm_vma->mapped);
 
-	spin_lock(&vm->mm_lock);
-	if (vma->va.addr && vm->managed)
+	if (vma->va.addr && vm->managed) {
+		spin_lock(&vm->mm_lock);
 		drm_mm_remove_node(&msm_vma->node);
-	spin_unlock(&vm->mm_lock);
+		spin_unlock(&vm->mm_lock);
+	}
 
-	dma_resv_lock(drm_gpuvm_resv(vma->vm), NULL);
 	mutex_lock(&vm->vm_lock);
 	drm_gpuva_remove(vma);
 	drm_gpuva_unlink(vma);
 	mutex_unlock(&vm->vm_lock);
-	dma_resv_unlock(drm_gpuvm_resv(vma->vm));
 
 	kfree(vma);
 }
@@ -110,6 +127,7 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
 
 	if (vm->managed) {
 		BUG_ON(offset != 0);
+		BUG_ON(!obj);  /* NULL mappings not valid for kernel managed VM */
 		spin_lock(&vm->mm_lock);
 		ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
 						  obj->size, PAGE_SIZE, 0,
@@ -123,7 +141,8 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
 		range_end = range_start + obj->size;
 	}
 
-	GEM_WARN_ON((range_end - range_start) > obj->size);
+	if (obj)
+		GEM_WARN_ON((range_end - range_start) > obj->size);
 
 	drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
 	vma->mapped = false;
@@ -134,6 +153,9 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
 	if (ret)
 		goto err_free_range;
 
+	if (!obj)
+		return &vma->base;
+
 	vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj);
 	if (IS_ERR(vm_bo)) {
 		ret = PTR_ERR(vm_bo);
@@ -159,38 +181,234 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
 	return ERR_PTR(ret);
 }
 
+static int
+msm_gem_vm_bo_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
+{
+	// TODO
+	pr_err("%s:%d\n", __func__, __LINE__);
+	return 0;
+}
+
+static struct drm_gpuva *
+vma_from_op(struct drm_gpuvm *vm, struct drm_gpuva_op_map *op)
+{
+	return msm_gem_vma_new(vm, op->gem.obj, op->gem.offset, op->va.addr,
+			       op->va.addr + op->va.range);
+}
+
+/*
+ * In a few places, we have to deal with map/unmap of potentially NULL (PRR)
+ * mappings.  The cond_lock()/cond_unlock() helpers simplify that.
+ */
+
+static void
+cond_lock(struct drm_gem_object *obj)
+{
+	if (!obj)
+		return;
+
+	/*
+	 * Hold a ref while we have the obj locked, so drm_gpuvm doesn't
+	 * manage to drop the last ref to the obj while it is locked:
+	 */
+	drm_gem_object_get(obj);
+	msm_gem_lock(obj);
+}
+
+static void
+cond_unlock(struct drm_gem_object *obj)
+{
+	if (!obj)
+		return;
+
+	msm_gem_unlock(obj);
+	/* Drop the ref obtained in cond_lock(): */
+	drm_gem_object_put(obj);
+}
+
+static int
+msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *priv)
+{
+	struct drm_gem_object *obj = op->map.gem.obj;
+	struct drm_gpuvm *vm = priv;
+	struct drm_gpuva *vma;
+	struct sg_table *sgt;
+	unsigned prot;
+	int ret;
+
+	cond_lock(obj);
+	vma = vma_from_op(vm, &op->map);
+
+	vm_dbg("%p:%p: %016llx %016llx", vma->vm, vma, vma->va.addr, vma->va.range);
+
+	if (obj) {
+		sgt = to_msm_bo(obj)->sgt;
+		prot = msm_gem_prot(obj);
+	} else {
+		sgt = NULL;
+		prot = IOMMU_READ | IOMMU_WRITE;
+	}
+
+	if (WARN_ON(IS_ERR(vma))) {
+		ret = PTR_ERR(vma);
+		goto out_unlock;
+	}
+
+	ret = msm_gem_vma_map(vma, prot, sgt);
+
+out_unlock:
+	cond_unlock(obj);
+
+	return ret;
+}
+
+static int
+msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *priv)
+{
+	struct drm_gpuvm *vm = priv;
+	struct drm_gpuva *orig_vma = op->remap.unmap->va;
+	struct drm_gpuva *prev_vma = NULL, *next_vma = NULL;
+	struct drm_gem_object *obj = orig_vma->gem.obj;
+	uint64_t unmap_start, unmap_range;
+
+	vm_dbg("orig_vma: %p:%p: %016llx %016llx", vm, orig_vma, orig_vma->va.addr, orig_vma->va.range);
+
+	drm_gpuva_op_remap_to_unmap_range(&op->remap, &unmap_start, &unmap_range);
+
+	cond_lock(obj);
+	msm_gem_vma_unmap_range(op->remap.unmap->va, unmap_start, unmap_range);
+
+	/*
+	 * Part of this GEM obj is still mapped, but we're going to kill the
+	 * existing VMA and replace it with one or two new ones (ie. two if
+	 * the unmapped range is in the middle of the existing (unmap) VMA).
+	 * So just set the state to unmapped:
+	 */
+	to_msm_vma(orig_vma)->mapped = false;
+
+	msm_gem_vma_close(orig_vma);
+
+	if (op->remap.prev) {
+		prev_vma = vma_from_op(vm, op->remap.prev);
+		if (WARN_ON(IS_ERR(prev_vma)))
+			return PTR_ERR(prev_vma);
+		vm_dbg("prev_vma: %p:%p: %016llx %016llx", vm, prev_vma, prev_vma->va.addr, prev_vma->va.range);
+		to_msm_vma(prev_vma)->mapped = true;
+	}
+
+	if (op->remap.next) {
+		next_vma = vma_from_op(vm, op->remap.next);
+		if (WARN_ON(IS_ERR(next_vma)))
+			return PTR_ERR(next_vma);
+		vm_dbg("next_vma: %p:%p: %016llx %016llx", vm, next_vma, next_vma->va.addr, next_vma->va.range);
+		to_msm_vma(next_vma)->mapped = true;
+	}
+
+	cond_unlock(obj);
+
+	return 0;
+}
+
+static int
+msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *priv)
+{
+	struct drm_gpuva *vma = op->unmap.va;
+	struct drm_gem_object *obj = vma->gem.obj;
+
+	vm_dbg("%p:%p: %016llx %016llx", vma->vm, vma, vma->va.addr, vma->va.range);
+
+	cond_lock(obj);
+	msm_gem_vma_unmap(vma);
+	msm_gem_vma_close(vma);
+	cond_unlock(obj);
+
+	return 0;
+}
+
 static const struct drm_gpuvm_ops msm_gpuvm_ops = {
 	.vm_free = msm_gem_vm_free,
+	.vm_bo_validate = msm_gem_vm_bo_validate,
+	.sm_step_map = msm_gem_vm_sm_step_map,
+	.sm_step_remap = msm_gem_vm_sm_step_remap,
+	.sm_step_unmap = msm_gem_vm_sm_step_unmap,
 };
 
+static int
+run_and_free_steps(struct drm_gpuvm *vm, struct drm_gpuva_ops *ops)
+{
+	struct drm_gpuva_op *op;
+	int ret = 0;
+
+	if (IS_ERR(ops))
+		return PTR_ERR(ops);
+
+	drm_gpuva_for_each_op (op, ops) {
+		switch (op->op) {
+		case DRM_GPUVA_OP_MAP:
+			ret = msm_gem_vm_sm_step_map(op, vm);
+			break;
+		case DRM_GPUVA_OP_REMAP:
+			ret = msm_gem_vm_sm_step_remap(op, vm);
+			break;
+		case DRM_GPUVA_OP_UNMAP:
+			ret = msm_gem_vm_sm_step_unmap(op, vm);
+			break;
+		default:
+			ret = -EINVAL;
+		}
+
+		if (ret)
+			break;
+	}
+
+	drm_gpuva_ops_free(vm, ops);
+
+	return ret;
+}
+
 static int
 run_bo_op(struct msm_gem_submit *submit, const struct msm_gem_submit_bo *bo)
 {
-	unsigned op = bo->flags & MSM_SUBMIT_BO_OP_MASK;
+	struct msm_gem_vm *vm = to_msm_vm(submit->vm);
+	struct drm_gpuva_ops *ops;
 
-	switch (op) {
+	mutex_lock(&vm->vm_lock);
+	switch (bo->flags & MSM_SUBMIT_BO_OP_MASK) {
 	case MSM_SUBMIT_BO_OP_MAP:
 	case MSM_SUBMIT_BO_OP_MAP_NULL:
-		return drm_gpuvm_sm_map(submit->vm, submit->vm, bo->iova,
-					bo->range, bo->obj, bo->bo_offset);
+		vm_dbg("MAP: %p: %016llx %016llx", vm, bo->iova, bo->range);
+		ops = drm_gpuvm_sm_map_ops_create(submit->vm, bo->iova,
+						  bo->range, bo->obj,
+						  bo->bo_offset);
 		break;
 	case MSM_SUBMIT_BO_OP_UNMAP:
-		return drm_gpuvm_sm_unmap(submit->vm, submit->vm, bo->iova,
-					  bo->bo_offset);
+		vm_dbg("UNMAP: %p: %016llx %016llx", vm, bo->iova, bo->range);
+		ops = drm_gpuvm_sm_unmap_ops_create(submit->vm, bo->iova,
+						    bo->bo_offset);
+		break;
+	default:
+		ops = ERR_PTR(-EINVAL);
+		break;
 	}
+	mutex_unlock(&vm->vm_lock);
 
-	return -EINVAL;
+	return run_and_free_steps(submit->vm, ops);
 }
 
 static struct dma_fence *
 msm_vma_job_run(struct drm_sched_job *job)
 {
 	struct msm_gem_submit *submit = to_msm_submit(job);
+	struct msm_gem_vm *vm = to_msm_vm(submit->vm);
+
+	mutex_lock(&vm->op_lock);
 
 	for (unsigned i = 0; i < submit->nr_bos; i++) {
 		int ret = run_bo_op(submit, &submit->bos[i]);
 		if (ret) {
 			to_msm_vm(submit->vm)->unusable = true;
+			mutex_unlock(&vm->op_lock);
 			return ERR_PTR(ret);
 		}
 	}
@@ -209,6 +427,8 @@ msm_vma_job_run(struct drm_sched_job *job)
 		msm_gem_unlock(obj);
 	}
 
+	mutex_unlock(&vm->op_lock);
+
 	/* VM_BIND ops are synchronous, so no fence to wait on: */
 	return NULL;
 }
@@ -276,6 +496,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 	spin_lock_init(&vm->mm_lock);
 	mutex_init(&vm->vm_lock);
+	mutex_init(&vm->op_lock);
 
 	vm->mmu = mmu;
 	vm->managed = managed;
@@ -305,6 +526,7 @@ void
 msm_gem_vm_close(struct drm_gpuvm *_vm)
 {
 	struct msm_gem_vm *vm = to_msm_vm(_vm);
+	struct drm_gpuva_ops *ops;
 
 	/*
 	 * For kernel managed VMs, the VMAs are torn down when the handle is
@@ -316,4 +538,19 @@ msm_gem_vm_close(struct drm_gpuvm *_vm)
 	/* Kill the scheduler now, so we aren't racing with it for cleanup: */
 	drm_sched_stop(&vm->sched, NULL);
 	drm_sched_fini(&vm->sched);
+
+	/* Serialize against vm scheduler thread: */
+	mutex_lock(&vm->op_lock);
+
+	/*
+	 * To avoid nested locking problems, while still holding the lock
+	 * during the necessary vm traversal, generate a list of unmap ops:
+	 */
+	mutex_lock(&vm->vm_lock);
+	ops = drm_gpuvm_sm_unmap_ops_create(_vm, _vm->mm_start, _vm->mm_range);
+	mutex_unlock(&vm->vm_lock);
+
+	run_and_free_steps(_vm, ops);
+
+	mutex_unlock(&vm->op_lock);
 }

From patchwork Sat Dec 7 16:15:24 2024
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 848314
Subject: [RFC 24/24] drm/msm: Bump UAPI version
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Akhil P Oommen, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Date: Sat, 7 Dec 2024 08:15:24 -0800
Message-ID: <20241207161651.410556-25-robdclark@gmail.com>
In-Reply-To: <20241207161651.410556-1-robdclark@gmail.com>

From: Rob Clark

Bump version to signal to userspace that VM_BIND is supported.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index b31ec287c600..dc00781a099d 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -41,9 +41,10 @@
  * - 1.10.0 - Add MSM_SUBMIT_BO_NO_IMPLICIT
  * - 1.11.0 - Add wait boost (MSM_WAIT_FENCE_BOOST, MSM_PREP_BOOST)
  * - 1.12.0 - Add MSM_INFO_SET_METADATA and MSM_INFO_GET_METADATA
+ * - 1.13.0 - Add VM_BIND
  */
 #define MSM_VERSION_MAJOR	1
-#define MSM_VERSION_MINOR	12
+#define MSM_VERSION_MINOR	13
 #define MSM_VERSION_PATCHLEVEL	0
 
 bool dumpstate;
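(Not part of the patch: userspace would typically gate its use of
VM_BIND on this bump, roughly as below using libdrm's drmGetVersion().
Note that, per the uapi context shown in patch 21, a context must also
opt in via the write-once MSM_PARAM_EN_VM_BIND param before using the
new interface:)

	#include <xf86drm.h>

	static int msm_supports_vm_bind(int fd)
	{
		drmVersionPtr v = drmGetVersion(fd);
		int ok;

		if (!v)
			return 0;

		ok = (v->version_major > 1) ||
		     (v->version_major == 1 && v->version_minor >= 13);
		drmFreeVersion(v);

		return ok;
	}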